I am a teacher and I have a LOT of different literature material that I wish to study and play around with.

I wish to have a self-hosted, reasonably smart LLM into which I can feed all the textual material I have generated over the years. I would be interested to see whether such a model can answer some of the subjective course questions I have set on my exams, or write short paragraphs about the topics I teach.

In terms of hardware, I have an old Lenovo laptop with an NVIDIA graphics card.

P.S.: I am not technically very experienced. I run Linux and can do very basic stuff. I've never self-hosted anything other than LibreTranslate and a Pi-hole!

    • Sekki@lemmy.ml · 1 month ago

      While this will get you a self-hosted LLM, you can't simply feed raw documents to it like that. As far as I know, there are two possibilities:

      1. Take an existing model and fine-tune it on your literature data. How well this works will depend on how much "a lot" actually means when it comes to the literature.

      2. Train a model yourself from scratch using only your literature data.

      Both approaches will require some programming knowledge and an understanding of how an LLM works. Additionally, you will need to convert the unstructured literature into a structured format that can be used to train or fine-tune the model.
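      That data-preparation step can be sketched roughly like this: turning exam questions and model answers into a JSONL file, one record per line, using the common "prompt"/"completion" layout that many fine-tuning tools accept. The field names and the sample pair here are assumptions for illustration; check what your specific tool expects.

      ```python
      import json

      def pairs_to_jsonl(pairs, out_path):
          """Write (question, answer) pairs as JSONL: one JSON object per line.

          The "prompt"/"completion" keys are a widely used convention, not a
          universal standard -- adjust them to match your fine-tuning tool.
          """
          with open(out_path, "w", encoding="utf-8") as f:
              for question, answer in pairs:
                  record = {"prompt": question, "completion": answer}
                  # ensure_ascii=False keeps accented characters readable
                  f.write(json.dumps(record, ensure_ascii=False) + "\n")

      # Hypothetical example: one exam question with a model answer.
      pairs = [
          ("Discuss the role of the narrator in the novel we studied.",
           "The narrator frames every event through a limited perspective, ..."),
      ]
      pairs_to_jsonl(pairs, "train.jsonl")
      ```

      The real work is collecting and cleaning the pairs; once they are in a line-per-record format like this, most training scripts can load them directly.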

      I'm just a CS student, so not an expert in this regard ;)

      • s38b35M5@lemmy.world · 1 month ago

        Thanks for this comment.

        My main drive for self hosting is to escape data harvesting and arbitrary query limits, and to say, “I did this.” I fully expect it to be painful and not very fulfilling…