I have experience running servers, but I’d like to know whether this is feasible: I just need a private LLM with roughly GPT-3.5-level quality running.

  • MasterNerd@lemm.ee

    Look into Ollama. It shouldn’t be an issue if you stick to 7B-parameter models
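
    Once Ollama is running, you can hit its local REST API from pretty much anything. A minimal sketch, assuming the default port (11434) and a 7B model you’ve already pulled (the "llama3" tag here is just a placeholder for whichever model you choose):

    ```python
    # Query a local Ollama server over its REST API (standard library only).
    # Assumes `ollama serve` is running and the model tag below has been pulled.
    import json
    import urllib.request

    payload = {
        "model": "llama3",  # placeholder; use whichever 7B model you pulled
        "prompt": "Explain in one sentence why quantization matters for local LLMs.",
        "stream": False,    # single JSON response instead of a token stream
    }

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
    ```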

    • TheBigBrother@lemmy.worldOP

      Yeah, I did see something related to what you mentioned and I was quite interested. What about quantized models?

      • entropicdrift@lemmy.sdf.org

        A quantized model with more parameters is generally better than a floating-point model with fewer parameters. If you can squeeze a 14B-parameter model down to 4-bit integer quantization, it’ll still generally outperform the 16-bit floating-point 7B-parameter equivalent.
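
        To put rough numbers on that claim (weights only, ignoring KV cache and runtime overhead), the back-of-the-envelope memory math looks like this:

        ```python
        # Approximate weight memory: parameter count x bits per weight, converted to GB.
        def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
            return n_params * bits_per_weight / 8 / 1e9

        print(f"14B model @ 4-bit int : {weight_memory_gb(14e9, 4):.0f} GB")   # ~7 GB
        print(f" 7B model @ 16-bit fp : {weight_memory_gb(7e9, 16):.0f} GB")   # ~14 GB
        ```

        So the 4-bit 14B model needs roughly half the memory of the FP16 7B model while keeping twice the parameters.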

        • TheBigBrother@lemmy.worldOP

          Interesting information, mate. I’m reading up on the subject, thanks for the help 👍👍

      • MasterNerd@lemm.ee

        I don’t have any experience with them, honestly, so I can’t help you there.