Why do people host LLMs at home when training their own model on the same amount of data from the internet will never be even remotely as efficient as sending a paid prompt to some high-quality official model?

inb4 privacy concerns or a proof of concept

Those are off the table. I want someone to prove their LLM can be as insightful and accurate as a paid one. I don't care about anything other than the quality of the generated answers.

  • 3abas@lemm.ee · 11 hours ago

    No, they said they “ruled out” privacy for “obvious reasons”.

    An obviously mockable statement.