I worked on one where the columns were named databasename_tablename_column
They said it makes things “less confusing”
It’s less about the calculations and more about memory bandwidth. To generate a token you need to go through all the model data, and that’s usually many, many gigabytes. So the time it takes to read through it in memory is usually longer than the compute time. GPUs have gigabytes of RAM that’s many times faster than the CPU’s RAM, which is the main reason they’re faster for LLMs.
Most TPUs don’t have much RAM, especially the cheap ones.
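To put rough numbers on the bandwidth argument, here’s a back-of-envelope sketch. The model size and bandwidth figures are illustrative assumptions (a ~5 GB quantized model, typical DDR4 vs. high-end GPU VRAM), not benchmarks:

```python
# Token generation is usually memory-bandwidth bound: producing one token
# requires streaming all model weights from memory once.

def tokens_per_second(model_size_gb: float, bandwidth_gb_s: float) -> float:
    """Upper bound on decode speed if each token reads all weights once."""
    return bandwidth_gb_s / model_size_gb

# An ~8B-parameter model at 4-bit quantization is roughly 5 GB of weights.
cpu = tokens_per_second(5, 50)    # dual-channel DDR4: ~50 GB/s (assumption)
gpu = tokens_per_second(5, 900)   # high-end GPU VRAM: ~900 GB/s (assumption)
print(f"CPU: ~{cpu:.0f} tok/s, GPU: ~{gpu:.0f} tok/s")
```

That ~18x gap between the two bandwidth figures is why the same model feels sluggish on CPU and snappy on GPU, even though the arithmetic is identical.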
Reasonably smart… that would preferably be a 70B model, but maybe Phi-3 14B or Llama 3 8B could work. They’re rather impressive for their size.
For just the model, if one of the small ones works, you probably need 6+ GB of VRAM. For a 70B model you need roughly 40 GB.
And then there’s the context. Most models are optimized for around 4k to 8k tokens. One token is roughly 3 to 4 characters, so a word averages a bit more than one token. The VRAM needed for the context varies a bit, but is not trivial. For 4k I’d say roughly half a gig to a gig of VRAM.
As you go to higher context sizes, the VRAM requirement for context starts to eclipse the model’s VRAM cost, and you’ll need specialized models to handle that big a context without going off the rails.
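A rough way to sanity-check those numbers is weights plus KV cache. The constants here are ballpark assumptions for illustration only (4-bit weights, fp16 cache, and a GQA-style reduced KV dimension like Llama 3 8B’s):

```python
# Rough VRAM estimate: model weights + KV cache for the context.

def weights_gb(params_billion: float, bits_per_weight: int = 4) -> float:
    """Weight memory: 1B params at 8 bits = 1 GB."""
    return params_billion * bits_per_weight / 8

def kv_cache_gb(context_tokens: int, layers: int, kv_dim: int,
                bytes_per_value: int = 2) -> float:
    """K and V tensors stored per layer, per token, at fp16 (2 bytes)."""
    return 2 * layers * kv_dim * context_tokens * bytes_per_value / 1e9

print(weights_gb(8))                 # ~4 GB for an 8B model's weights
print(weights_gb(70))                # ~35 GB for a 70B model's weights
print(kv_cache_gb(4096, 32, 1024))   # ~0.5 GB cache at 4k context
```

Since the cache grows linearly with context length, a 32k context on the same assumed model is already ~4 GB — on par with the weights themselves, which is the eclipse effect described above.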
So no, you’re not loading all the notes directly, and you won’t have a smart model.
For your hardware and use case… try Phi-3 mini with a RAG system as a start.
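To illustrate the RAG idea without committing to any particular library: retrieve only the few notes relevant to the question, and put just those into the prompt instead of loading everything. Here word overlap stands in for a real embedding model, and all notes and names are made up:

```python
# Toy RAG retrieval: score notes by word overlap with the question,
# keep the top-k, and build a small prompt from them.

def retrieve(question: str, notes: list[str], k: int = 2) -> list[str]:
    q_words = set(question.lower().split())
    scored = sorted(notes,
                    key=lambda n: len(q_words & set(n.lower().split())),
                    reverse=True)
    return scored[:k]

notes = [
    "The backup server runs nightly at 02:00.",
    "Grandma's cookie recipe needs 200g butter.",
    "VPN credentials rotate every 90 days.",
]

question = "when does the backup run?"
context = retrieve(question, notes)
prompt = "Answer using these notes:\n" + "\n".join(context) + "\n\nQ: " + question
print(prompt)
```

This is why the model never needs all your notes in its context window at once: the 4k-token budget only has to hold the question plus the handful of retrieved snippets.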
You realise there is no algorithm behind Lemmy, right?
Of course there is. Even “sort by newest” is an algorithm, and the default view is more complicated than that.
You aren’t being shoved controversial polarizing content subliminally here.
Neither are you on TikTok, unless you actively go looking for it
I’ve seen Skype do that. It was a weird folder name, but gallery found it and displayed the images.
Which is how I noticed it in the first place
I wonder what BPM Moby’s “Thousand” starts at… Maybe it can reach both limits
Who are you?
What do you want?
Also, I think good and bad is a bit fluid there. It’s just people with different agendas. Well, except Emperor Cartagia. And perhaps Bester.
Hah as if. In the early 00s the mods were in maybe once or twice a day and there was tons of CP being posted.
Worst I saw was a little girl chopped into pieces, and a many-page discussion / argument over whether it should be sorted as CP or necro porn. That was the old 4chan.
Even for 4chan that’s fucked up.
Oh, sweet summer child…
On occasion their strategy has been “if we send in enough people, they’ll eventually run out of bullets”
They out-Zapp Brannigan’ed Zapp Brannigan. That should terrify you on multiple levels
Koboldcpp
I’m waiting for them to find a better balance of durability, weight/bulkiness, and hardware like cameras.
They’re still too big and bulky for me, the other components are usually a bit behind, and the screen durability still seems a bit too eh.
Which is to say, I’m interested in one, but they’re not there yet for me.
I gotta ask… were you around and actively using XMPP around that time?
Because I was. And XMPP struggling had nothing to do with Google
Goatse, for the connoisseur
Hilarious for a system whose main point / feature is photo backup