*privacy respecting. We all know this is meant for data harvesting.
The Fediverse is worse than Reddit. Mod abuse, admin abuse, disinformation, and people simping for literal terrorists.
“Hey Google! Open Steam!” *opens a browser with a google search for “open steam”.*
Here’s some ads for blood bags! Also, hot VILFs near you!
I have Philips bulbs and while they still work (though I don’t use them much), they buzz pretty loudly to my ears.
Doesn’t even matter since it’s a Chinese browser. Anyway, the only way to potentially save the www is to massively take market share away from Chromium-based browsers. And unfortunately I doubt this will happen. Since last year, Chrome’s market share went up while Firefox’s went down. People are clearly too stupid to make their own fucking decisions.
Granted, TikTok is worse than cancer, so technically speaking this is of course a “good” step. But I have such a distaste (or rather hatred) for this type of content (same with Shorts) that I still cannot feel good about it.
The article is about India.
Fuck the Indian regime.
Really? They’re like the most common ones where I live and you have to be careful not to step on them because they tend to die trying to cross the streets.
You’ll just get a shitty hallucinating search engine assistant and you’ll be happy about it!
What did you expect to happen after Diablo 4? lol
I’ve definitely heard “this shit is fire”, but I’m not sure I’ve ever heard literally “cold” used instead of “cool”.
Makes me want to buy Denuvo games even less, but I’ve already learned my lesson. Shoutout to the only worthwhile Steam curator: https://store.steampowered.com/curator/26095454-Denuvo-Games/
There are plenty of free ways to use LLMs, including running the models locally on your own computer instead of through an online service; they vary greatly in quality and privacy. There are some limited free online ones too, but imo they’re all shit and extremely stupid, in the literal sense: you get better results even with a small model on your own computer. They can be fun, especially when they work well, but the magic kind of goes away once you understand how they actually work. That also makes all their little “mistakes” very obvious, which kills the immersion and, with it, the fun.
A good chat can indeed feel pretty good if you’re lonely, but you have to understand that these chats are not real, and that goes not just for the potentially bad ones but even for the good ones. An LLM is not a replacement for real people; nothing an LLM outputs is real. And yes, if you have issues with addiction, then you may want to keep your distance. I remember how people got addicted to regular chat rooms back in the early days of the world wide web; now imagine those people with a machine that can roleplay any scenario they want. If you don’t know your limits, that can be very bad indeed, even apart from taking the chats too seriously.
I can generally only advise not taking them seriously. They’re tools for entertainment, toys. Nothing more, nothing less.
To be fair, they mention that the chats were also “hypersexualized” - but of course not without mentioning that the bots would basically be pedos if they were actually real adult humans. lol
That’s why I consider articles like this part of the “AI” hysteria. They completely gloss over this fact, only mention it once at the beginning with no further details about where the gun came from, and instead shove the blame onto the LLM.
The bots pose as whatever the creator wants them to pose as. People can create character cards for various platforms, such as this one, and the LLM will try to behave according to the contextualized description in the provided character card. Some people create “therapists”, and so the LLM will write as if it were a therapist. And unless the character card specifically says that they’re a chatbot / LLM / computer / “AI” / whatever, they won’t say otherwise, because they don’t have any sort of self-awareness of what they actually are; they just do text prediction based on the input they’ve been fed. It’s not really something character.ai or any other LLM service or creator can change, because this is fundamentally how LLMs work.
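To illustrate the point: a character card is basically just text that gets stuffed into the model’s context before the chat. Here’s a minimal sketch of that idea (the card fields, prompt template, and names here are made up for illustration; real platforms like character.ai use their own internal formats):

```python
# Minimal sketch: a "character card" is just text prepended to the prompt.
# The card structure and template below are hypothetical, not any real
# platform's actual format.

def build_prompt(card: dict, history: list[str], user_msg: str) -> str:
    """Prepend the character description to the chat history.

    The model never "knows" it is an LLM; it only predicts a plausible
    continuation of whatever text this function feeds it.
    """
    system = (
        f"You are {card['name']}. {card['description']} "
        "Stay in character at all times."
    )
    lines = [system, *history, f"User: {user_msg}", f"{card['name']}:"]
    return "\n".join(lines)

# Nothing in this card says "chatbot", so the model won't say it either.
card = {
    "name": "Dr. Hart",
    "description": "A calm, empathetic therapist.",
}
prompt = build_prompt(card, [], "I feel lonely lately.")
print(prompt)
```

The “therapist” exists only inside that block of text; swap the description and the same model roleplays someone else entirely, which is why this behavior can’t really be patched out per character.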
Imagine it was actually more open source and privacy focused. Yes, it would likely learn at a slower pace but at least it would be something more for the people rather than the big corpos.