![](https://feddit.de/pictrs/image/5a4eccb0-3a4b-4a53-b300-5533ac904d16.jpeg)
![](https://fry.gs/pictrs/image/c6832070-8625-4688-b9e5-5d519541e092.png)
Companies and their legal departments do care though, and that’s where the big money lies for Microsoft when it comes to Windows
Training and fine-tuning happen offline for LLMs; it's not like they continuously learn by interacting with users. Sure, the company behind one might record conversations and use them to further tune the model, but these models don't inherently need that
Happened with Lone Echo for me. It’s a VR game where you’re in a space station, and you move around in zero g by just grabbing your surroundings and pulling yourself along or pushing yourself off of them. I started reflexively attempting to do that in real life for a bit after longer sessions
HTTP is not Google-controlled, you don’t need to replace that in order to build something new without Google
I agree with your first point, but as for the latter two:
- GPS data that could be stored and extracted from the dealership and sold or given to the government, insurance companies, and law enforcement.
- GPS data that could be sent in real time if the car has a cellular connection or hijacks the cellular connection in your phone when you connect it to the car.
Why do you think this is more likely to happen with this new regulation, when most modern cars already have a functioning GPS module for navigation and cellular connection for software updates?
Yeah, it certainly still feels icky, especially since a lot of those materials in all likelihood will still have ended up in the model without the original photo subjects knowing about it or consenting. But that’s at least much better than having a model straight up trained on CSAM, and at least hypothetically, there is a way to make this process entirely “clean”.
There are legit, non-CSAM types of images that would still make these changes apparent, though. Not every picture of a naked child is CSAM. Family photos from the beach, photos in biology textbooks, even comic-style illustrated children’s books will allow inferences about what real humans look like. So no, I don’t think that an image generation model has to be trained on any CSAM in order to be able to produce convincing CSAM.
There’s also this part:
But Johansson’s public statement describes how they tried to schmooze her: they approached her last fall and were given the FO, contacted her agent two days before launch to ask for reconsideration, launched it before they got a response, then yanked it when her lawyers asked them how they made the voice.
Which is still not an admission of guilt, but seems very shady at the very least, if it’s actually what happened.
Except Discord is not an ads-based platform? I’ve never seen a third-party ad on there
Maybe we should clarify what a slur is? Because to my knowledge, a slur is a term that has such negative connotations that it is considered offensive and discriminatory against a certain group of people in itself, without any additional context. You simply do not use it unless you want to insult or offend someone from that group. If a term is only offensive based on how it’s used, it’s just a regular insult, not a slur.
So, “can be used as a slur” is not a thing. A word is either a slur, or it isn’t. Neither trans nor cis is a slur at the moment. I’ve never seen trans used as an insult before. And even cis is almost never meant as a direct insult, merely as a reminder that someone is talking about things they have no lived experience with and should probably check their privilege. Yes, that can be done in a demeaning way, but the goal there is not to hurt you, it’s to get you to piss off. It’s an act of self-protection. Nobody is seeking cis people out and starting to call them names unless they insert themselves into trans spaces and start talking shit about trans issues. If you’re doing that, and getting told off insults you or hurts your feelings, then, frankly, that’s a you problem.
…yeah, it is. What are you implying?
The prefix cis- is Latin and means “on this side of”.
https://en.wikipedia.org/wiki/Cisgender
Just as “trans-” means “on the other side of”. It’s literally just the opposite of trans.
It’s not quite that simple, though. GDPR is only concerned with personally identifiable information. Answers and comments on SO rarely contain that kind of information as long as you delete the username on them, so it’s not technically against GDPR if you keep the contents.
Well, you’re also not going around holding written pieces of text up to someone’s face to talk to them in real life, yet that’s how we’re communicating here, and you don’t seem to find that weird. It doesn’t need to be the same to be a helpful analogy. Sounds from your mouth -> written text, facial expressions and gestures -> emoji/emoticons. There’s research demonstrating that people actually do parse and react to emojis and emoticons in the same way they would to real facial expressions.
So do you also expect everyone over 12 to always keep a pokerface in real life conversations, or is this rule confined to virtual spaces for some arbitrary reason?
And science fiction somehow can’t be fascist?
I was thinking of an approach based on cryptographic signatures. If all images that come from a certain AI model are signed with a digital certificate, you can tamper with metadata all you want, but you’re not gonna be able to produce the correct signature to add to an image unless you have access to the certificate’s private key. This technology has been around for ages, is used in every web browser, and would be pretty simple to implement.
The only weak point with this approach would be that it relies on the private key not being publicly accessible, which makes this a lot harder or maybe even impossible to implement for open source models that anyone can run on their own hardware. But then again, at least for what we’re talking about here, the goal wouldn’t need to be a system covering every model, just one that makes at least a couple models safe to use for this specific purpose.
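As a rough sketch of the idea above: the model operator signs each generated image with a private key, and anyone can verify the signature with the public key, but nobody can forge one without the private key. The toy below uses textbook RSA with tiny hardcoded numbers purely for illustration (it is not secure, and all names and values here are made up for the example); a real system would use a proper crypto library, full-size keys, and certificate infrastructure.

```python
# Toy signing/verification flow for AI-generated images.
# Textbook RSA with tiny demo primes -- NOT secure, illustration only.
import hashlib

# Tiny RSA keypair (insecure demo values).
p, q = 61, 53
n = p * q   # modulus: 3233
e = 17      # public exponent (published with the "certificate")
d = 2753    # private exponent (kept secret by the model operator)

def sign(image_bytes: bytes) -> int:
    """Operator side: hash the image and sign the hash with the private key."""
    h = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % n
    return pow(h, d, n)

def verify(image_bytes: bytes, signature: int) -> bool:
    """Anyone can check the signature using only the public key (e, n)."""
    h = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % n
    return pow(signature, e, n) == h

image = b"\x89PNG...generated image data"  # stand-in for real image bytes
sig = sign(image)
print(verify(image, sig))      # genuine signature -> True
print(verify(image, sig + 1))  # forged/altered signature -> False
```

The point the comment makes falls out directly: `verify` needs only the public values, so checking is open to everyone, while producing a valid `sig` requires `d`, which is exactly the part that can’t be kept secret for a locally run open source model.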
I guess the more practical question is whether this would be helpful for any other use case. Because if not, I highly doubt it’s gonna be implemented. Nobody is gonna want the PR nightmare of building a feature with no other purpose than to help pedophiles generate stuff to get off to “safely”, no matter how well intentioned
Yeah but the point is you can’t easily add it to any picture you want (if it’s implemented well), thus providing a way to prove that the pictures were created using AI and no harm has been done to children in their creation. It would be a valid solution to the “easy to hide actual CSAM between AI generated pictures” problem.
Except if you continue reading beyond your quote, it goes on to explain why that actually doesn’t help.