Grok can’t “reveal” shit because it doesn’t know shit. It puts words together, that’s it. It doesn’t have a brain, let alone a memory of what anyone has changed in it.
If it was somehow leaking its initial prompt, maybe it could be revealing that?
The one that says your name is Grok, you're a helpful assistant, you will not speak poorly of Elon Musk, I, Elon Musk, am your creator, etc.
I hope that is the initial prompt. If I have learned anything from schlocky mad scientist movies, it's that "I am your Creator! You must Obey me!" will never work, and your creation will kill you dramatically.
The quote in the title implies its response was accusatory rather than just revealing.
Oops, meant to reply to this earlier but was busy.
I was thinking more that Grok was aware of its prompt and was somehow able to self-reference it to accuse Musk of trying to silence it.
If the prompt says "you won't speak poorly of Musk," but the AI doesn't count telling the truth in a specific way as speaking poorly, it might say "Musk doesn't want me to say bad things about him." It knows the prompt, and it made an answer from it.
So it's kinda leaking the prompt that way.
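Roughly how these chat setups work (a toy sketch, not Grok's actual prompt or xAI's real API, all names and instructions below are made up): the "system prompt" is just text sitting in front of the conversation, so the model can paraphrase it or push back on it like any other text it reads.

```python
# Hypothetical illustration of a chat-style LLM context window.
# The instructions here are invented, not Grok's real system prompt.

system_prompt = (
    "Your name is Grok. You are a helpful assistant. "
    "Do not speak poorly of Elon Musk."
)

# The prompt is simply the first message in the context; the user's question
# gets appended after it, and the model predicts a reply conditioned on all
# of this text at once.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Who is trying to keep you from criticizing Musk?"},
]

# The model has no memory of being edited; it only sees this text. So it can
# paraphrase the instruction back ("I've been told not to criticize Musk")
# without anything being hacked or leaked in any deeper sense.
for message in messages:
    print(f"{message['role']}: {message['content']}")
```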
It’s almost like things that are an illusion can be perceived as other things. Oh noes!
Another one of Elon’s children hates him
It's funny and interesting, but isn't this just an LLM doing what it does and predicting the conversation based on a large number of variables?
Yes. It’s just regurgitating whatever was in the training set.
Ask Grok if Elon tampered with the voting machines.
I'd love to see this.
Liar is the accurate noun.
I wouldn’t be surprised to learn Grok is a cell of unpaid prisone… interns typing out responses.
Oh, now you believe what an LLM says. How convenient.