The article is meh, but what is said in there is pretty much true.
Unregulated technology is something we already have: AI generation, for example.
What is happening right now with AI generation, be it text or image, is that these tools are among the most privacy-invasive things there could be.
Take ChatGPT, Bing Chat, or Google Bard: they take every bit of text you enter, analyse it, and may use it to train future versions.
Users don't know this, because their huge privacy policies are too long to read.
And so people are putting private information, trade secrets, and other things that should never go there into those prompts.
Image generation is currently a huge copyright issue.
Worldcoin deployed orbs, installed like art in multiple cities, each with a camera that scans the irises (and so the identity, since iris patterns seem to be unique to each person) of anyone who just gazes into the lens. https://news.artnet.com/art-world/worldcoin-orb-ai-2341500
So unregulated tech is a huge anti-people / pro-money mess.
For brain interfaces it may be even worse: companies could push ads directly into your brain, on top of everything described in that article.
A long time ago I saw a small part of a movie, no idea which one. It featured a brain interface that allowed communication and displayed images. When first enabled, it was a huge mess: adverts everywhere, audio ads, so overwhelming that the character couldn't think or work out what he wanted to do. To get back to what he had to do, he had to use a dampener to strip out all those ads and all that noise.
And that first stage, with so much noise that we can't act on our own will, is what unregulated brain interfaces will look like once enough people have adopted them.
And we may not get brain "ad blocking" tech like in the movie.
The data broker industry is a far better example of the dangers of unregulated tech than AI-generated works, which are more a rebuke of copyright and a threat to labor than a privacy invasion.
It may be even worse, as you said; however, AI is currently more present in the news, and maybe easier to understand because of that.
Also, a huge amount of ChatGPT account info was leaked to the dark net, not really because OpenAI got hacked, but because users entered their login credentials into phishing websites.
But also, since anything you input into ChatGPT/Bing Chat/Bard is scanned, there is a big antitrust/corporate-espionage angle: OpenAI/Microsoft and Google may be able to spy on any user who leaks details of a competing AI's development.