One of the use cases I thought it was reasonable to expect from ChatGPT and friends (LLMs) was summarising. It turns out I was wrong. What ChatGPT does isn’t summarising at all; it only looks like it.
This feels a bit similar to the USSR of the 60s, whose propaganda promised communism and space travel tomorrow, humans on new planets, and so on.
Not comparable at all: the social and economic systems of the developed nations are more functional than the USSR’s were at any stage, and cryptocurrencies and LLMs are just two kinds of temporary frustration that will be overshadowed by some real breakthrough we don’t yet know about.
But with LLMs, unlike blockchain-based toys, it’s funny how all the conformist, normie, big, establishment-adjacent organizations and social strata are so enthusiastic about adopting them.
I don’t know any managers at that level, so I can’t ask what exactly they are optimistic about or what they see in the technology.
I suspect it means something that the algorithms involved are not that complex, and that the important part is the datasets.
Maybe they really, honestly want to believe that they’ll be able to replace intelligent humans with AIs whose ownership is determined by power. So it’s people with power thinking that this way they can get even more power and foreclose the alternative path of decentralization, democratization, and the like. If that’s what they think, they are wrong.
But so many cunning people can’t all be so stupid, so there must be something we don’t see, or don’t realize we see.