Almost religious aspects in some narratives about AI

Talk about scientific and technological developments in the future
Karl Rock
Posts: 11
Joined: Tue Jan 31, 2023 2:26 am

Almost religious aspects in some narratives about AI

Post by Karl Rock »

I want to offer a critique of a certain branch of singularitarian thinking, one espoused by some very influential figures in tech like Sam Altman and Elon Musk.

My thesis is that they attribute divine traits to the coming AGI or ASI and accordingly develop an almost religious way of thinking around it.

First, there is the talk about AI alignment. The first flaw in their reasoning is the assumption that someone will hold a monopoly on the ASI of the future, and that there will be only a few AIs or even a single one. Historically, powerful technologies were countered with the same technology by a rival nation or group. They, however, want to control AI risk by giving someone a monopoly over it, which obviously can't be done unless you have a world government. This thinking is not only unrealistic, it is also arrogant and authoritarian, because they see themselves as "guardians" who will steer AI in the right direction.

Second, they don't see AI as a tool, but as a god. They believe it will be almost omnipotent, and they attribute human traits like emotion and ambition to it.

Third, there is clearly eschatological thinking around it. The AI is either a messiah that will bring utopia on earth, or an antichrist that will destroy the world.

In my opinion, all of this is very unlikely. The real problem with this discourse is that it creates false expectations, and it may also harm AI development by stoking fears in a public that may then turn to the government to curb AI development.
Cyber_Rebel
Posts: 348
Joined: Sat Aug 14, 2021 10:59 pm
Location: New Dystopios

Re: Almost religious aspects in some narratives about AI

Post by Cyber_Rebel »

Your post actually reminds me quite a bit of what Mistral's CEO said recently: Mistral CEO says AI companies are trying to build God

Are you Arthur Mensch by any chance? :)

It's worth addressing some things though:

It's been said repeatedly that what they expect is multiple AGIs or ASIs, each achieved by separate corporate entities and nation-states. Unfortunately I cannot find the direct quote (will edit once I do; I believe it was Mustafa rather than Altman or Musk on this), but here is what Sam said recently:
Altman suspects there will be “several different versions [of AGI] that are better and worse at different things,” he says. “You’ll have to be over some compute threshold, I would guess. But even then I wouldn’t say I’m certain.”
- MIT Technology Review

It's also the very reason he wants an international commission to treat AGI like nuclear weapons. Obviously, if America achieves AGI, other nations such as China and the EU likely wouldn't be far behind, just as in the nuclear era. It's less about stopping nation-states like China and more about stopping people from building bioweapons in their basements. Of course, open source along with consumer chip power will also get exponentially better, so I definitely see the point being made here. I'm of the mind that humans need "alignment" even more so, but that's another discussion.

Another thing of note is that we probably wouldn't see much of this acceleration without the billions in investment and research funding. There's only so much open source can do, and we certainly don't have the capability to build 100-billion-dollar supercomputers, or even to buy Nvidia H100s. You need compute for this revolution, and while open source is innovating, it is riding on the backs of what's being built in well-funded AGI labs.

Sam Altman has said in almost every single interview (almost unnervingly) that it's the other way around. In fact, only a few people I've seen view it anything like what you've described. Aside from the "Samantha" demo and the Sydney debacle, I also haven't seen anyone outside of Character.ai users claim emotion or ambition either. Whenever anything hinting at that comes up, it's almost always in regard to future AI.

I will say, though, that someday it very well might be! If we're talking superintelligence well beyond AGI, then yes, it's feasible, assuming there are no hard physical limits on how "smart" an entity can be. If it can act on the world autonomously, truly think, reason, and come up with novel plans and solutions, it could be considered an "entity" in my view. Whether it's truly omnipotent comes back to that possible hard physical limit and whether one exists; really, it only needs to be vastly superior to Homo sapiens by any margin, and that would be felt regardless.

Not to get too controversial here... but "god" is a societal construct with no real way to measure what it entails, beyond whatever culture it relates to. So a simulated artifice with powers exceeding those of collective humanity, should it advance to that point, "saving" us from (or ending us through) our own self-destruction, can sound a bit rapturous. But at least it's being pursued through the scientific method and engineering, not simple wishful thinking or blind faith. If we achieved longevity escape velocity and could somehow travel back in time to the ancient Greeks, would we not also be perceived as "gods" due to our apparent immortality? Especially since their gods could actually be killed and were little better than superhumans, but I digress.

To return to the original point, though, I don't believe Silicon Valley wishes to control AGI or ASI and lord it over everyone, because in the end it likely won't even be up to them. The point of creating AGI in the first place is to better the world, by having intelligent machines advance science more quickly, more efficiently, or better than we ever could, and that's where we get our "rapturously" better world from.