Thought experiment: the first AGI (perhaps Zombie AGI) comes online in 2024. How will human society react?


Post by Yuli Ban »

This is an idea I've been tossing around in my head since 2014-2015: that the first AGI would be created sometime around 2023 or 2024.
Let's say it's actually accomplished... OpenAI unveils some super-network, a transformer driven by trillions of parameters, and it turns out to be so robust and generalized that no one can find its limits. If it isn't an AGI, it's a perfect emulation of one. And I can genuinely see people arguing that it isn't an AGI, because they will find a few major hiccups: it isn't sapient or even really sentient, and it never acts on its own.
 
So it falls short of the Kurzweilian dreams of superintelligence or even human-level general intelligence (outside a few somewhat narrow features, like text synthesis). What really matters is that it actually exists. It's here long before anyone expected.
 
What, then, do you feel the world's reaction would be like?
 
I only mean the immediate term: a year at most after the Reveal. Going any further than that would simply devolve into science fiction and typical tech predictions, and I'm more interested in the cultural shock (or, perhaps, the lack thereof).
 
I'd been meaning to make a thread like this for a long time, but GPT-3 really spurred me to act.
 
 
The Setup
OpenAI has unveiled a new transformer-based neural network with somewhere around 50 trillion parameters; it's trained on everything from text to images to video to video games, with neurofeedback data added for even greater power (e.g. readings from people reading text, watching videos, playing games, or doing things that are more nebulous, like eating food or smelling flowers).
The resultant network is so inconceivably powerful and robust that, right out of the gate, it can be applied to any subject or topic and give a coherent output. That includes predicting how to walk or wipe a counter if it were put into a robotic body, easily passing the Turing Test as a chatbot (even through a 30-minute conversation that actively tries to force it to recall earlier statements), playing NES games by predicting where pixels would appear on screen given certain controller motions, and emulating a GAN to generate images. It's certainly not human-level at everything it does; one might even argue it's not human-level at most things, but we have a habit of focusing on what it can do as well as or better than humans. There's plenty more room for it to get stronger.
And we do this in 2024. There were earlier networks that were fairly generalized too; even GPT-2 had its moments. But this is it. It's the first AGI we ever create. Or is it?
 
In the scientific community
- AI field:

The absolute most shocked demographic will be computer & data scientists, machine learning experts, and pretty much anyone involved in programming. This is consistently the most skeptical group when it comes to predictions of AGI, for good reason: they know exactly how computers function and what it takes to make an AI work. They are fully aware that most AI, like Watson, Siri, and DeepMind's networks, are hyped up by news and pop culture; they're essentially digital magic tricks with good marketing, not proto-sentient systems that will magically start the Singularity once a few good lines of code are added.
That said, there are still skeptics. Sure, there's no easily definable limit to the AI. But does that really mean it's AGI? After all, it does have one big flaw: it still doesn't continuously train like a human does. And it's so big that there's no way for anyone other than big companies and universities to run it— it requires hundreds of terabytes of memory to operate, so it's not going to get all that much smarter at this rate. No one's denying the accomplishment, but plenty are saying that it's not quite the Singularity just yet.
What's more concerning than any visions of it hacking the US's nuclear launch codes is that its ability to generate incredibly humanlike code may make programmers obsolete, or at least vastly reduce the size of teams. For students, it's disheartening to imagine becoming obsolete years before they graduate. Of course, at the same time, it could also be used for perfect error correction: finish a project and run it through this AI to find any flaws, backdoors, or zero-days, or perhaps even to make the code more robust in general.
 
- Psychologists: 
 
Psychologists might also be extremely shocked. They've long held that the secret to AGI lies in fully understanding the human brain, and that without some level of full simulation there won't be general AI for decades, if not centuries, because of how little we know about the brain, even with new, advanced BCIs. That said, they'll also be skeptical of whether there's really anything going on beneath the hood besides some admittedly impressive computational prowess. After all, this thing doesn't seem to actually think; it's still only active when prompted. What plenty of them are more interested in is using it to supercharge their own areas of study.
 
- Scientific papers:
 
Plenty of experts will likely begin requesting to use the AI to assist with running scientific experiments, as well as to parse the tens of thousands of published papers for overlooked discoveries or hidden patterns that could only be deduced by crunching them all.
 
- Futurists: 
 
Kurzweil, Goertzel, Vinge, Bostrom, Musk, et al. would absolutely be jumping to speak about the sudden development. For most, it's one big "I told you so" moment. However, the points of skepticism (that it can't continuously learn, doesn't act autonomously, and may not even truly "understand" concepts) will become controversial. Kurzweil, for one, will likely concede that it's not "true" AI because of these limitations, but call it the biggest and most ambitious step toward it, probably even mustering up an entire short book about it. Elon Musk will definitely be talking about it a lot, saying that humankind is playing with powers it scarcely understands and has to mature quickly to use them wisely or else we'll all be sorry, all while making memes about it. And every single such comment will likely spawn an online media article that gets thousands of upvotes on Reddit. Speaking of which...
 
In the online community
- Reddit:
 
/r/Futurology, /r/Singularity, /r/Artificial, and /r/MachineLearning will probably break. In fact, the story will probably get tens of thousands of upvotes (if not a hundred thousand) on /r/News and /r/WorldNews alike; the AlphaGo vs. Lee Sedol matches did similar numbers, and an AGI would get a much bigger response. There's a lot of skepticism about the feasibility of AGI in /r/MachineLearning, for example; that'll evaporate immediately, but likely be replaced by skepticism that the AI is as powerful as claimed. Meanwhile, /r/Singularity will become unbearable for a while as it gets popular and cranks start coming out of the woodwork to proclaim the start of some new AI-focused religion; otherwise, there'll be questions about it every day. There might also be plenty of fear as some claim we're moving too fast, too soon. Every other week, there'll be some clickbait post on /r/Futurology and /r/Singularity "proving" that this AI has achieved human-level intelligence or superintelligence, and people might start to get burned out on it once it becomes clear that it hasn't immediately changed the world or "woken up." Conversely, there are also going to be many articles with titles like "No, this new AI is not a general AI" from the beginning, casting doubt and skepticism even further.
 
- LessWrong: 
 
Likely quite a bit in the way of technical discussion, with users trying to deduce the AI's individual abilities while also predicting how future iterations would improve. Beyond that, a lot of hype about the prospect of a post-AGI world. I can also see attempts to claim that OpenAI may be downplaying its capabilities, pointing out that the ability to generalize and hold long-form conversations means it must have some level of sapience, even if fleeting.
 
- YouTube:
 
Sci-tech YouTubers will surely make plenty of videos detailing the innovation and what it means. It'll also become a common reference in unrelated videos. Kurzgesagt, Seeker, ColdFusion, SciShow, Two Minute Papers, Futurology, and so on will likely produce longform videos explaining what just happened and why it's so incredible, while conspiratorial channels will say that "the Globalists have started Phase 2" or something along those lines. I absolutely expect a video series where people chat with it, sort of like those old Watson videos, except not scripted at all.
 
- FutureTimeline:
 
Here, there'd likely be a massive spate of new posts and topics relating directly to it. I can bet that at least one new, possibly young and/or foreign user would create a thread titled "Has the singularity started yet?" with the only words in it being "Well has it", and the main responses being "No" and "In all honesty, the Singularity probably started a long time ago and we're just living in a simulation." Beyond that, we'll see highly technical pieces about its potential capabilities and its near-term and long-term implications from a certain cthulhi.
 
- News comments:
 
Stay away from any article linked by the Drudge Report. Endless "So they created a Democrat" and "Emissary of the Devil" type comments abound. 
For other outlets, most comments probably won't be of much better substance, though they'll be more along the lines of "Wow, I can't believe we actually did it!", with the occasional more in-depth thought mixed in.
 
In the geopolitical community
- US government (presuming a Trump win):
 
I genuinely don't think anyone in the Trump administration takes AI seriously. As far as most of them know, AI is what you get when you type in rules for every little situation you want your Cray computer to handle. However, they will have a general idea that it's good enough to attain geopolitical dominance, and they'll want to work with OpenAI to use it for military purposes and business interests.
 
- US government (presuming a Biden win):
 
See above, but with less isolationism and more neoliberalism.
 
- China:
 
They'll be spooked by the sudden AI dominance of the Americans and will be convinced that the gap between themselves and us has grown too large to tolerate. They'll likely throw whatever they have into the field of AI in an attempt to create a more powerful network— whatever exaflops machine they're working on at the moment will be dedicated to AI alone. They'll also likely announce the immediate start of the "transition to the Secondary Phase of Socialism." 
 
- Germany:
 
I can't imagine that the de facto head of the EU would feel too thrilled about the notoriously unpredictable and erratic USA suddenly coming into possession of a second superweapon so soon after gaining the Bomb. Still, they'll be excited about the prospects going forward, though their technocratic leaders will be at a loss to figure out how society will function with the rise of AI. They were likely expecting a much more gradual development of AI, with AGI as something for the more distant future. Having it arrive so soon forces them to consider cutting back austerity measures, lest their tributary nations collapse.
 
- Russia:
 
Similarly suspicious to Germany, but in a much more hostile way, since any AGI represents total domination of global matters, even if it's not as smart as a human in all ways. As Russia is still opposed to the USA, there is little doubt that the US's sudden leap into possessing AGI would be considered a severe national security risk requiring an immediate response. An AGI would likely be a fantastic tool for overcoming flaws in weapons systems, for example: the USA might be only a few years away from figuring out how to create "perfect" anti-missile lasers that can shoot down even the largest barrage of ICBMs, effectively rendering itself safe from nuclear war while leaving Russia completely vulnerable. Such a total imbalance cannot be allowed to stand. And the economic effects are even more obvious: with the Russian economy largely stagnant and held back by sanctions from the West, increased automation and AGI-backed services would almost immediately bring the entire nation up to par with the most advanced nations in terms of quality of services and productivity; they'd even be able to figure out how to utilize the vastness of their Eastern territory without having to wait for climate change to thaw it out, which would certainly bring them back to "superpower" status. There's everything to gain from shifting their focus to AI, so they'll think of this as a reverse Sputnik Moment.
 
- Elsewhere:
 
For plenty of countries, what really matters is what the AGI is used for. It would be an amazing tool for things like medical services, optimizing agriculture, running autonomous workers, and maximizing economic efficiency, especially in its current, tool-like form: it's not an artificial person, but rather a super-tool. For others, it might be seen as a terrifying new addition to the USA's already mighty arsenal; feed military tactics, battles, and history into it, and you have the perfect general.
 
In the business community
- Major corporations:
From what I can ascertain, the executives of corporations are simultaneously wide-eyed and highly concerned about how others perceive them; they don't like being humiliated. So on one hand, they'll be eager to try out whatever this AGI may have to offer and to market themselves as using it; on the other, they'll want to know if it's really as good as promised, and its generalized nature will likely need to be explained to them (as otherwise they might erroneously think they have to keep "coloring inside the lines," not realizing there are no lines in the first place). I can see many firings almost immediately as execs decide plenty of underlings have become redundant, but it will soon become clear that some of those firings came several years too early, since the AGI is not yet ready for prime time.
 
- Small businesses:
 
It's a curiosity, but mostly out of their hands.
 
- Work-from-home businesses:
 
See above. Unless, of course, it could be run on the Cloud in such a way that a large multitude of people could access it remotely. But assuming that's not possible yet, people can only dream about using it to design logos, run logistics, and organize shipments and schedules. As it stands, it likely cost OpenAI well over a billion dollars just to train the thing, so running it on a personal computer is out of the question; the Cloud is the only route.
 
In the public sphere
- Common whispers:
 
People will obviously talk about it, but the average person will fear it. In private, plenty of people will just think it's going to destroy us all. Do expect plenty of memes, however. And I mean many memes.
 
- Conspiracy theories:
 
One conspiracy theory I bet I'll see is that this AI has been operational for years or even decades and has been used to direct and redirect world events; AGI may become the new "Aliens Did It" whenever something bad happens. A war breaks out? The AGI manipulated world leaders. A major natural disaster occurs? The AGI used HAARP to control the weather and bring suffering unto humans so we'd welcome it as a god. Most, however, will be that this is going to be used to create a New World Order and force socialism on people by taking their jobs.
 
- Entertainment:
 
Almost immediately, there will be a big spate of AI-focused stories in the mainstream as anxieties start to build. More games will use AI as a theme; more contemporary-set non-SF stories will feature AI anxiety; a new wave of AI-centered Hollywood movies will likely be announced. And the experts who constantly groan about Hollywood depictions of AI and robots (future AI and robots at that) will be a bit quieter this time around, though they'll probably point out that "real AGI isn't sapient like Hollywood AGI," as if there's some sort of Sapience Barrier that's impossible to cross.
 
- Artists:
 
Many artists will lament. The synthetic media abilities of modern AI are already well known, but the absence of AGI was always a bit of a crutch, since it meant no computer could create media quite as good as the best humans. For the moment, as this thing is largely beyond the reach of the average person, the creatives can only watch a distant tsunami approach and wonder what it means long-term. Some, however, will begin to shift their mindsets and imagine how they could best utilize AGI to create exactly what they want.
 
- Workers:
 
Little interest from the working class in the initial days. Some of the most tech-savvy will definitely wonder what this means for them long-term, but for most, it doesn't even register. They continue their daily lives blissfully unaware, or with the memetic knowledge that there's a "living computer" somewhere on Earth. To them, it might as well be on Mars. They still largely live as if it's the 2000s, just with smartphones and maybe some VR. With the effects of the coronavirus-fueled depression still in recent memory, plenty just want to get back to normal and keep it that way, so even if the AGI were more publicized, it's not going to cause mass societal panic any time soon.



Starspawn0 had a response that was interesting at the time:

That's funny, I was going to write a comment in status updates... asking, "What if OpenAI comes out with an unbelievably good AI in late 2020 or 2021, that approaches AGI?"  So, 3 or 4 years sooner than your timeline!
 
Now let me explain why:  first of all, it wouldn't be all just Transformer-based, because that would only be a feed-forward network.  That means the network doesn't have any loops -- the input comes in and then goes straight through to the output, with a fixed number of steps of processing.
 
What they might do is use GPT-3 as a component in a larger network that does have recurrence.  Why let all that work go to waste?
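To make that concrete: here is a minimal, purely hypothetical sketch in PyTorch of what "wrapping a frozen feed-forward model in an outer recurrent loop" could look like. Every name in it is invented for illustration; it's not anything OpenAI has described.

```python
import torch
import torch.nn as nn

class RecurrentWrapper(nn.Module):
    """Hypothetical sketch: a frozen feed-forward transformer (a stand-in
    for GPT-3) supplies features on each pass, and a small GRU cell carries
    a hidden state across passes, adding the loop a plain transformer lacks."""

    def __init__(self, frozen_lm, hidden_dim):
        super().__init__()
        self.lm = frozen_lm
        for p in self.lm.parameters():
            p.requires_grad = False          # keep the pretrained work; don't retrain it
        self.cell = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, token_embeddings, steps=4):
        batch = token_embeddings.size(0)
        state = torch.zeros(batch, self.cell.hidden_size)
        for _ in range(steps):                        # fixed-depth recurrence
            features = self.lm(token_embeddings)      # one feed-forward pass
            pooled = features.mean(dim=1)             # crude summary: [batch, hidden_dim]
            state = self.cell(pooled, state)          # fold into the persistent state
        return state

# Toy usage with a small stand-in transformer:
lm = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
model = RecurrentWrapper(lm, hidden_dim=64)
out = model(torch.randn(2, 16, 64))   # [2, 64] persistent "planning" state
```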
 
I would guess they could also fine-tune GPT-3's representations of knowledge, e.g. maybe using image and video captions.  So, it would learn features about the physical, visual, and auditory world.  Maybe separate networks could be trained to learn deep physical commonsense knowledge.
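As a toy illustration, the caption idea could be as simple as continuing ordinary language-model training on caption text. The sketch below uses the small public GPT-2 as a stand-in; the two captions are invented examples, and nothing here is an actual OpenAI training recipe.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

captions = [  # stand-in for a large image/video caption corpus
    "A man wipes a kitchen counter with a damp cloth.",
    "A dog catches a frisbee in mid-air at the park.",
]

model.train()
for text in captions:
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    loss = model(input_ids, labels=input_ids).loss  # standard LM loss on caption text
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```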
 
Maybe they could also somehow add features from the robots that learned to solve a Rubik's Cube.
 
50 Trillion parameters?  That's too much, so soon, even in 2024.  It would cost them too much money to train the model.  Let's say 300 billion to 1 trillion parameters, instead.
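For a rough sense of scale, here's a back-of-envelope sketch using the common approximation of ~6 FLOPs per parameter per training token. The token counts are assumptions for illustration, not anyone's published figures.

```python
# Rule-of-thumb training compute for a dense transformer:
# roughly 6 * N * D FLOPs for N parameters trained on D tokens
# (a standard scaling-law approximation; all numbers illustrative).

def train_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

gpt3 = train_flops(175e9, 300e9)      # GPT-3-scale run
one_t = train_flops(1e12, 1e12)       # speculated 1T-parameter model
fifty_t = train_flops(50e12, 10e12)   # hypothetical 50T-parameter model

print(f"GPT-3-scale: {gpt3:.1e} FLOPs")
print(f"1T model:    {one_t:.1e} FLOPs (~{one_t / gpt3:.0f}x GPT-3)")
print(f"50T model:   {fifty_t:.1e} FLOPs (~{fifty_t / gpt3:.0f}x GPT-3)")
```

Even under generous assumptions, the 50T model lands thousands of times beyond a GPT-3-scale run, which is why the smaller range seems more plausible so soon.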
 
Ok, so you've got:
 
* GPT-3, fine-tuned using video, audio, image, speech recognition and synthesis, and caption data (maybe even other sources of data, like from robots).  
 
* 300 billion to 1 trillion parameters.
 
* Recurrence is added... somehow.  e.g. maybe there's some kind of planning module that uses GPT-3 features.
 
.....
 
Anyways, that's kind of like what that Tech Review article hinted at... that they've got a secret project to build an AGI using all their different works.
 
And... what if they finished it this year... or next?  Would that be out of the question? 
 
I don't think it would.  I mean, if they could already train a model with 175 billion parameters, and if they could fine-tune it using input from other modalities, I wouldn't think it would be out of the question to train such large models so soon.  And adding the recurrence... well, there are lots of ways they could do that.  They might want to make a fresh start and throw away GPT-3; but I would guess they could get to their goal quicker if they leveraged it.
 
So... wouldn't it be neat if, in 2021, they unveil this new model, and show that it does everything that GPT-3 can do, out of the box; but that it doesn't make nearly as many mistakes -- and can do long-range planning and reasoning?
 
In fact... they might even decide to add a text interface for a chatbot.  Why not?  They could invite some skeptics to see if they can break it.  
 
 
Maybe the skeptics will win, but it will be so good, and the reasoning so sophisticated and deep, that they will write things like this on Twitter:
 
 
 
If you had asked me whether we would have AGI by 2021 even just last year, I would have laughed it off as the typical uninformed question one sees on social media.
 
But I have to say that, after putting this system through its paces today, we are very close to something approaching AGI.  I wouldn't say we're there yet, and I also wouldn't say it's completely "human-like" -- but we are shockingly close.
 
I guess scale is all you need, after all.  Sutton was right about the "bitter lesson".

The most shocking thing might be that most people won't even be shocked at all. Yes, it will get some mentions in the press and on TV; but for most people, life will go on as though nothing had happened. You need a big *BOOM*, like with atomic bombs, to get people's attention.

Note, also, that these models would be so very large that most people won't get to use them; and they won't immediately impact jobs. But if they can be shrunk down by 100x, then jobs might start to be replaceable at a low cost.
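One standard route to that kind of shrinking is knowledge distillation (Hinton et al., 2015): train a small student network to match a large teacher's softened output distribution. A minimal sketch with toy stand-in models, not any real production setup:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: a big frozen teacher and a much smaller student.
teacher = nn.Sequential(nn.Linear(512, 4096), nn.ReLU(), nn.Linear(4096, 50257))
student = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 50257))
teacher.eval()

optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
T = 2.0  # softmax temperature: softer targets carry more information

x = torch.randn(32, 512)  # stand-in for a batch of input features
with torch.no_grad():
    teacher_logits = teacher(x)

student_logits = student(x)
# KL divergence between temperature-softened distributions,
# scaled by T^2 as in the original distillation recipe.
loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * (T * T)
loss.backward()
optimizer.step()
```

Whether a 100x compression survives with enough capability intact is exactly the open question; the technique only says how you'd try.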

I used to think if you had the tech, then jobs would get replaced. I'm not so sure that will happen. The economy doesn't always work the way we expect.
And remember my friend, future events such as these will affect you in the future