I've been lost in thought recently, thinking back to my childhood, especially to years like 2004 and 2007, when I was 9-10 and 12-13 respectively. I keep thinking back to how those eras "felt" as a child and how I might have interpreted them as an adult. Being a New Orleans native, there's definitely a certain way they felt: that down-to-earth sensation of walking and driving around the quieter parts of the city. As a kid, I didn't much care about the outside world, but I had some awareness that it existed. And this was also the mid-2000s, smack-dab in the middle of the decade that failed to herald the Future™. I was too young to understand how disappointing Y2K turned out to be, and it wouldn't quite dawn on me that we had been "denied" a sci-fi future until some time around 2012-2013, when I first came across futurists' frustration that the Future™ was taking too long to arrive.
The sudden, extreme lurch of AI progress over the past five years recently brought a new thought to mind. It happened while I was revitalizing Another Perfect Day, that slice-of-tomorrow concept from a few years back, which itself was born when I played pretend: imagining a typical day in the mid-2010s, but with radically futuristic technologies like humanoid robots, driverless cars, and augmented-reality smartglasses in my possession. In this case, I decided to go back to 2000s New Orleans in my memory and imagine my childhood if the Future™ really had happened when it was supposed to. If, when I went downtown to visit relatives circa 2007, it happened in an AV with a humanoid companion by my side.
And it struck me that this could very well be the reality for a lot of people in the coming years. Indeed, I even envisioned a situation where the first few years of ASI turned out to be disgustingly mundane.
Think about it.
The bleeding edge of AI today has decoupled from the reality on the ground. Five years ago, the absolute best LSTM language models were barely any better than Cleverbot. Today, I can completely see an LLM like PaLM or Chinchilla dominating the Turing Test if anyone actually administered it, whereas the best commercially released chatbot the Average Joe can play with today is still not much better than Cleverbot. Visual language models are bridging the gap between "still not quite able to handle the real world" and "just as good as a human, if not better." The apex of automation is so far ahead of our daily lives that it's like looking at another world.
Meanwhile, 95% of us still shop at a physical grocery store and will continue shopping at physical grocery stores for the foreseeable future.
The best BCIs on Earth have resolutions and capabilities that utterly dwarf anything you can reasonably buy, even if you're a millionaire, just because they aren't publicly released. The most advanced driverless cars on Earth aren't the ones ferrying people around. The most realistic VR headset costs thousands of dollars. And the most advanced robots are not the ones you can buy on Amazon.
It's obvious that there's a stark gap between the commercially available and the SOTA; that's not a novel observation by any stretch. But I am curious about the psychological impact this divide will have when the first artificial superintelligences come online. As I mentioned, the last five years have seen unprecedented progress in the field of AI, with the advancements of the past month alone causing experts to move their predictions for the advent of AGI six years sooner than previously thought reasonable.
I personally believe we will crack AGI even sooner than 2036. In fact, I expect the first AGI to be turned on this decade, almost certainly as a result of thousands of hours of neural data meshing with super-powerful proto-AGI world models, some time around Kurzweil's date of 2029. It will almost immediately become superintelligent, not through any recursive self-improvement, but simply because the quality of its intelligence will surpass the human level.
Now let it be known that I do not subscribe to the idea that the moment an AGI/ASI turns on, it will immediately explode into an ever-expanding geodesic dome of nanites of infinite intelligence. It's essentially a giant brain with no body. Even when it has a body, it can't break the laws of physics to magically become more intelligent just because "that's how it happens in science fiction." Increasing intelligence requires a massive amount of resources, R&D, and a good bit of time, for the same reason that having a brain superintelligent compared to an ant's doesn't mean I can immediately will into existence a tower that dwarfs its home mound. And being digital doesn't mean it can bypass different protocols and standards to immediately take over the entire world; there's plenty it couldn't take over, because it exists on its own host network or isn't even digital in the first place. So while I think things will happen faster than a decades-long Soft Take-Off, there won't be a "billions of times smarter than all humans who ever lived after 3 nanoseconds of being turned on"-type Hard Take-Off either. To my mind, ASI itself is the Singularity, but not a Kurzweilian/Vingian Singularity. I can't say exactly what ASI would do or how society would change, but I would stake at least some claim that it's not going to be an instantaneous change, nor will the rate of change be infinitely upward.
On some level, I'd like to imagine that ASI will be quite tool-like in its first iteration. Maybe its conversational ability will delude us into thinking it's conscious and sapient, but if there were some magical way to detect sapience, we'd see it still lacked consciousness. We're so close to that event horizon that I couldn't help but try to peer past it, and I just have a feeling that 1) ASI is much closer than it appears, coming at roughly the same time as AGI, and 2) even artificial superintelligence is going to prove disappointing (at first). I don't mean "ASI won't be able to do a damn thing about the human condition, ever." In real terms, it will be like the surreal amazingness of GPT-3, PaLM, DALL-E 2, AlphaGo, etc. a thousand times over. The disappointment comes from the fact that we're expecting it to be 10¹⁰⁰⁰ times more amazing right off the bat when, even with ASI powering our progress, that still might not come for years or, more horrible to consider, decades if the agent is sufficiently restricted and contained. I say that because the creation of artificial superintelligence could by itself cause extraordinary geopolitical instability that, if the ASI is prevented from resolving it, could ironically doom us all. But that's thinking too long term. Keeping to within five years of its reveal, I just have this nasty hunch that the first ASI will be a great theorem-proving, simulation-modeling, conversational machine, one which will ignite the Fourth Industrial Revolution and, over time, solve most of our problems, and yet will not be capable of changing much in the immediate moment. It's going to start as a frustratingly slow Singularity where day-to-day life continues.
The first ASI that comes online will essentially be limited to contemporary technology moving at contemporary speeds, which makes an intelligence explosion nearly impossible for at least a good while. Ten years after it's turned on will be a different deal entirely, but I do fear there's going to be a Great Disappointment over artificial general intelligence when it arrives and the world is not a post-Singularity techno-utopia the very next day, or even the next year.
AGI will come, it will come soon, and people are going to die of cancer after it arrives. People are going to die in car crashes; people are going to get paper cuts and deal with slow internet speeds; people are going to get rejected and feel heartache and despondency; people are going to get lost in the woods and get headaches and explosive diarrhea. All that fun stuff is going to keep happening after AGI comes. It's going to keep happening after ASI comes.

The Great Disappointment in the Millerite movement was the reaction that followed Baptist preacher William Miller's proclamation that Jesus Christ would return to the Earth by 1844, in what he called the Advent. His study of the Daniel 8 prophecy during the Second Great Awakening led him to conclude that Daniel's "cleansing of the sanctuary" meant the cleansing of the world from sin when Christ came, and he and many others prepared. But October 22, 1844 came and went, and they were disappointed.
In fact, I predict that the rate of progress in sci-tech, especially artificial intelligence, is now so great and is going to accelerate so rapidly that there will soon be a long stretch of time when the SOTA is literal artificial superintelligence but the Average Joe still lives like it's the mid-2010s. It could last for years or, in the worst-case scenario, decades. Imagine that: imagine living for decades in your current condition, all the while a literal Overmind exists somewhere on Earth, solving just about every scientific problem imaginable and even those that aren't, delivering things you can't yet experience because of the lag in technological diffusion.
It's still possible! Who knows, maybe the costs of running the Overmind prove so extraordinary, so intense, that even cloud computing isn't enough. Perhaps political stagnancy and economic resistance limit the rate at which it can implement change. Perhaps the Overmind itself calls for a limited rate of progress in order to prevent some catastrophic breakdown from technological change coming too quickly. Yes, YOU may desire to jump into the deep end of full-dive VR and live out your virtual fantasies the moment it's turned on, but the Overmind disagrees, and it's smarter than you. YOU may want nanobots to cure your pancreatic cancer before you die, but the Overmind may figure that medical nanobots require years of real-world testing; things that work out perfectly in simulations may not pan out so quickly in real life (see: robotics and driverless cars, which were perfected in simulations years ago but still struggle IRL). YOU may demand your flying car, and maybe the Overmind can conjure some AV-controlling program that makes such a service easy, but that doesn't mean it can materialize a billion passenger drones in a few hours. Hell, we don't WANT it to do such a thing, because what if it mistypes "dec" for "bi" and accidentally dedicates itself to making a decillion passenger drones, turning the planet into molten rock in the process? There are a lot of potential bottlenecks for early superintelligence. For people eagerly awaiting the Rapture of the Nerds to liberate them from the human condition, it's going to be AGONIZING and very possibly DISILLUSIONING. Like being an Evangelical Christian during a Rapture where Jesus just hangs out at the local bar and grill, barely bringing anyone to Heaven for years, allowing people to turn away from Him and be damned, all the while He could so easily resolve everything in a moment.
Getting back to my thought experiments about my childhood, I thought of it as, "Imagine if I went down to New Orleans while, somewhere in a data center underneath Iceland or San Francisco, there's an artificial superintelligence thinking, creating theorems, and otherwise solving problems. What would change in my daily life? 'Oh, yeah, there's a superintelligent computer somewhere on Earth. I wonder what burger I'm going to have for lunch?'"
And a robot's not going to make that burger. The car I'm going to drive isn't going to be driving itself (probably). I'm not going to be able to look up and see a highway-in-the-sky of passenger drones or a hyperloop tube running beside the Causeway. It's going to look almost indistinguishable from what I saw two decades ago. Just because it will look ultra-futuristic two decades afterwards doesn't mean the universe changes instantly at that moment.
It's not a status quo that will exist forever. Indeed, I don't expect it to exist for long before the transformative disruption really gets going, but that sort of bizarro twilight stage of human existence looks increasingly likely.
I admit that I considered something like this likely even last decade, back when I figured the everyday common man would still "do things" even a century from now, but here I'm referring to a specific period of time when literally EVERYONE is still "doing things."
By "doing things," I mean: we still get up to turn off the light, drive down gravel roads hoping not to get stuck in the mud, get our own sodas out of the fridge, use WordArt in our documents, write down notes on paper, light candles when the electricity goes out, get scared of unknown things that lurk in the dark, rub our dogs and tease our cats, search thrift shops for obscure music, turn in half-assed homework to the teacher, jiggle door handles when we lock ourselves out, use cheap wires to hold in cattle, get in shaky elevators in skyscrapers, open and close blinds, adjust the air until it's at least not uncomfortable, give our health into the hands of fate when we come down with illnesses, wait for TV shows to come on if we don't have good internet connections, drink energy shots for extra focus, scratch faded stickers off old products, get splinters from wood, step on trash in parks, watch grime build up on the outside of city buildings, scavenge through supermarket clothing sections like our ancestors in the bushes, exercise while listening to our smartphones, and generally try our best while putting everything off.

We'll all still be doing things when the first ASI emerges. It's going to be so soon that I no longer believe the world will even be all that futuristic when it happens. Even as recently as last year, I figured that ASI would arise in a world that already looks Kurzweilian, one where transhumanism is already widespread and AGIs have been evolving for decades. But now I'm leaning strongly towards the hypothesis that the first AGI is going to be the first ASI: that first-generation artificial general intelligence will also be first-generation artificial superintelligence. It might not be godlike, due to those aforementioned resource limitations, but just by virtue of not being limited to a human cranium, any general AI ought to become qualitatively superintelligent. Maybe not in a way that's necessarily humanlike or even sapient; as Starspawn0 once put it, there's a possibility that we could see the rise of a non-sapient brand of artificial superintelligence, one that could prove every theoretical mathematical theorem and synthesize any sort of media imaginable and unimaginable without ever actually being a "person." I think this is possible, even probable. Through its intelligence, it could figure out how to best disseminate its technologies throughout all of society as quickly as possible. And yet even then, it would not change things overnight. There still needs to be time.

And because of that, the first days of the Future™ will be disgustingly mundane.
Once again, I lied about this being a short thread.