The AGI Fallacy | A potential phenomenon where we assume new tech will either only come about with AGI or is unlikely without it.

AGI AI deep learning ANI AXI artificial neural networks fallacy thought experiment media synthesis automation


#1
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,771 posts
  • Location: New Orleans, LA

Reposted from here: https://www.reddit.c...enon_where_we/?

 

We're probably familiar with the AI Effect, yes? The gist there is that we assume that a technology, accomplishment, or innovative idea [X] requires "true" artificial intelligence [Y], but once we actually accomplish [X] with [Y], [Y] is no longer [Y]. That might sound esoteric on the surface, but it's simple: once we do something new with AI, it's no longer called "AI". It's just a classifier, a tree search, a statistical gradient, a Boolean loop, an expert system, or something of that sort.

As a result, I've started translating "NAI" (narrow AI) as "Not AI" because that's what just about any and every narrow AI system is going to be.

It's possible a similar issue is building: a fallacy closely related to (but not quite the same as) the AI Effect.
To explain my hypothesis: take [X] again. It's a Super Task that requires skills far beyond any ANI system today. In order to reliably accomplish [X], we need [Y]: artificial general intelligence. But here's the rub: most experts place the ETA of AGI at around 2045 at the earliest, actual data scientists lean closer to the 2060s, and more conservative estimates place its creation in the 22nd century. [Z] is how many years away this is, and for simplicity's sake, let's presume that [Z] = 50 years.

To simplify: [X] requires [Y], but [Y] is [Z] years away. Therefore, [X] must also be [Z] years away, or at least it's close to it and accomplishing it heralds [Y].

But that assumption has failed for almost everything done with AI thus far. As it turns out, sufficiently advanced narrow AI systems were capable of doing things that past researchers were doggedly sure could only be done with general AI. Chess, for example: it was once assumed that since only the highest intellectual minds could master chess, an AI would need to be virtually alive to do the same.

Garry Kasparov was defeated by Deep Blue in 1997, and I have a distinct inkling that the Singularity did not begin back then, though I suppose I need more evidence to confirm this.

Things like Watson defeating humans on Jeopardy! or an AI crushing humans at Go sat in similar places in complexity: if an AI could do that, surely we'd be close to the Singularity. Both happened earlier this decade, and no human-level AGI has assumed total control over our nuclear launch codes since. If one has, it's doing a damn fine job of not telling us. But there may be a tiny chance that AGI doesn't exist and these tasks were perfectly accomplishable by non-AGI systems specialized for them, which carries the corollary that many aspects of human cognition we assume can only be mimicked by a full-fledged mind can in fact be reduced to a much simpler and narrower form.

Of course, there genuinely are some tasks that require AI more complicated than ANI. Autonomous cars are one. Sure, their narrow goal is "driving", but that turns out to be a very general goal when you really think about it, because you have to account for, predict, and react to so many different stimuli at once. Therefore, autonomous cars are only happening when we have AGI, right?

Well...

So, for the past few years, I've been trying to get people to listen to my explanation that our model of AI types has a gaping hole in it. We only have three types at present: ANI or NAI (narrow AI/not-AI, which can only do one thing), AGI (general AI, which can do everything), and ASI (artificial superintelligence, which can do everything and then some at a bizarro superhuman level). But ever since roughly 2015, I've been asking myself: what about AI that can do some things but not everything? That is, it might be specialized for one specific class of tasks, but it can do many or all of the subtasks within that class. Or, perhaps more simply, it's generalized across a cluster of tasks and capabilities but isn't general AI.

It seems so obvious to me that this is the next step in AI, and we even have networks that do this: transformers, for example, specialize in natural-language generation, but from text synthesis you can also get rudimentary images or generate MIDI files; even with pure text synthesis, you can generate anything from poems to scripts and everything in between. Normally, you'd need a separate ANI specialized for each one of those tasks, and it's true that most transformers right now are trained to do one task specifically. But as long as they generate character data, they can theoretically generate more than just words.
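To make that last point a little more concrete, here's a minimal sketch of what I mean, assuming the Hugging Face "transformers" library and the public GPT-2 checkpoint (neither of which is named above; any off-the-shelf text-generation model would do). The point is only that one and the same token-level generator can be nudged toward prose, a screenplay, or music written out as text (ABC notation here), because to the model it's all just character data; how good each continuation is depends entirely on what it saw during training.

# Minimal sketch: one text-generation model prompted toward three "different" tasks.
# Assumes: pip install transformers torch, plus the public "gpt2" checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompts = {
    "poem":   "A short poem about the sea:\n",
    "script": "INT. SPACESHIP - NIGHT\n\nCAPTAIN:",
    "music":  "X:1\nT:Simple Tune\nM:4/4\nK:D\n",  # header of a tune in ABC music notation
}

for task, prompt in prompts.items():
    # Sample a continuation; the model neither knows nor cares which "task" this is,
    # it just predicts the next tokens of whatever character data it's given.
    result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.9)
    print("--- " + task + " ---")
    print(result[0]["generated_text"])

None of this is general learning, of course; it's the same next-token prediction every time. That's exactly the point: a single narrowly-generalized model covers a cluster of text-shaped tasks that would otherwise each need their own ANI.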

This isn't "proto-AGI" or anything close; if anything, it's closer to ANI. But it isn't ANI; it's too generalized to be ANI.
The gist there is that this proves to me that it's possible for AI to do narrowly-generalized tasks and, thus, be far stronger than any narrow AI network that exists today even if it's still weaker than any theoretical future AGI. This is a bridge from here to there, and we've all but started crossing it in the past couple of years.

The term I've coined for that kind of AI is "AXI" or "artificial expert intelligence" (not to be confused with expert systems). It makes sense in theory: an expert is one who specializes in a particular field rather than a worker who does one singular task or a polymath who knows everything. It's certainly better than "proto-AGI" because many will latch onto the AGI part of the name to discredit these sorts of technologies, and even then it really isn't proto-AGI anyway.

This has some implications for this "AGI Fallacy", if I may coin another term. If we believe that something like synthesizing a 5-minute photorealistic video with audio requires AGI, then we can comfortably say that it is 50+ years away and not have to worry about it. But if a suitably strong AXI does it in only five years, then we may have a problem: by assuming that [X] is 50 years away, we compartmentalize it in the same place as our grandchildren going to college, the distant-future effects of climate change or astronomical events, science fiction, and our own deaths. That's fairly low on our internal list of concerns. If it's only five years away, it becomes a much more immediate concern, and we're more apt to do something about it, or at least think through how we might deal with it.
This is why so little is being done about climate change: even some of the most dire predictions still place the start of the worst effects decades in the future, which reduces our sense of responsibility to do or care about anything, despite the fact that certain effects could arrive much sooner through unforeseen events.

It can be used to justify skepticism of any sort of change, too. The AGI Fallacy explains why people tend to think automation is decades away. For one, we tend to think of automation as "humanoid robots taking jobs from men in blue overalls and hardhats, burger flippers, and truck drivers", and because humanoid robots are still rather pathetic (despite the fact that they can backflip and freerun now), we can comfortably say "jobs aren't going away any time soon."

I mean, for one: media synthesis is a thing, and the basic principle there is that disembodied neural networks can automate any data-oriented task (including the entertainment industry, the news industry, and many white-collar office tasks) as long as they have enough power, and that might start hitting as soon as the 2020s. Of course, there are also predictions that say "we need AGI to get an NLG program to write a novel" or "we need AGI to generate a 5-minute animation," and yet both tasks look like they may be accomplished within just a few AGI-free years. And autonomous robots don't need to be entirely generalized to be better than humans; they just need to be able to handle unexpected issues. If you have strong enough ANI and AXI to handle vision, spatial awareness, and prediction, you could conceivably get a general-purpose robot worker. And that might only take 10 to 15 years as opposed to 50+.

Sure, the robot with AGI is going to be better than the one with a bunch of less generalized networks, but it's not like we can only make these robots with AGI in the first place. And I think we're going to see what I mean in very short order.
I think autonomous trucks, for example, can be done with sufficiently powerful ANI. If not ANI, then certainly AXI.
The cold fact is that most of the faculties of our mind can be reduced to certain algorithms; the issue is and has always been replicating the whole mind. And I'm merely saying that, in order to get to our sci-fi future, we don't actually need to do that (though it would greatly help).

TLDR: There's a fallacy where we assume that we need AGI to do something, when a sufficiently advanced narrow AI (or narrowly-generalized AI) will do that same thing much sooner. Since we don't expect it to be done so soon, we don't prepare for it properly.
If I'm wrong, please correct me.


  • Zaphod, SkyHize and starspawn0 like this

And remember my friend, future events such as these will affect you in the future.


#2
Hyndal_Halcyon

    Member

  • Members
  • 92 posts
it seems to me that numerous AXIs will just keep piling up and then AGIs will just rise out from stacks of inhuman experts that were unwittingly mixed and matched together by curious humans.

it also seems impossible to build and then train an oracle AI that predicts what tech could be built from the tech we have now, so that we'd have some warning beforehand.

but then it "only seems impossible until it's done" is the idea of this thread so... damn. you're melting my mind.

so how do we properly prepare for it?

As you can see, I'm a huge nerd who'd rather write about how we can become a Type V civilization instead of study for my final exams (gotta fix that).

But to put an end to this topic, might I say that the one and only greatest future achievement of humankind is when it finally becomes posthumankind.


#3
Singularity Kills

    Member

  • Members
  • 27 posts

I always thought an AGI would be a tier capable of controlling tens of thousands of ANIs to achieve its goals, so at the stage we're in, we're building the tools an AGI will need to do what it has to do.



#4
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,771 posts
  • Location: New Orleans, LA

it seems to me that numerous AXIs will just keep piling up and then AGIs will just rise out from stacks of inhuman experts that were unwittingly mixed and matched together by curious humans.

I actually considered that ahead of time, and I can't say I'm sure this is how we'll get to AGI. Indeed, once I thought about it, I realized that this might even be a means to arrive at Zombie AGI. I didn't even realize it was the same concept until I asked myself, "would this bundle of AXIs understand anything or would they just be a philosophical zombie that only appears on the surface to be an AGI?"  If bundling a bunch of narrowly-generalized networks could get us to AGI, then surely bundling thousands of purely narrow networks could also get us there, but we've seen nothing like that. 

The closest thing to AGI nowadays are corporations, Mechanical Turk-like task aggregators, and internet guilds/communities (e.g. Reddit, 4chan, etc.), and I think using humans as operators ought to immediately and utterly disqualify them from being called "AGI." 

After all, why isn't Alexa considered to be an AGI? There are loads of networks at work in it, and you can download many more apps for it, which is rather like plugging new programs and networks into it to enhance its capabilities. So why isn't Alexa at least a proto-AGI? Capability is a big factor: just because it's a working bundle of multiple programs serving a common cause doesn't give it general learning capabilities. A single network might get better with some tweaks or after learning your name and habits, but that doesn't amount to an increase in intelligence in any established sense.

 

AXI, as a concept/very rudimentary phylum of data science, might operate on slightly different principles, but I see no reason why even slightly more generalized and volumetric networks would lead to anything like true AGI. I can definitely see a hypothetical future bundle of networks, of interconnected AXI and ANI of incredible strength (by modern standards), being able to convince some people that it understands concepts and that it is self-aware; but if there were some magical means of detecting sapience, this detector would rank the AI as a "negative", as alive as an electrified rock.

That also brings up questions of what "understanding" really is. The reason I understand what a cat is is that I've experienced cats many times, using all my senses to remember what they are, from what they look like to what words describe them. If a sufficiently advanced natural-language generator were trained on 1 petabyte of media covering literally everything there is to know about the household feline, would it have the same understanding? Well, actually, no. If it's only ever trained on that 1 petabyte of media, it will know everything about cats (and whatever else appeared in that media in large enough quantities), but I have quite a few more "tools" in my neurological repertoire that give me an enormous edge over it, such as unrelated life experiences that I can still conceivably relate to cats, as well as actual sensory experiences of cats that physically can't be replicated in pure media (though data from tactile sensors might get around this).

 

This and more are some issues that make it difficult for me to believe that a bundled network will be anything more than zombie AGI. But for most people, that's good enough. If I had a zombie AGI to use right now, I personally wouldn't care that it's not the "real" thing as long as it can accomplish the same tasks I need it to.


  • Hyndal_Halcyon likes this



#5
tomasth

    Member

  • Members
  • 243 posts

What about people who don't share those life experiences or have never had a cat? Are they just like that system? And the cat itself: does it understand cats? (But not cephalopods?)

 

What animals other than humans (all of them?) count as AGI, or at least as something more than mere ANI?



#6
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,771 posts
  • Location: New Orleans, LA

What about people who don't share those life experiences or have never had a cat? Are they just like that system?

Not quite, because they can at least conceptualize what a cat might be like by using other experiences. I said something similar about media synthesis elsewhere:

 

 

To reduce it to its most fundamental ingredients: Imagination = experience + abstraction + prediction. To get creativity, you need only add “drive”. Presuming that we fail to create artificial general intelligence in the next ten years (an easy thing to assume because it’s unlikely we will achieve fully generalized AI even in the next thirty), we still possess computers capable of the former three ingredients.
Someone who lives on a flat island and who has never seen a mountain before can learn to picture what one might be by using what they know of rocks and cumulonimbus clouds, making an abstract guess to cross the two, and then predicting what such a “rock cloud” might look like. This is the root of imagination.

 

 

The same principles apply here. You might never have even heard the word "cat" in your life, but with enough analogies and descriptions based on things you have experienced (and you're experiencing new things at every waking and sleeping moment of your life), you can at least get a rough understanding. If you still never see a cat going forward, it will remain hard to understand what a cat specifically is, but its fundamental components (a furry, tailed mammal that makes a high-pitched noise) are things you have experienced before and can, thus, understand on a deep level. Even if you've never touched something furry, you've probably felt your own hair or something soft. If you've never heard a meow or mew, you've probably heard someone or something make a similar sound (or you could take a largely unrelated sound and imagine "pitching it up" or applying some other effect).

 

You'd have to be deaf, blind, mute, and paralyzed, with total somatosensory loss and no other working senses, to lack that kind of understanding of things.




#7
Casey

    Member

  • Members
  • 679 posts

Not to sound like a stereotypical starry-eyed Singularitarian or anything, but I think the estimates from data scientists that AGI won't exist until the 2060s are way too conservative. I don't want to lean on the "but exponential progress!!!" crutch too much, but I think that assuming human-level intelligence will take a bare minimum of 40-something years to achieve requires you to believe that progress will indefinitely remain at 2010s levels. (Actually, I wouldn't even go that far given the regularity of achievements and new milestones reached, but...)

 

With few exceptions, I think the 2060s are too far out there to make accurate predictions about, especially for a field that moves as quickly as AI. I don't think anyone really has any idea whether 2040s technology will be able to achieve the goal by 2050, or whether 2030s AI tech can do so by 2040. 

 

For my part, I think that AI so advanced that it doesn't particularly matter whether it qualifies as human intelligence in every single way (if not literal AGI) is a 2020s possibility and a 2030s probability. I don't think it will take until the 2060s unless every AI scientist and researcher in the world decides to take a break for about 20 years.



#8
starspawn0

    Member

  • Members
  • 1,319 posts
Part of the problem with understanding predictions is knowing exactly what the person means by a term like AGI. If one is asking whether a machine could pass a Turing Test fairly, say after a 30-minute or hour-long conversation, that's a very different thing from imitating the full capabilities of biological intelligence. Biological intelligence includes things like the ability to smoothly transition between short-term and long-term memory; skill acquisition, for example, requires this. When you learn a new skill, you start out shaky and unconfident and get things wrong; eventually, it gets baked into your long-term procedural knowledge. Another thing not required to pass a Turing Test, but which we expect biological intelligences to do, is to coordinate muscles and move a body the way a human does in the real world.

Some people, when they hear "AGI", think you mean to include all these different things. However, passing a Turing Test and radically transforming the world by automating most jobs away only requires the ability to train the machine to acquire a fair number of skills, and then to smoothly improvise by combining them together.

Actually, there are a few people I've noticed on Twitter who are a little upset that machines can be trained well enough that you can't tell what their true intelligence is. So they are creating new tests to try to get at these core capabilities of intelligence. I suspect that this program will quickly degenerate into a subset of the field of Computational Complexity, and that at the end of that rainbow they will discover that "brute force" is almost the only thing you can do, and that that is what biological intelligence does. There isn't any secret sauce.
  • Casey, SkyHize and Alislaws like this

#9
Alislaws

    Democratic Socialist Materialist

  • Members
  • 2,050 posts
  • Location: London

 

"There isn't any secret sauce".

Should be a T-Shirt* or a title for a book. 

 

*(For the futurists, AI researchers, and neuroscientists who hold this view. It would likely be pretty niche; def not a best-selling t-shirt.)






