Welcome to FutureTimeline.forum

Types of Artificial Intelligence: Redux

artificial intelligence, machine learning, narrow AI, general AI, expert AI, deep learning, Singularity, artificial superintelligence, AI, DeepMind

14 replies to this topic

#1
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 20,624 posts
  • Location: New Orleans, LA

Artificial Intelligence: A Summary of Strength and Architecture

For some time now, there has been a post floating around the internet detailing multiple “types” of artificial intelligence, purportedly written by someone named “Yuli Ban”. If you see this post, know that it wasn’t written by me at all, absolutely not; I take no responsibility for the cringey contents of that post, and you are likely remembering something that never existed, or perhaps it was written by my evil twin, Tali.

In all seriousness, I’ve been meaning to update that post for a while now thanks to a greater understanding of how AI works. I recall mentioning how it was a smorgasbord of buzzwords without much meaning, written by someone in 2016 with no experience in AI whatsoever. This one, I hope, will prove more useful.

Types of Artificial Intelligence: Redux

Artificial intelligence has a problem: no one can precisely tell you what it is supposed to be. This makes discussing its future difficult. Current machine learning methods are impressive by the standards of what has come before, and certainly we can give various systems and networks enough power to rival and exceed human capabilities in narrow areas. The contention is whether or not these networks qualify as “artificial intelligence”.

My personal definition of artificial intelligence is a controversial one, as I am partial to lumping even basic software calculations under the umbrella of “AI”. This is because there are essentially two separate kinds of “artificial intelligence”— there is the field of artificial intelligence research, which is a branch of computer science, and there are the popular connotations of artificial intelligence. AI is popularly imagined as “computers with brains of varying intelligence”.

Business rule management systems are not commonly considered to be “artificial intelligence” in the popular imagination. Indeed, the very name conjures a sort of gunmetal-boring corporate software model. Yet BRMS software is one of the most widely commercialized forms of AI, dating back to the late ’80s.

If we limit all AI to “computers capable of creative thinking”, even many classic sci-fi depictions of AI would not qualify. Yet if my terrifying and anarchic definition became dominant, then we would have to presume that the Sumerians created the first AIs when they invented the abacus.

This is one reason why the original post on the various types of AI doesn’t work. But there are plenty more.

 

Another bottleneck in understanding the future of AI research is our limited imaginings of artificial general intelligence— a feat of engineering considered equal to the creation of practical nuclear fusion, room-temperature superconductors, and metallic hydrogen. As with all of these, the possibilities are much wider than we initially conceived. Yet it’s with AGI that I feel there is a great deal of hype and misunderstanding that would be more easily turned to practical breakthroughs if there were a shift in how we perceived it.

I, for one, always found it odd that we equate “artificial general intelligence” with “human-level AI” despite the fact that every animal lifeform possesses general intelligence— yet no one seriously claims that nematodes and insects are our intellectual rivals.

“Surely,” I said as far back as 2012, “there has to be something that comes before human-level AI but is still well past what we have now.”

A related issue is that we compare and contrast narrow AI software with imagined general AI many decades hence, allowing ourselves nothing to bridge the gap. There is no ‘intermediate’ AI.

All AI is either narrow and weak or general and strong. We have no popular ideas for “narrow and strong” AI despite the fact we have developed a multitude of narrow networks that have far surpassed human capabilities. We also have no popular ideas for “general and weak” AI, which is to say an AI that is capable of generalized learning but is not as intelligent as a human. This could be for a motley variety of factors, many of them coming down to basic neuroscience— for example, something that learns on a generalized level may still lack agency.

So here is a basic rundown on my revised “map” of AI, which has three degrees: Architecture, Strength, and Ability.

 

Architecture

Architecture is defined by an AI’s structural learning capacity and is already known by the terms “narrow AI” and “general AI”. Narrow AI describes software that is designed for one task. General AI describes software that can learn to accomplish a wide variety of tasks. Of course, this is usually treated as synonymous with software that can learn to accomplish any task at the same level as a human being— though, as I’ll explain later, that isn’t necessarily the case.

I wish to add one more category: “expert AI”. Expert artificial intelligence, or artificial expert intelligence (XAI or AXI), describes artificial intelligences that are generalized across a certain area but are not fully generalized, as I’ll explain in greater detail below. You may see it as “less-narrow AI”, with computers capable of learning a variety of related narrow tasks. AXI is very likely the next major step in AI research over the next five to ten years.

 

Mechanical Calculations: These are calculators and traditional computer software. They only do calculations. Addition, subtraction, multiplication, division, etc. There is no intelligence involved. Mechanical calculations can be considered the ‘DNA’ of AI— the root from which we are able to construct intelligences, but not itself a form of AI. As aforementioned, this level starts with ancient abacuses.

Artificial Narrow Intelligence: Artificial narrow intelligence (ANI), colloquially referred to as “weak AI”, refers to software that is capable of accomplishing one singular task, whether that be through hard coding or soft learning. This describes almost all AI that currently exists, and it is also possibly the most consistently underestimated technology of the past 100 years. Just about any AI you can think of, from Siri down to motion sensors, qualifies as ANI. Once you program an ANI to do a certain task, it is locked into that task. Just as you cannot make a clock play music unless you reformat its gears for that purpose, you must reprogram an ANI if you want it to do something it was not programmed to do. This includes narrow machine learning networks that are limited to cohesive parameters. Machine learning involves using statistical techniques to refine an agent’s performance, and while this can be generalized for much more interesting uses, it is not magical and is natively a narrow field of AI.
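To make the “locked in” point concrete, here’s a minimal sketch— a hand-rolled single-neuron learner, not any real production system. A perceptron trained on one logic gate does that gate and nothing else; making it handle a different gate means retraining from scratch, exactly the clock-gears situation described above:

```python
def train_perceptron(samples, epochs=50, lr=0.1):
    """Fit a single neuron to ONE fixed task (here, a binary logic gate)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(model, x1, x2):
    w, b = model
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Each model is locked into the task it was trained on; getting the AND
# model to compute OR means retraining from scratch, not "asking" it to adapt.
and_gate = train_perceptron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])
or_gate = train_perceptron([((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)])
```

The “learning” here really is just statistical refinement of a few weights— which is the point: narrow, not magical.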

Artificial Expert Intelligence: Artificial expert intelligence (AXI), sometimes referred to as “less-narrow AI”, refers to software that is capable of accomplishing multiple tasks in a relatively narrow field. This type of AI is new, having become possible only in the past five years due to parallel computing and deep neural networks. The best example is DeepMind’s AlphaZero, which utilized a general-purpose reinforcement learning algorithm to conquer three separate board games— chess, Go, and shogi. Normally, you would require three separate networks, one for each game, but with AXI, you are able to play a wider variety of games with a single network. Thus, it is more generalized than any ANI. However, AlphaZero is not capable of playing any game. It also likely would not function if pressed to do something unrelated to game playing, such as baking a cake or business analysis. This is why it is its own category of artificial intelligence— too general for narrow AI, but too narrow for general AI. It is more akin to an expert in a particular field, knowledgeable across multiple domains without being a polymath. This is the next step of machine learning, the point at which transfer learning and deep reinforcement learning allow computers to understand certain things without being mechanically fed rules, and to expand their own hyperparameters.
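The “one algorithm, many games” idea behind AlphaZero can be illustrated with something far humbler— plain tabular Q-learning. The routine below knows nothing about any particular game; everything game-specific lives behind a tiny environment interface. (The `Corridor` and `Hopscotch` classes are toy games invented for this sketch, not anything DeepMind actually uses.)

```python
import random
from collections import defaultdict

def q_learn(env, episodes=1000, alpha=0.5, gamma=0.9, eps=0.1):
    """Generic tabular Q-learning: nothing in here is game-specific."""
    random.seed(0)
    Q = defaultdict(float)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            acts = env.actions(s)
            # Epsilon-greedy: mostly exploit, occasionally explore.
            a = (random.choice(acts) if random.random() < eps
                 else max(acts, key=lambda act: Q[(s, act)]))
            s2, r, done = env.step(s, a)
            target = r if done else r + gamma * max(Q[(s2, a2)] for a2 in env.actions(s2))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

class Corridor:
    """Toy game 1: walk right from cell 0 to the goal at cell 4."""
    def reset(self): return 0
    def actions(self, s): return [1, -1]
    def step(self, s, a):
        s2 = max(0, s + a)
        return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

class Hopscotch:
    """Toy game 2: hop +1 or +2 toward square 6; landing on square 3 loses."""
    def reset(self): return 0
    def actions(self, s): return [1, 2]
    def step(self, s, a):
        s2 = min(6, s + a)
        if s2 == 3:
            return s2, -1.0, True
        return s2, (1.0 if s2 == 6 else 0.0), s2 == 6

# The SAME routine masters both games; only the environment object changes.
corridor_Q = q_learn(Corridor())
hopscotch_Q = q_learn(Hopscotch())
```

This is the structural trick: the learning algorithm is general, while the resulting competence is still bounded by whatever environments you can plug into it— which is why AlphaZero plays board games but cannot bake a cake.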

Artificial General Intelligence: Artificial general intelligence (AGI), sometimes referred to as “strong AI”, refers to software capable of accomplishing any task, or at least any task accomplishable by biological intelligence. Currently, there are no AGI networks on Earth and we have no idea when we’ll create the first truly general-purpose artificial intelligence. However, AGI is a much greater qualitative improvement over AXI than AXI is over ANI— whereas AXI is multi-purpose, AGI is omni-purpose. Theoretically, a sufficiently advanced AGI is indistinguishable from a healthy adult human— and even this represents the lower end of the true capabilities of strong artificial intelligence.

 

Strength

Strength in AI is defined by an AI’s intellectual capacity compared to humans.

Weak Artificial Intelligence is any AI that is intellectually less capable than humans; colloquially, the term is used to describe all narrow AI.

Strong Artificial Intelligence is any AI that is intellectually as capable as or more capable than humans; colloquially, the term is used to describe all general AI.

Because of colloquial usage, “weak” and “narrow” are interchangeable terms. Likewise, “strong” and “general” are used to mean the same thing. However, as AI progresses and increasingly capable computers leave the realm of science fiction and enter reality, we are discovering that there is a spectrum of strength even within AI architectures.

For example: we used to claim that only human-level general intelligence would be capable of defeating humans at chess. Yet Deep Blue accomplished the task over twenty years ago, and no one seriously claims that we are being ruled over by superintelligent machine overlords. People said only strong AI could beat humans at Go, as well as at interpersonal game shows like Jeopardy!. Yet “weak” narrow AIs were able to trounce humans in all these tasks, and general AI is still nowhere in sight.

My belief is that nearly any task we can conceive can be accomplished by a sufficiently strong narrow intelligence, but since we conflate strong AI with general AI, we consistently blind ourselves to this truth. That’s why I’ve decided to decouple strength from architecture.

Weak Narrow Artificial Intelligence: Weak Narrow AI (WNAI) describes software that is subhuman or approaching par-human in strength in one narrow task. Most smart speakers/digital assistants like Amazon Echo and Siri occupy this stratum, as they do not possess any area of ‘smarts’ that is equal to that of humans, though their speech recognition abilities do lead us to psychologically imbue them with more intelligence than they actually possess. These are merely the most visible WNAI— most AI in the world is in this category by nature, and this will always be the case, as there is only so much intelligence you need to accomplish certain tasks. As I mentioned in the original post, you don’t need artificial superintelligence to run task manager or an industrial robot. Doing so would be like trying to light a campfire with Tsar Bomba. Interestingly, this is a lesson a lot of sci-fi overlooks due to the belief that all software in the distant future will become superintelligent, no matter how inefficient it may be.

Strong Narrow Artificial Intelligence: Strong Narrow AI (SNAI) describes software that is par-human or superhuman in strength in one narrow task. In my original post, I made the grievously idiotic mistake of conflating ‘public’ AI with SNAI, despite the fact that SNAI have essentially been around since the early 1950s— even a program that can defeat humans more than 50% of the time at tic-tac-toe can be considered a “strong narrow AI”. This is one reason why the term likely never went anywhere, as our popular idea of any strong AI requires worlds more intelligence than a tic-tac-toe lord. But strength is subjective when it comes to narrow AI. What’s strong for plastic may be incredibly weak for steel. What’s usefully strong for glass is likely far too brittle for brick. This is still true for narrow AI. Right now, SNAIs are more popularly represented by game-mastering software such as AlphaGo and IBM Watson because they require some level of proto-cognition and somewhat recognizable intellectual capability that is utterly alien compared to the likes of Bertie the Brain.
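The tic-tac-toe point is easy to make concrete. A plain minimax search— sketched below in Python; this is the textbook algorithm, not a reconstruction of any particular historical program— plays the game perfectly, and so is par-human-or-better at its one task despite being trivially simple:

```python
def winner(b):
    """Return 'X' or 'O' if someone has three in a row on board b, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for i, j, k in lines:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Perfect play: return (score, best_move). X maximizes, O minimizes."""
    w = winner(b)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, c in enumerate(b) if c == ' ']
    if not moves:
        return 0, None  # draw
    results = []
    for m in moves:
        b[m] = player
        score, _ = minimax(b, 'O' if player == 'X' else 'X')
        b[m] = ' '
        results.append((score, m))
    return max(results) if player == 'X' else min(results)
```

Played from any position, this agent never loses— “strong” in its narrow domain by the definition above, while remaining utterly incapable of anything else.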

Weak Expert Artificial Intelligence: Weak expert AI (WXAI) describes software that is subhuman or approaching par-human in strength in a field of tasks. Due to expert AI still being a novel development as of the time of writing, we don’t have many examples, and ironically one of the few examples we have is actually strong expert AI. However, I can imagine WXAI as being similar to what Google DeepMind and OpenAI are currently working on with their Atari-playing programs. DeepMind in particular uses one generalized network to play a wide variety of games, as aforementioned. And while many of them have reached par-human and superhuman levels of play, so far we have not received any word that this algorithm has achieved par-human performance across all Atari games. This would make it closer to approaching par-human strength. This becomes even more noticeable when you consider that this network’s play experience likely has not been transferred to games from more advanced consoles such as the NES and SNES.

Strong Expert Artificial Intelligence: Strong expert AI (SXAI) describes software that is par-human or superhuman in strength in a field of tasks. Currently, the best (and probably only) known example is DeepMind’s AlphaZero network. To a layman, an SXAI will likely seem indistinguishable from an AGI, though there will still be obvious parameters it cannot act beyond. This is also likely going to be a very peculiar and frightening place for AI research, an era where AIs will begin to seem too competent to control despite their actual limitations. One major consideration is that since SXAI spans a whole field of tasks, it can't be considered "strong" merely for being competent at a single task within that field. I would reckon that if it's par-human in at least 30% of its capabilities, it qualifies as SXAI.
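That 30% rule of thumb is simple enough to state as code. The function below is just my own formalization of the heuristic in this post; the task names and scores are made-up illustrations, with 1.0 meaning par-human performance:

```python
def expert_strength(task_scores, parhuman=1.0, threshold=0.30):
    """Label an expert AI 'strong' if at least `threshold` of its tasks
    are at or above par-human performance (1.0 = matches a human expert)."""
    at_par = sum(1 for score in task_scores.values() if score >= parhuman)
    return "strong" if at_par / len(task_scores) >= threshold else "weak"

# AlphaZero-style profile: superhuman at everything it can do at all.
print(expert_strength({"chess": 1.4, "go": 1.3, "shogi": 1.2}))  # -> strong

# A hypothetical Atari generalist: superhuman at a couple of games but far
# below par at most of the rest.
print(expert_strength({"pong": 1.6, "breakout": 1.8, "montezuma": 0.1,
                       "pitfall": 0.2, "frostbite": 0.3, "qbert": 0.6,
                       "seaquest": 0.4}))  # 2/7 of tasks at par -> weak
```

The interesting property is that strength is judged over the AI's own repertoire, which is why AlphaZero counts as strong despite only playing three games.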

Weak General Artificial Intelligence: Weak general AI (WGAI) describes software that is subhuman or approaching par-human in strength in general, perhaps with a stronger ability in a particular narrow field but otherwise not as strong as the human brain. Oddly enough, I’ve very rarely heard of the possibility of WGAI. If anything, it’s usually believed that the moment we create a general AI, it will rapidly evolve into a superintelligence. However, WGAI is very likely going to be a much longer-lived phenomenon than currently believed due to computational limits. WGAI is not nearly as magical as SGAI or ASI— should the OpenWorm project bear fruit, the result would be a general AI, just an extraordinarily weak one, which is what gives this term a purpose. Most robots used for automation will likely lie in this category, if not SXAI, since most tasks merely require environmental understanding and some level of creative reactivity rather than higher-order sapience.

Strong General Artificial Intelligence: Strong general AI (SGAI) describes software that is par-human or superhuman in strength across all general tasks. This is sci-fi AI, agents of such incredible intellectual power that they rival our own minds. When people ask of when “true AI” will be created, they typically mean this.

Artificial Superintelligence: Artificial superintelligence (ASI) describes a certain kind of strong general artificial intelligence, one that is so far beyond the capabilities of the human brain as to be virtually godlike. The point at which SGAI becomes ASI is a bit fuzzy, as we tend to think of the two much the same way we think of the difference between stellar-mass and supermassive black holes. My hypothesis is that SGAI can still be considered superhuman without breaking beyond theoretical human capabilities— the point at which SGAI becomes ASI is the exact point at which a computer surpasses all theoretical human capabilities. If you took our intelligence and pushed it as many standard deviations up the curve as genetically possible, you would eventually come across some limit. Biological brains are electrochemical in nature, and the fastest nerve signals travel at only around 120 meters per second. There is, in theory, a maximum human intelligence. ASI is anything beyond that point. All the heavens lie above us.

 

Ability

Ability in AI is defined by an AI’s cognitive capabilities, ranging from a complete lack of self-awareness all the way to sapience. I did not create this list, but I find it to be extremely useful for understanding the future development of artificial intelligence.

Reactive: AI that only reacts. It doesn’t remember anything; it only experiences what exists and reacts. Example: Deep Blue.

Limited Memory: This involves AI that can recall information outside of the immediate moment. Right now, this is more or less the domain of chatbots and autonomous vehicles.

Theory of Mind: This is AI that can understand the concept that there are entities other than itself, entities that can affect its own actions.

Sapience: This is AI that can understand the concept that it is an individual separate from other things, that it has a body, and that if something happens to this body, its own mind may be affected. By extension, it understands that it has its own mind. In other words, it possesses self-awareness. It is capable of reflecting on its sentience and self-awareness and can draw intelligent conclusions using this knowledge. It possesses the agency to ask why it exists— at which point it is essentially conscious.


  • zEVerzan, Hyndal_Halcyon, Alislaws and 2 others like this

And remember my friend, future events such as these will affect you in the future.


#2
rennerpetey

    Fighting Corporations since 2020

  • Members
  • 556 posts
  • Location: Not in Kansas anymore

In the Forum theme i'm in, your black text is on a dark grey background, and I can't read it.  I changed the theme to IP.Board which has a white background, but others might have issues with reading it.


John Lennon dares you to make sense of this


#3
Jakob

    Stable Genius

  • Members
  • 6,122 posts


No kidding, I thought it was completely blank and he was just using big font and newlines for emphasis.


  • rennerpetey likes this

#4
Yuli Ban

My plan to utterly confuse the forum is now complete!

 

No, actually for some reason the formatting was wonky and I conked out before I had a chance to fix it. Should be better now.


  • rennerpetey likes this



#5
Yuli Ban

No comment?




#6
Hyndal_Halcyon

    Member

  • Members
  • 90 posts


As much as I love reading about developments in AI, I think IA (intelligence augmentation) also deserves some recognition, and so far the neural lace project is the only real thing I've come across. Classifying every form of automation is possible now, and it's thanks to you, sir. But I'm still intrigued by IA. Can't seem to find threads about it here, though.


  • Yuli Ban likes this

As you can see, I'm a huge nerd who'd rather write about how we can become a Type V civilization instead of study for my final exams (gotta fix that).

But to put an end to this topic, might I say that the one and only greatest future achievement of humankind is when it finally becomes posthumankind.


#7
Yuli Ban

^ I'll be on it soon!




#8
rennerpetey


You are so thorough and straightforward that there is nothing to argue with or question. I just admire you and your ability to write so extensively on this topic.


  • Yuli Ban, BasilBerylium and Alislaws like this


#9
sophiewilson0191

    New Member

  • Members
  • 2 posts

Great article.

AI is not perfected yet but we are starting to improve.



#10
Yuli Ban

I'm wondering if I should really include AlphaZero as an example of a "strong expert AI" since it's only strong in three tasks. I suppose if strength is defined by whether or not it's par-human/superhuman in all of the tasks of which it's capable, it qualifies. The same way checkers-playing AIs from the 1970s were already very strong narrow AI.


  • Casey likes this



#11
Alpha Centari

    Member

  • Members
  • 75 posts

Very nice. Though I would like to state something. Sentience is attributed to a being or entity that is aware of its own existence, can perceive the reality in which it exists, and is capable of reacting to the environment it is currently in. Sapience (a term I haven't heard in a while) is, as stated, when a being or entity is capable of being aware not only of its own existence and consciousness but of the existence and consciousness of other individual beings and/or entities. These definitions alone lead me to theorize that sentience and sapience are interrelated. In other words, you can't be sentient and not sapient, and vice versa.

Additionally, I wonder if there are levels of sentience/sapience, much like there are levels of AI. I'm intrigued by this because when we think about ANI (Artificial Narrow Intelligence), a machine or robot with such a level of intelligence can interact with its environment and express sensation, or act according to whatever it just experienced, following the machine's program or algorithm; but it may not actually be capable of perceiving or comprehending what exactly is happening around or within the machine, why it is doing what it is doing, and why others are doing what they are doing. The only thing a machine of such intelligence is capable of perceiving is its program itself, as interpreted through something like binary code— or nothing at all if the language is an interpreted one (Python). With this in mind, I wonder if sentience/sapience is something that comes naturally/automatically upon reaching a certain level of intelligence, and whether how sentient a being or entity is depends on how highly intelligent it is.
  • Yuli Ban likes this

#12
Hyndal_Halcyon


Yes! HAHA xD It's like the ladder of self-awareness comes in levels, like a game. I'd greatly appreciate it if someone could point out where among these classifications the S-words (sapience/sentience/sophonce) rise and fall, or what the requirements might be to qualify as possessing the S-words.




#13
Hyndal_Halcyon

Also, this. Another wonderful worldbuilding project where space elves are galactic politicians with a very critical approach to the social matters of several transhuman futures.

 

The link I just found there outlines the differences of the S-words. Handy stuff.




#14
Yuli Ban

"Sapience," noun of sapient, is the ability to think, and to reason. It may not seem like much a difference, but the ability to reason is tied more closely to sapience than to sentience. Most animals are sentient, (yes, you can correctly say your dog is sentient!) but only humans are sapient.

http://www.rebekkahn...-sentience.html
(We realize now that dolphins, elephants, orangutans, and bonobos are also likely sapient).
 

 

Sentience means the ability to feel things, the ability to perceive things. Any living thing that has some degree of consciousness is sentient, including insects, lizards, dogs, dolphins and human beings. The word sentience is derived from the Latin word sentientem, which means feeling. The adjective form is sentient. The word sentience is often misused to mean a creature that thinks.
 
Sapience means the ability to think, the capacity for intelligence, the ability to acquire wisdom. The scientific name for modern man is Homo sapiens. Sapience only describes a living thing that is able to think. The word sapience is derived from the Latin word sapientia, which means intelligence or discernment. The adjective form is sapient. Note that sentience is often misused in place of the word sapience.

http://grammarist.co...ce-vs-sapience/

You see, even plants are sentient. But if you rely purely on instincts or preprogrammed responses, you lack sapience. Sapience is when you can actually reflect on sentience.


  • Hyndal_Halcyon likes this



#15
Yuli Ban

Okay, a few things:

 

1: I fused "Sentience" and "Sapience" since there was really no difference between the two and the emergence of sentience is already a pretty messy thing that likely arises somewhere around Limited Memory and Theory of Mind. Sentience, as explained above, is merely being aware of things. A human is born sentient, though it is not sapient until a few years of age. 

 

2: I'm going to add "Function". There isn't much I can do to delve in-depth into this topic since there are myriad functions of AI. But I feel that discerning the difference between function and purpose is a big deal; otherwise, people would say that Siri is technically an expert AI because it can talk to you, add reminders to your calendar, play music, etc., when an expert AI is a bit more esoteric than that.

 

3: I'm going to add a defining limit for strength in expert AI. AlphaZero qualifies as strong AXI even though it is only superhuman in three games precisely because it is superhuman in all of its capabilities. I think that if an AXI is at least parhuman in 30% of its capabilities, it qualifies as strong AXI.








