
#1
Yuli Ban
    Nadsat Brat
  • Moderators
  • 17,070 posts
  • Location: Anur Margidda

Types of Artificial Intelligence

 
Let's talk about AI. I've decided to use the terms 'narrow and general' and 'weak and strong' as modifiers in their own right. Normally, weak AI is the same thing as narrow AI, and strong AI is the same thing as general AI. But I've mentioned elsewhere on the Internet that there must surely be such a thing as 'less-narrow AI': AI that's more general than the likes of, say, Siri, but not quite as strong as the likes of HAL-9000.
So my system is this:

  • Weak Narrow AI
  • Strong Narrow AI
  • Weak General AI
  • Strong General AI
  • Super AI

Weak narrow AI (WNAI) is AI that’s almost indistinguishable from analog mechanical systems. Go to the local dollar store and buy a $1 calculator. That calculator possesses WNAI. Start your computer. All the little algorithms that keep your OS and all the apps running are WNAI. This sort of AI cannot improve upon itself meaningfully, even if it were programmed to do so. And that’s the keyword— “programmed.” You need programmers to define every little thing a WNAI can possibly do.

We don’t call WNAI “AI” anymore, as per the AI Effect. You ever notice when there’s a big news story involving AI, there’s always a comment saying “This isn’t AI; it’s just [insert comp-sci buzzword].” Problem being, it is AI. It’s just not AGI.
I didn't mention analog mechanics in passing: this form of AI is about as mechanical as you can possibly get, and it's actually better that way. Even if your dollar-store calculator were an artificial superintelligence, what do you need it to do? Calculate math problems. Thus, the calculator's supreme intellect would go forever untapped, as you'd instead use it to factor binomials. And I don't need ASI to run a Word document. Maybe ASI would be useful for making sure the words I write are the best they could possibly be, but actually running the application is most efficiently done with WNAI. It would be like lighting a campfire with the Tsar Bomba.
Some have said that "simple computation" shouldn't be considered AI, but I think it should. It's simply "very" weak narrow AI. Calculations are the absolute bottom tier of artificial intelligence, just as the firing of synapses is the absolute bottom tier of biological intelligence.
WNAI can basically do one thing really well, but it cannot learn to do it any better without a human programmer at the helm manually updating it regularly.
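To make "you need programmers to define every little thing" concrete, here's a toy sketch in Python (my own illustration, not any real product's code). Every behavior is spelled out in advance, and nothing improves with use:

    # A toy WNAI: pure hardcoded rules, decided entirely by a programmer.
    def calculator(a: float, op: str, b: float) -> float:
        ops = {
            "+": lambda x, y: x + y,
            "-": lambda x, y: x - y,
            "*": lambda x, y: x * y,
            "/": lambda x, y: x / y,
        }
        if op not in ops:
            # It can't learn a new operation; a human has to add it.
            raise ValueError(f"unsupported operation: {op}")
        return ops[op](a, b)

    print(calculator(3, "+", 4))  # 7 -- and it will never get better at this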

 

Strong narrow AI (SNAI) is AI that's capable of learning certain things within its programmed field. This is where machine learning comes in. This is the likes of Siri, Cortana, Alexa, Watson, some chatbots, and higher-order game AI, where the algorithms can pick up information from their inputs and learn to create new outputs. Again, it's a very limited form of learning, but learning is happening in some form. The AI isn't just acting for humans; it's reacting to us as well, and in ways we can understand. SNAI may seem impressive at times, but it's always a ruse. Siri might seem smart, for example, but it's also easy to find its limits, because it's an AI meant to be a personal virtual assistant, not your digital waifu à la Her. Siri can recognize speech, but it can't deeply understand it, and it lacks the life experiences to make meaningful talk anyhow. Siri might recognize some of your favorite bands or tell a joke, but it can't write a comedic novel or genuinely have a favorite band of its own. It was programmed to know these things, based on your own preferences. Even if Siri says it's "not an AI", it's only using preprogrammed responses to say so.
SNAI can basically do one thing really well and can learn to do that thing even better over time, but it’s still highly limited.
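Here's a toy sketch of that kind of in-domain learning (my own illustration; no real assistant works this way). The system measurably improves at its one fixed task from data, but the task itself never changes:

    import random

    # A toy SNAI: it learns, but only inside the one task it was built for
    # (here, fitting the fixed relationship y = 2x + 1 from examples).
    w, b = 0.0, 0.0
    for _ in range(2000):
        x = random.uniform(-1, 1)
        y_true = 2 * x + 1
        err = (w * x + b) - y_true
        w -= 0.1 * err * x   # gradient step for squared error
        b -= 0.1 * err

    print(round(w, 2), round(b, 2))  # ~2.0 and ~1.0: better at its task, nothing else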

 

Weak general AI (WGAI) is AI that's capable of learning a wide swath of things, even things it wasn't necessarily programmed to learn. It can then use these learned experiences to come up with creative solutions that can flummox even trained professional humans. Basically, it's as intelligent as a certain creature, maybe a worm or even a mouse, but it's nowhere near intelligent enough to enhance itself meaningfully. It may be par-human or even superhuman in some regards, but it's sub-human in others. This is what we see with the likes of DeepMind: its basic algorithm can learn to do just about anything, but it's not nearly as intelligent as a human being. In fact, DeepMind wasn't even in this category until they began using the differentiable neural computer (DNC), because before that their systems could not retain previously learned information. Because they could not do something so basic, their work was squarely strong narrow AI until just a couple of months ago.
Being able to recall previously learned information and apply it to new and different tasks is a fundamental aspect of intelligence. Once AI achieves this, it will actually achieve a modicum of what even the most cynical can consider “intelligence.”
DeepMind’s yet to show off the DNC in any meaningful way, but let’s say that, in 2017, they unveil a virtual assistant to rival Siri and replace Google Now. On the surface, this VA seems completely identical to all others. Plus, it’s a cool chatbot. Quickly, however, you discover its limits— or, should I say, its lack thereof. I ask it to generate a recipe on how to bake a cake. It learns from the Internet, but it doesn’t actually pull up any particular article— it completely generates its own recipe, using logic to deduce what particular steps should be followed and in what order. That’s nice— now, can it do the same for brownies?
If it has to completely relearn all of the tasks just to figure this out, it’s still strong narrow AI. If it draws upon what it did with cakes and figures out how to apply these techniques to brownies, it’s weak general AI. Because let’s face it— cakes and brownies aren’t all that different, and when you get ready to prepare them, you draw upon the same pool of skills. However, there are clear differences in their preparation. It’s a very simple difference— not something like “master Atari Breakout; now master Dark Souls; now climb Mount Everest.” But it’s still meaningfully different.
WGAI can basically do many things really well and can learn to do them even better over time, but it cannot meaningfully augment itself. That this is the remaining limit is itself impressive, because it signals that we're right on the cusp of strong AGI, and the only things we lack are the proper power and training.
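To make the cake-to-brownies test concrete, here's a toy sketch with made-up numbers (my own illustration, nothing like DeepMind's actual code). "Transfer" here just means starting task B from task A's learned parameters instead of from scratch:

    import random

    def train(w, target, steps):
        # Toy gradient descent: nudge w toward the task's target slope.
        for _ in range(steps):
            x = random.uniform(-1, 1)
            err = (w - target) * x
            w -= 0.1 * err * x
        return w

    w_cakes = train(0.0, 5.0, 500)        # master "cakes" (target slope 5.0)
    w_scratch = train(0.0, 5.5, 20)       # learn "brownies" from scratch, briefly
    w_transfer = train(w_cakes, 5.5, 20)  # learn "brownies" starting from cakes

    # Transfer wins because cakes and brownies are close in "skill space".
    print(abs(w_transfer - 5.5) < abs(w_scratch - 5.5))  # True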

 

Strong general AI (SGAI) is AI that’s capable of learning anything, even things it wasn’t programmed to learn, and is as intellectually capable as a healthy human being. This is what most people think of when they imagine “AI”. At least, it’s either this or ASI.
Right now, we have no analog to such a creation. Of course, saying that we never will would be like sitting in the year 1816 and debating whether SNAI is possible. The biggest limiting factor in the creation of SGAI right now is our lack of WGAI. As I said, we've only just created WGAI, and there's been no real public testing of it yet. Not to mention that the difference between WGAI and SGAI is vast, despite seemingly simple differences between the two. WGAI is us guessing what's going on in the brain and trying to match some aspects of it with code. SGAI is us building a whole digital brain. Then there's the problem of embodied cognition: without a body, any AI would be detached from nearly all the experiences that we humans take for granted. It's impossible for an AI to be a superhuman cook without ever preparing or tasting food itself. You'd never trust a cook who calls himself world-class, only to find out he's made just five unique dishes and has never left his house. For AI to truly make the leap from WGAI to SGAI, it'd need some way to experience life as we do. It doesn't need to live 70 years in a weak, fleshy body; it could replicate a lifetime of experiences in a week if need be, given enough bodies. But having sensory experiences helps to deepen its intelligence.

 

Super AI or Artificial Superintelligence (SAI or ASI) is the next level beyond that, where AI has become so intellectually capable as to be beyond the abilities of any human being.
The thing to remember about this, however, is that it’s actually quite easy to create ASI if you can already create SGAI. And why? Because a computer that’s as intellectually capable as a human being is already superior to a human being. This is a strange, almost Orwellian case where 0=1, and it’s because of the mind-body difference.
Imagine you had the equivalent of a human brain in a rock, and then you also had a human. Which one of those two would be at a disadvantage? The human-level rock. And why? Because even though it’s as intelligent as the human, it can’t actually act upon its intelligence. It’s a goddamn rock. It has no eyes, no mouth, no arms, no legs, no ears, nothing.
That’s sort of like the difference between SGAI and a human. I, as a human, am limited to this one singular wimpy 5’8″ primate body. Even if I had neural augmentations, my body would still limit my brain. My ligaments and muscles can only move so fast, for example. And even if I got a completely synthetic body, I’d still just have one body.
An AI could potentially have millions. If not much, much more. Bodies that aren’t limited to any one form.
Basically, the moment you create SGAI is the moment you create ASI.
From that bit of information, you can begin to understand what AI will be capable of achieving.

Recap:
“Simple” Computation = Weak Narrow Artificial Intelligence. These are your algorithms that run your basic programs. Even a toddler could create WNAI.

Machine learning and various individual neural networks = Strong Narrow Artificial Intelligence. These are your personal assistants, your home systems, your chatbots, and your victorious game-mastering AI.

Deep unsupervised reinforcement learning + differentiable spiked recurrent progressive neural networks = Weak General Artificial Intelligence. All of those buzzwords come together to create a system that can learn from any input and give you an output without any preprogramming. 

All of the above, plus embodied cognition, meta neural networks, and a master neural network = Strong General Artificial Intelligence. AGI is a recreation of human intelligence. This doesn't mean it's the exact same as Bob from down the street or Li over in Hong Kong; it means it can achieve any intellectual feat that a human can, including creatively coming up with solutions to problems as well as or better than any human. It has sapience. SGAI may be very humanlike, but it's ultimately another sapient form of life all its own.

 

All of the above, plus recursive self-improvement = Artificial Superintelligence. ASI is beyond human intellect, no matter how many brains you get. It's fundamentally different from the likes of Einstein or Euler. By the very nature of digital computing, the first SGAI will also be the first ASI.

 

 

Edit: 

I mentioned in a status update a few days back that I revised this list a bit. What is that revision?

 

Thanks be unto AlphaGo, for it is due to it that I've decided to bunch together weak and strong narrow AI. And why? After observing AlphaGo in greater detail and doing more reading on machine learning, I realized that what I called "strong" narrow AI at first is not that much different from what I considered "weak". There are very narrow parameters for learning set down even for the likes of Siri.

 

"Strong narrow AI" requires a general learning network. Functionally, it's identical to weak narrow AI. The difference is that it can be trained extensively to master specialized tasks just as if it were a WNAI, but you could use the code to train it on something else on a new network. This is what we saw with AlphaGo— the basic network behind it is the same that DeepMind uses for its other tasks like mastering Atari games. But because it has no way of transferring what it's learned or utilizing that knowledge intelligently (i.e. through a meta-neural network system), it's not AGI. Not even WGAI. You can't teach AlphaGo how to play Atari games, for example. Thus, it's a "strong narrow AI". The parameters for learning are much wider than ever before.

I now know I was wrong when I said that DeepMind's network can't transfer knowledge, but that's still a very new development, and we have yet to see it in action.


Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#2
matthewpapa
    Member
  • Members
  • 98 posts
  • Location: USA

Makes sense, thanks for posting.

 

Those personal assistants (SNAI) seem damn near useless to me for practical purposes, unless I'm in the car or otherwise unable to type a query and read the results.

 

The average person thinks these assistants are a lot "smarter" than they really are. They fail to realize that this is only because the assistants know a lot about you (prior history, scanned emails, location data, etc.) from the data they maintain, or because they're good at systematically parsing the first few Google search results. But they are incapable of producing any novel or unique solutions themselves.

 

I am EAGERLY awaiting WGAI. Only then will the revolution really be here. For now we are just getting a little taste. Even then, I wonder if differentiable neural computing is good enough for useful and practical WGAI. I hope so...



#3
FrogCAT
    Member
  • Members
  • 74 posts
  • Location: A Virtual Worlf

Awesome, this will help loads when trying to explain AI to my less than interested family members.

Also, I'm taller than you? That's weird.


"That's me inside your head."   "I wanna need your love…  I’m a broken rose,   I wanna need your love…"   "And when we fall, we will fall together."


#4
KingJames69
    Member
  • Members
  • 14 posts

We have become acutely aware of the power that WGAI holds. The human brain has a huge capacity for information storage for its size. The computer has an enormous capacity for calculation and modeling. When we are able to produce something that can be as vast and adaptable as the human mind, and as fast as a computer, we will see the end of human drudgery and the beginning of the next phase of our evolution. For this AI to work, however, we must build a robust infrastructure in order to realize its awesome potential. We need to allow the AI to permeate all aspects of the world, not just the traditionally tech-heavy areas of our lives.



#5
Johnny Bahama
    New Member
  • Members
  • 6 posts

What I'm curious about is the timeline for reaching each level. I've heard a lot of different opinions, but hopefully AGI happens within the next 100 years. I mean hell, the rate at which we adapt to new technologies is so quick, we probably won't even notice when they start to roll out the weak AGIs for consumers. What do you guys think? Is the creation of genuine intelligence something we can grasp in the near future? I sure as hell hope so. It seems like we're at a turning point in humanity, and I for one wanna make it over the hump and see what AI can bring.



#6
wjfox
    Administrator
  • Administrators
  • 7,939 posts
  • Location: London

Johnny Bahama said: "What I'm curious about is the timeline for reaching each level. [...] Is the creation of genuine intelligence something we can grasp in the near future?"

 

We're going to reverse-engineer the human brain. The first strong AGI will likely emerge before 2030.

 

To give you an idea of the progress being made in simulation complexity:

 

2005 = A single neuron model

2008 = Neocortical column with 10,000 neurons

2011 = Cortical mesocircuit featuring 100 neocortical columns

2014 = Rat brain with 100 mesocircuits



#7
Johnny Bahama
    New Member
  • Members
  • 6 posts

 

wjfox said: "We're going to reverse-engineer the human brain. The first strong AGI will likely emerge before 2030. [...]"

 

Makes sense, I'll be looking forward to it. I can't even imagine intelligence with the memory and speed of computing mixed with the complexity and learning ability of the human brain. The world would change overnight.



#8
Yuli Ban
    Nadsat Brat
  • Moderators
  • 17,070 posts
  • Location: Anur Margidda

I mentioned in a status update a few days back that I revised this list a bit. What is that revision? See the edit appended to the first post above; I won't repeat it all here.



#9
Yuli Ban
    Nadsat Brat
  • Moderators
  • 17,070 posts
  • Location: Anur Margidda

 

 

Johnny Bahama said: "I can't even imagine intelligence with the memory and speed of computing mixed with the complexity and learning ability of the human brain. The world would change overnight."

 

Oh, then you're going to love this bit of news:

 

IBM’s Artificial Brain Has Grown From 256 Neurons to 64 Million Neurons in 6 Years – 10 Billion Projected by 2020



#10
Alislaws
    Democratic Socialist Materialist
  • Members
  • 691 posts
  • Location: London

 

Assuming doubling every 3 years (to allow for some slowdown as we move past Moore's law and into new computing paradigms), and starting from the projected 10 billion neurons in 2020:

That results in approx 160 billion neurons in 2032.

So, if there are approx 100 billion neurons in the human brain...
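A quick sanity check of that doubling math (taking the headline's projected 10 billion neurons in 2020 as the starting point):

    # Double the neuron count every 3 years, starting from 10 billion in 2020.
    neurons, year = 10e9, 2020
    while year < 2032:
        neurons *= 2
        year += 3

    print(year, f"{neurons:,.0f}")  # 2032 160,000,000,000 -- approx 160 billion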



#11
LWFlouisa
    Member
  • Members
  • 95 posts
  • Location: NashChat

This was the other thread that tempted me to join. Would something like what I'm programming fall outside of AI? One example. A file browser that picks from a random emotion with a matching catch phrase tailored to whatever file you're wanting browse.

 

Still having trouble installing PocketSphinx.

 

Another example: an AI that's supposed to be a clone of your personality, and you travel in a virtual world to 1. browse Onion browsers, 2. browse the Luna network offline using prefabulated sneakernet HTML pages. Organization of the files is based around the Cerebrum and Cerebellum structure, and I'll even find a way to use the brain stem as a relay point for what I call a "Brain Browser".


Cerebrum Cerebellum -- Speculative Non-Fiction -- Writing

Luna Network -- Nodal Sneaker Network -- Programming

Published Works: https://www.wattpad.com/432077022-tevun-krus-44-sword-planet-the-intergalactic-heads


#12
_SputnicK_
    Member
  • Members
  • 61 posts
  • Location: USA

LWFlouisa said: "A file browser that picks from a random emotion with a matching catch phrase tailored to whatever file you're wanting browse."

What?


Artificial intelligence will reach human levels by around 2029.

Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.
-Ray Kurzweil


#13
garry
    New Member
  • Members
  • 2 posts
Types of artificial intelligence (in short):
There are several types of artificial intelligence, and one can single out three main categories:
1) Artificial Narrow Intelligence (ANI). This is AI that specializes in one particular field. For example, it can beat the world chess champion at chess, but that is all it is capable of.
2) Artificial General Intelligence (AGI). This AI represents a computer with intelligence similar to the human brain. It can perform any intellectual task: it can reason about tasks, schedule workloads, solve problems, think abstractly, compare ideas, learn quickly, and use accumulated experience.
3) Artificial Superintelligence (ASI). This is an AI system that surpasses the human brain in almost every field of knowledge, including scientific invention, general knowledge, and social skills.
While AGI is still far from us, we can already say a lot about ANI.


#14
Yuli Ban
    Nadsat Brat
  • Moderators
  • 17,070 posts
  • Location: Anur Margidda

I've been meaning to redo this list. It's really not good enough at all because I realized that things I listed under "weak-narrow AI" still have superhuman intelligence in their domains. And I redefined strong-narrow AI as meaning "generalized network that learns like a narrow AI" when strength should really be about comparisons to human intelligence.

 

 

So I decided to add a new category: "Hybrid AI". Let me see if I can redesign my list.

 

 

Weak and Strong refers to the strength in comparison to Human intelligence. We use humans because we are the only sapient tool-making civilized animals we know, and we're also the ones creating AI in the first place. By itself, a "strong AI" shouldn't mean "human-level AI". 

 

Narrow and General refers to the range of things a system network can do or learn. "Narrow" means it's constructed for a single purpose and cannot be repurposed into anything else. "General" means it's multipurpose on a wide scale.

 

Hybrid's the one I've been meaning to introduce. A hybrid network uses general techniques for narrow purposes, just like DeepMind's system. DeepMind's network is allegedly a general-purpose learning system. "General-purpose" does not necessarily equal "general": there hasn't been some sort of geodesic wave of nanocomputronium spreading from DeepMind's headquarters or anything like that. It just means that they have an algorithm that can be trained on just about anything. But once it's trained on a task, that's its domain, and that's it.

 

DeepMind's network has mastered Atari games as well as Go, but you can't make either of them play Tic-Tac-Toe. Not without loading up a new computer with the algorithm and teaching it, step-by-step, how to play and master Tic-Tac-Toe. And once you have this DeepTicTacToe playing machine, you can't get it to play Atari games or Go. It only knows how to play Tic-Tac-Toe.

 

 

Weak Narrow AI = narrow AI that is sub-human in its ability to carry out tasks. It ranges from completely worthless to job-stealing in terms of capability. But it never reaches par-human or super-human levels. 

Strong Narrow AI = narrow AI that is at least par-human. 

 

You hardcode narrow AI. It's like using concrete. You can mold it into what you want beforehand, but you have to wait for it to dry before you can actually use it. And once it dries, you're stuck with that shape. If you want a concrete wheel, you then set out to create a wheel made of concrete. And once you've created it, you can't refashion it into anything else without fundamentally changing what it is.

Likewise, once you code narrow AI, you're stuck with what you have. Only you can make improvements. And by "you", I mean programmers in general. The AI itself can't do anything to alter its programming unless it were programmed to do so, and even if it were programmed for such, it would only follow the rules the programmer set down. Thus, it can never consciously make itself smarter, even if it's programmed to improve itself. That's the keyword: programmed. Even when you utilize machine learning, narrow AI can only learn within pre-set parameters. 

 

Some examples of narrow AI:

 

  • Personal assistants like Siri, Cortana, Alexa, etc.
  • Wolfram Alpha
  • IBM Watson
  • "Smart filter" tools
  • Spam blockers
  • Speech-to-text recognition
  • Natural language processing
  • Google's RankBrain; most search engines in general
  • IBM Deep Blue
  • Video game AI
  • Just about anything marketed as "AI" today, in fact.

 

Weak Hybrid AI = generalized AI that's specialized for a single task, but is sub-human in its power. It can't really transfer what it learned on one task to help accomplish another, different task, even a similar one. But it's also possible that you could get transfer learning down pat, alongside memory recall (progressive neural networks, differentiable neural computers), and still have hybrid AI. In essence, it would be general AI in only a single domain (e.g. it can learn how to bake cakes and brownies and various kinds of foods, but it can't learn how to wash dishes).

Strong Hybrid AI = generalized AI that's specialized for a single task, but is par-human or superhuman in its power. DeepMind got here first, as far as I know. 

 

Unlike Narrow AI, Hybrid AI can be seen as being made of construction-strength Play-Doh. You have a formless mass that can essentially be formed into anything. Any shape. Once you have that shape created, that shape is set... until you crush it back into a formless mass. And if needed, you can tweak the shape even when it's fully finished since Play-Doh is so malleable. 

DeepMind, presumably OpenAI, and Baidu are here, at the bleeding edge of what's possible with artificial intelligence. 

 

Weak General AI = general AI that's capable of learning a wide range of tasks, but does so in a manner that's sub-human. It can learn how to drive a car, make a bowl of soup, write an essay, and judge photos, but that's roughly it, or it can only do these things poorly. Or it can do one or a few of those tasks really well if you focus it on them, but it struggles with everything else. It's not self-aware or conscious or anything. And even with general learning capabilities, it'll still be very hard to get it to improve itself meaningfully. This is probably the point at which the infamous "Paperclip Maximizer" is most possible, going only by the stated terms of the scenario. Said AI still needs the intelligence to understand how to create paperclips, after all, as well as the ability to understand that humans could get in the way of making as many paperclips as possible. But it's too stupid to understand that if it turns the universe into paperclips, it will have no one to use them. That sounds like WGAI.

Strong General AI = general AI that's capable of learning an unlimited range of tasks, and does so in a way that's par-human and super-human to an extent. This is also likely where you see artificial self-awareness and artificial consciousness. Very esoteric things, almost New Age in a way. In being par-human, it has a similar range of emotional intelligence as well. At the very least, it can simulate said emotions and simulate things like empathy and hatred. It should be noted that any silicon computer that possesses SGAI is, at the same time, superintelligent compared to humans. This is due to basic biological differences (SGAI will have no biology, while we are limited by ours). Even an electronic silicon computer passes signals at a large fraction of the speed of light, thanks to the sheer speed of electrons, while our chemical brain sends signals at a maximum of around 200 miles per hour. The difference grows even starker with photonic computers built from superior materials (e.g. graphene), and becomes all but impossible for biological humans to overcome with quantum computers.
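For a rough sense of that gap (my own order-of-magnitude figures, not from any source in this thread): fast myelinated axons conduct at around 120 m/s, while an electrical signal on a chip propagates at a sizable fraction of the speed of light:

    # Order-of-magnitude comparison; both figures are illustrative assumptions.
    NEURON_MPS = 120                      # fast myelinated axon (~270 mph)
    CHIP_SIGNAL_MPS = 0.5 * 299_792_458   # assume signals at ~0.5c on-chip

    print(f"~{CHIP_SIGNAL_MPS / NEURON_MPS:,.0f}x faster")  # ~1,249,135x faster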

 

The difference between Weak and Strong General AI is greater than the difference between Weak-Narrow AI and Weak-General AI. This is because of the sheer amount of power needed to turn a general AI network into the equivalent of a human-level intelligence. 

When we develop general-purpose utility and service robots, most will almost certainly be of the WGAI variety. SGAI is too resource-intensive and has too few utilitarian uses outside of a few special tasks (i.e. "human-centric" jobs and philosophical/scientific fields).

 

Artificial Superintelligence = general AI of an intelligence so far beyond human capabilities that no human alive, or theoretically possible to live (including supersavants and hypergeniuses of impossible standard deviations beyond the mean), can come close to rivaling its mind.



#15
caltrek
    Member
  • Members
  • 5,174 posts

 

 

Yuli Ban said: "Artificial Superintelligence = general AI of an intelligence so far beyond human capabilities that no human alive, or theoretically possible to live (including supersavants and hypergeniuses of impossible standard deviations beyond the mean), can come close to rivaling its mind."

 

This seems like the holy grail for cutting edge AI researchers.

 

Still, this observation does give me the opportunity to repeat something I have written in the past in other threads. That is to say, I think it is a mistake to suppose that Artificial Superintelligence will automatically arrive at a "higher" level of morality.

 

If one is really and truly atheist, then one realizes there is no omniscient and loving God looking over us all.  The natural world is indifferent to the plight of humans.  If you survive and reproduce, then you are simply replaced by a new generation.  If your species dies off, well the natural world does not "care". 

 

 So, a "Superintelligence"  must be given certain commands that "narrow" its focus.  Things like "please do not implement plots to destroy all of humanity."  

 

"Strong narrow" intelligence can be very good at producing models of how the present climate is likely to evolve.  Of course, as we are learning the hard way, humans may reject a course of action that the results of such an applied "narrow" intelligence might produce.

 

Of course, one solution that might immediately leap to mind is "apply strong narrow intelligence to persuading humans to take a rational course of action."

 

An obvious solution.  However, what this formulation misses is the challenge posed by folks like Karl Marx and his predecessors.   That is, that folks who "own the means of production" - in this case such a super AI - will have a natural advantage in promoting "their philosophy".  So an AI "superintelligence" is in that sense just another iteration down a long and winding path.

 

Now, an AI superintelligence  might "persuade" its owner/master to take a more "logical" and "ethical" path. For that to happen:

 

  • the owners of said AI need to be open to that possibility
  • humans need to know why the AI is suggesting what it is suggesting.  That is, what programmed goal or "question" it is trying to answer.  

A part of that also goes back to what was once a tired and worn-out cliché but now seems half forgotten:

 

Garbage In, Garbage Out. Also known as GIGO.

 

So, a lot also depends upon the accuracy of information that AI is "fed".  Bias on the part of those responsible for feeding information to the AI in question can thus produce garbage. A very dangerous form of garbage at that.

 

Any thoughts on this line of reasoning?



The principles of justice define an appropriate path between dogmatism and intolerance on the one side, and a reductionism which regards religion and morality as mere preferences on the other.   - John Rawls





