
When will AI reach human level?


13 replies to this topic

#1
Guest_Jackc_*

  • Guests

same as title



#2
rennerpetey

    To infinity, and beyond

  • Members
  • 176 posts
  • Location: Lost in the Delta Quadrant

That specific question is one of the major questions floating around on this forum, one we all debate and wish we knew the answer to.


  • Jakob and Nerd like this

Pope Francis said that atheists are still eligible to go to heaven; to return the favor, atheists said that popes are still eligible to go into a void of nothingness.


#3
BasilBerylium

    Member

  • Members
  • 550 posts
  • Location: Argentina

In 2039.



#4
Yuli Ban

    Nadsat Brat

  • Moderators
  • 17,143 posts
  • Location: Anur Margidda

If I knew the answer, the only thing stopping me from telling it to the world would be some sort of death or torture pact. Because trust me, I'd love to know.

 

At this point, it's like staring at some billowing cumulus clouds on a hot and humid summer afternoon and trying to guess if and when one of them will become a mighty storm cloud. You know it's going to happen sooner or later. It may not be today; it may not even be tomorrow. But it will happen, because that's just how it is in the summer. The thing is, you have no clue how convection works or how thermal heating creates thunderstorms, and even if you did, you couldn't point to one particular cloud and say "that cloud will be the center of the cell of storms," even if it happens to be an increasingly sizable cloud with an increasingly darkened ceiling beneath. Because clouds can dissipate. Some burst of moist air just might not coalesce right. Heating may not be sufficient in that region. Maybe the winds aloft are too strong. Or maybe you're absolutely right, and you get to watch a thunderstorm grow from a previously clear sky. And for all we know, that thunderstorm could be the one that grows so powerful as to become a low-pressure system or even a hurricane over the sea.

 

That's what it's like right now trying to guess when strong AGI will come about. We're like those country kids staring at a mostly clear sky trying to guess which one will grow into a storm. 

 

It could be that we're not that far from it at all. That we've actually plucked all of the low-hanging fruit and that the canopy isn't actually that high either, and that we could turn on the world's first AGI sometime in the next few years. It's also possible that we're as far from AGI as Charles Babbage was from quantum computers. 

 

I've since said that there are at least five different types of AI— weak narrow, strong narrow, weak general, strong general, and superintelligence. Human-level AI falls under "strong-general AI". We currently live in a world of strong-narrow AI.
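The five-tier taxonomy above can be put down as a small enum. This is a minimal sketch: the tier names come from the post itself, while the helper function and the glosses in the comments are my own illustration, not the poster's definitions.

```python
from enum import Enum

class AIType(Enum):
    """The five tiers named in the post, from weakest to strongest."""
    WEAK_NARROW = 1        # narrow task, sub-human performance
    STRONG_NARROW = 2      # narrow task, superhuman performance (the world we live in now)
    WEAK_GENERAL = 3       # broad competence, still below human level
    STRONG_GENERAL = 4     # human-level competence across domains
    SUPERINTELLIGENCE = 5  # beyond human level across domains

def is_human_level(kind: AIType) -> bool:
    """Per the post, human-level AI falls under strong general AI."""
    return kind is AIType.STRONG_GENERAL
```

The point of the ordering is the post's closing caveat: being at tier 2 says nothing about how close tier 3 is.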

 

But here's the thing: just because we've created strong narrow AI doesn't necessarily mean we're on the cusp of even weak general AI.


  • Infinite likes this
Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#5
caltrek

    Member

  • Members
  • 5,216 posts

Artificial Intelligence Is Lost in the Woods

 

https://www.technolo...t-in-the-woods/

 

Abstract:

 

I believe it is hugely unlikely, though not impossible, that a conscious mind will ever be built out of software. Even if it could be, the result (I will argue) would be fairly useless in itself. But an unconscious simulated intelligence certainly could be built out of software–and might be useful. Unfortunately, AI, cognitive science, and philosophy of mind are nowhere near knowing how to build one. They are missing the most important fact about thought: the “cognitive continuum” that connects the seemingly unconnected puzzle pieces of thinking (for example analytical thought, common sense, analogical thought, free association, creativity, hallucination). The cognitive continuum explains how all these reflect different values of one quantity or parameter that I will call “mental focus” or “concentration”–which changes over the course of a day and a lifetime.

 

Without this cognitive continuum, AI has no comprehensive view of thought: it tends to ignore some thought modes (such as free association and dreaming), is uncertain how to integrate emotion and thought, and has made strikingly little progress in understanding analogies–which seem to underlie creativity.

 

My case for the near-impossibility of conscious software minds resembles what others have said. But these are minority views. Most AI researchers and philosophers believe that conscious software minds are just around the corner. To use the standard term, most are “cognitivists.” Only a few are “anticognitivists.” I am one. In fact, I believe that the cognitivists are even wronger than their opponents usually say.

 

 

First, I cited this article as it seems relevant to the discussion at hand.

 

Second, and at the risk of derailing this thread, I was struck by the wording of the middle paragraph.  To repeat the clause that caught my eye for emphasis:

 

 

 

 AI ... has made strikingly little progress in understanding analogies–which seem to underlie creativity.

 

That comment reminds me of an otherwise unrelated theme that I have discussed regarding religion, particularly Christianity.

 

I have argued that many Christians do not understand the parables of Jesus and substitute a too literal interpretation of the Bible.

 

As a corollary to that, one can also argue that many atheists do not understand the parables of Jesus, which is one reason they are atheists.

 

In a sense, parable is a fancy word for analogy. So, the Bible can be seen as inspiring more creativity than atheists are comfortable admitting. Many atheists also love science. They appreciate the "just the facts" approach that science takes to studying the human condition. That approach can help us understand the details of how nature works. Unfortunately, it can leave us struggling to understand real-world problems and to "be creative" in addressing those problems.

 

The "just the facts" approach is also like "strong narrow AI": good at solving problems through modeling and brute-force calculation, not necessarily so good at "creativity."


The principles of justice define an appropriate path between dogmatism and intolerance on the one side, and a reductionism which regards religion and morality as mere preferences on the other.   - John Rawls


#6
TranscendingGod

    2020's the decade of our reckoning

  • Members
  • 1,551 posts
  • Location: Georgia

It already has... in certain narrow things it has indeed surpassed us. 


The growth of computation is doubly exponential.
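To see what "doubly exponential" means concretely, compare a quantity that doubles each step with one whose *exponent* doubles each step. This is a minimal illustration; the base of 2 and the step count are arbitrary choices for the demo, not Kurzweil's actual figures.

```python
# Exponential growth: f(t) = 2**t doubles each step.
# Doubly exponential growth: g(t) = 2**(2**t); its exponent doubles each step.
def exponential(t: int) -> int:
    return 2 ** t

def double_exponential(t: int) -> int:
    return 2 ** (2 ** t)

for t in range(6):
    print(t, exponential(t), double_exponential(t))
```

By t = 5 the exponential has reached only 32, while the doubly exponential has already passed four billion; that gap is the whole force of the claim.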


#7
Zaphod

    Esteemed Member

  • Members
  • 611 posts
  • Location: UK

There is one thing that has confused me about Kurzweil's singularity predictions. I take the view (as do most AI experts, as far as I am aware) that once we create AI with human-equivalent intelligence, it will be almost instantaneous that we get a superintelligence. If the AI has reached that point by recursively improving itself, with each iteration able to produce a better copy, then it stands to reason that an AI even slightly more intelligent than the world's best minds could produce an AI with even greater intelligence, and so on.

 

However, Kurzweil predicts human-level AI in 2029 and the singularity in 2045. Why this 16-year gap? I think he may be accurate with either of these predictions, but in my mind the gap between the two events would be very small.
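The recursive-improvement argument can be written as a toy model. To be clear about assumptions: the per-generation gain, the starting capability, and the target below are all made-up illustrative numbers, not anyone's estimate.

```python
def generations_until(target: float, start: float = 1.0, gain: float = 1.1) -> int:
    """Toy takeoff model: each AI generation designs a successor that is
    `gain` times as capable. Count the generations needed for capability
    to reach `target`. Constant returns to improvement are assumed."""
    capability, generations = start, 0
    while capability < target:
        capability *= gain
        generations += 1
    return generations
```

Even under these constant-returns assumptions, a 1000x capability jump at 10% per generation takes 73 design-and-build cycles; if each cycle needs new hardware rather than just new software, those cycles take real time, which is one way to motivate a gap between human-level AI and the singularity.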



#8
Yuli Ban

    Nadsat Brat

  • Moderators
  • 17,143 posts
  • Location: Anur Margidda

My explanation of it takes into account the idea that reaching human-level generality and strength doesn't necessarily mean "starting the Singularity". The first SGAI will likely not have direct access to its own source code or its hardware. The Singularity in and of itself is already a bit of a hard sell (which is why my preferred version just requires stronger-than-human level AI without needing to go all New Age on us).

 

So to put it laconically:

 

Human-level AI requires intelligent software. The Singularity requires infinitely improving hardware.

 

 

The computer that reaches human-level AI will almost certainly not be increasing its intelligence exponentially. It will simply have matched human-level intelligence, so its fundamental hardware won't have changed. 

Also, Kurzweil noted that computing power in 2045 will be superior to all minds on Earth, so that even if humans could somehow mesh all their minds together into one megamind (including all humans that have ever lived), we would still be inferior to the prowess of this super AI. Hence the Singularity— it's too much for us to comprehend.

 

In our popular imagining of what the Singularity will be, we seem to think that the moment a computer becomes as smart as a human, it will just magically start improving itself, like its internal circuitry just starts rewiring itself or even glowing with pure intellectual energy. It still needs to add resources to itself to do that— it can refine its computer chips at light speed, but it won't immediately be able to change its circuitry until someone can replace its internal hardware. And while it could command robots to do this, that still suggests that the Singularity won't be instantaneous. At least not until molecular nanotech becomes a thing.



#9
Jakob

    Fenny-Eyed Slubber-Yuck

  • Members
  • 5,236 posts
  • Location: In the Basket of Deplorables

There is one thing that has confused me about Kurzweil's singularity predictions. I take the view (as do most AI experts, as far as I am aware) that once we create AI with human-equivalent intelligence, it will be almost instantaneous that we get a superintelligence. If the AI has reached that point by recursively improving itself, with each iteration able to produce a better copy, then it stands to reason that an AI even slightly more intelligent than the world's best minds could produce an AI with even greater intelligence, and so on.

 

However, Kurzweil predicts human-level AI in 2029 and the singularity in 2045. Why this 16-year gap? I think he may be accurate with either of these predictions, but in my mind the gap between the two events would be very small.

That seems silly and cartoonish.


Click 'show' to see quotes from great luminaries.


#10
Zaphod

    Esteemed Member

  • Members
  • 611 posts
  • Location: UK

In our popular imagining of what the Singularity will be, we seem to think that the moment a computer becomes as smart as a human, it will just magically start improving itself, like its internal circuitry just starts rewiring itself or even glowing with pure intellectual energy. It still needs to add resources to itself to do that— it can refine its computer chips at light speed, but it won't immediately be able to change its circuitry until someone can replace its internal hardware. And while it could command robots to do this, that still suggests that the Singularity won't be instantaneous. At least not until molecular nanotech becomes a thing.

 

Perhaps not instantaneous, but 16 years is a long time, especially when you consider the improvements we have seen in computational power and AI from 2001 to now. I'm not imagining a ridiculous scenario like we see in Transcendence or Lucy, where superintelligence emerges overnight; I just don't see the time span being that long - maybe a few years.

 

My picture of the path to superintelligence is like the space race on steroids. There will be multiple competing companies and government-backed groups working to create the most powerful AI. Much of this will be open source, and before one group develops a true human-level AI there will be countless groups that have already created AI approaching human level. It probably won't even be clear when an AI has crossed the threshold to count as human-level. These groups have huge resources to back them up and will likely already have the architecture in place to accommodate the self-improvement of a human-level AI. It won't be the case of a single AI trying to improve itself, but hundreds or thousands of copies of this AI working together with AI researchers to produce a super AI.


  • Casey likes this

#11
Zaphod

    Esteemed Member

  • Members
  • 611 posts
  • Location: UK

One thing I would add is that human-level AI is not really much of a milestone except to ourselves. There is no reason to think that it would stop at our level, especially when you consider that a human-level AI would not really be that similar to a human mind and would have many advantages. 

 

[Image: Intelligence-600x472.jpg]

[Image: Intelligence2-1024x836.png]

 

https://waitbutwhy.c...volution-1.html


  • sasuke2490 and Nerd like this

#12
Alislaws

    Democratic Socialist Materialist

  • Members
  • 722 posts
  • Location: London

One thing I would add is that human-level AI is not really much of a milestone except to ourselves. There is no reason to think that it would stop at our level, especially when you consider that a human-level AI would not really be that similar to a human mind and would have many advantages. 

 

I think the reason "human level" is linked so strongly to the singularity is simply the idea that once human level is exceeded, we can then build an AI that is better at building AIs than any human who has ever lived. Then we turn it on and build what it designs, then we turn that on and build what it designs, etc., and pretty soon we have a superintelligence explosion. 

 

This is almost certainly a simplistic way to look at the singularity, and it's possible that we will get something close to the singularity (like runaway technological progress so fast it's impossible for one human to learn and understand every new advance in their field) without ever using an AI designed by an AI.


  • Jakob and Nerd like this

#13
Jakob

    Fenny-Eyed Slubber-Yuck

  • Members
  • 5,236 posts
  • Location: In the Basket of Deplorables

 


 

This is almost certainly a simplistic way to look at the singularity, and it's possible that we will get something close to the singularity (like runaway technological progress so fast it's impossible for one human to learn and understand every new advance in their field) without ever using an AI designed by an AI.

Hell, we've been there for a while. As I have pointed out a few times, most of the candidates for "last man to know everything" have been dead for 200 years.


  • Alislaws and Nerd like this


#14
techchic22

    Member

  • Members
  • 16 posts
  • Location: Dublin

In my view, "human level" has levels of its own, and I'm not sure which one you're talking about. A one- or two-year-old child can function and act as a human being, but not a complete one, because they cannot yet think, act, or remember properly; an AI could be at that level. Or "human level" could mean a complete human being: to feel, to sympathize, to love, to enjoy, to taste food. Or more: to think, to invent another AI. How does that sound? An AI that invents another AI.


'The statement below is true.
The statement above is false'





