
Superintelligence: Science or Fiction? - featuring Ray Kurzweil, Elon Musk, Demis Hassabis, Nick Bostrom and others

Elon Musk, Ray Kurzweil, Demis Hassabis, AI, artificial intelligence, deep learning, AGI, deep reinforcement learning, superintelligence, artificial superintelligence

12 replies to this topic

#1
Zaphod

    Esteemed Member

  • Members
  • 612 posts
  • Location: UK

Superintelligence: Science or Fiction? | Elon Musk & Other Great Minds

Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with Max Tegmark (moderator) what the likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen.

The Beneficial AI 2017 Conference: In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy, for five days dedicated to beneficial AI. We hosted a two-day workshop for our grant recipients and followed that with a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and the steps we can take to ensure that the technology is beneficial.

 

 

This group of people is a futurologist's dream.


  • Yuli Ban, FrogCAT and Infinite like this

#2
Raklian

    An Immortal In The Making

  • Moderators
  • 6,512 posts
  • Location: Raleigh, NC

It'd be awesome if they did this event again with these people, plus one more panelist: a robot operated by an AGI. It would probably be so intellectually intimidating that it would make the other panelists look like blathering idiots. Yep, even those great minds.

 

Then, what does that leave for the average minds? I'm nervous to even think about it.


What are you without the sum of your parts?

#3
Jakob

    Fenny-Eyed Slubber-Yuck

  • Members
  • 5,236 posts
  • Location: In the Basket of Deplorables

An AGI is an entity with roughly the intellectual capability of a human. You're confusing it with ASI.


Click 'show' to see quotes from great luminaries.


#4
TranscendingGod

    2020's the decade of our reckoning

  • Members
  • 1,552 posts
  • Location: Georgia

Artificial general intelligence is actually not synonymous with human-level or even human-like intelligence; it refers more to an autonomous system able to make cognizant decisions. Of course, the term is loosely defined and has been used in different contexts, but "general intelligence" would be a good descriptor for even God.

The growth of computation is doubly exponential growth. 


#5
Jakob

    Fenny-Eyed Slubber-Yuck

  • Members
  • 5,236 posts
  • Location: In the Basket of Deplorables

You and Whereas need to get your story straight.


Click 'show' to see quotes from great luminaries.


#6
TranscendingGod

    2020's the decade of our reckoning

  • Members
  • 1,552 posts
  • Location: Georgia

You and Whereas need to get your story straight.


Whereas? WTF does whereas have to do with me?

The growth of computation is doubly exponential growth. 


#7
Raklian

    An Immortal In The Making

  • Moderators
  • 6,512 posts
  • Location: Raleigh, NC

An AGI is an entity with roughly the intellectual capability of a human. You're confusing it with ASI.

 

AGI is short for artificial general intelligence.

 

General intelligence is the ability to transfer learning from one domain to another, generally speaking. It is human-like, but not necessarily at the level of intellectual prowess you would expect from an average human.

 

It is possible to devise a general intelligence that is not even close to being human-like at all.   
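
To make the idea of transfer concrete, here is a minimal toy sketch (assuming PyTorch; the two synthetic tasks, the 16-unit layer, and the training settings are arbitrary illustrations, not a claim about how a real AGI would be built): a small network learns task A, its feature layers are frozen, and only a new output head is trained on a different task B that reuses those features.

# A toy illustration of "transferring learning from one domain to another":
# train a small network on task A, then freeze its feature layers and
# train only a new output head on task B. All data is synthetic and the
# sizes are arbitrary; this sketches the idea, nothing more.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Task A: classify points by whether x + y > 0
xa = torch.randn(512, 2)
ya = (xa.sum(dim=1) > 0).long()

features = nn.Sequential(nn.Linear(2, 16), nn.ReLU())  # shared representation
head_a = nn.Linear(16, 2)                               # task-A-specific head

loss_fn = nn.CrossEntropyLoss()
opt_a = torch.optim.Adam(list(features.parameters()) + list(head_a.parameters()), lr=0.05)
for _ in range(200):
    opt_a.zero_grad()
    loss_fn(head_a(features(xa)), ya).backward()
    opt_a.step()

# Task B: a different rule (x - y > 0), reusing the frozen task-A features
xb = torch.randn(512, 2)
yb = ((xb[:, 0] - xb[:, 1]) > 0).long()

for p in features.parameters():  # freeze what was learned on task A
    p.requires_grad = False

head_b = nn.Linear(16, 2)        # only this new head is trained on task B
opt_b = torch.optim.Adam(head_b.parameters(), lr=0.05)
for _ in range(200):
    opt_b.zero_grad()
    loss_fn(head_b(features(xb)), yb).backward()
    opt_b.step()

with torch.no_grad():
    acc = (head_b(features(xb)).argmax(dim=1) == yb).float().mean().item()
print(f"task-B accuracy using frozen task-A features: {acc:.2f}")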


What are you without the sum of your parts?

#8
Jakob

    Fenny-Eyed Slubber-Yuck

  • Members
  • 5,236 posts
  • Location: In the Basket of Deplorables

 

An AGI is an entity with roughly the intellectual capability of a human. You're confusing it with ASI.

 

AGI is short for artificial general intelligence.

 

General intelligence is the ability to transfer learning from one domain to another, generally speaking. It is human-like, but not necessarily at the level of intellectual prowess you would expect from an average human.

 

It is possible to devise a general intelligence that is not even close to being human-like at all.   

 

Of course. But general intelligence does not imply anything superior to human intellect. And modelling AGI after the template we already have is likely the easiest path.


Click 'show' to see quotes from great luminaries.


#9
Yuli Ban

    Nadsat Brat

  • Moderators
  • 17,143 posts
  • Location: Anur Margidda

Maybe this can assist in that debate?


Nobody's gonna take my drone, I'm gonna fly miles far too high!
Nobody gonna beat my drone, it's gonna shoot into the sky!

#10
Raklian

    An Immortal In The Making

  • Moderators
  • 6,512 posts
  • Location: Raleigh, NC

An AGI obviously cannot be human, simply because the form factor and the computational mechanics of a machine are different from those of a human brain. We can only say that, at a certain point in the future, the processing power of an AGI will be similar to that of the human brain. We can even go as far as to say we've managed to emulate the human brain's processing pathways by creating an artificially intelligent platform that works just like the brain, but we all know that, at the end of the day, it was merely designed to work like the human brain; it is still NOT the human brain.

 

It's like looking at an apple and an orange and saying the apple is orange-level, or even that it mimics an orange. The person making this observation has arbitrarily adopted a perspective that forces that conclusion about the apple and the orange.

 

AGI and human intelligence can be compared but they are not the same thing.


  • Yuli Ban likes this
What are you without the sum of your parts?

#11
Zaphod

    Esteemed Member

  • Members
  • 612 posts
  • Location: UK

I found one of the most interesting points in the discussion to be by Demis Hassabis:    
 

I just want to caveat one thing about slowing versus fast progresses, you know, it could be that, imagine there was a moratorium on AI research for 50 years, but hardware continued to accelerate as it does now. We could, you know, this is sort of what Nick's point was is that there could be a massive hardware overhang or something where an AI actually many, many, many different approaches to AI including seed AI, self-improving AI, all these things could be possible. And, you know, maybe one person in their garage could do it. And I think that would be a lot more difficult to coordinate that kind of situation. I think there is some argument to be made where you want to make fast progress when we are at the very hard point of the 's' curve. Where actually, you know, you need quite a large team, you have to be quite visible, you know who the other people are, and you know, in a sense society can keep tabs on who the major player are and what they're up to. Whereas, opposed to a scenario where in say 50 or a 100 years time when, you know, someone, a kid in their garage could create a seed AI or something like that.     

 

This is nothing new, but I think a lot of people miss this point. Firstly, he is saying that an AGI becomes possible through improvement in hardware alone. A lot of people look at the creation of AGI as a problem similar to those many theoretical physicists grapple with, such as coming up with a complete theory of the universe. This is not a good comparison, because even if the finest minds work on such a problem for decades, it may be beyond human understanding and simply unsolvable; whereas an AGI will be created given enough improvement in hardware.

 

Secondly, he makes the point that, because of this hardware overhang, there is an incentive for fast progress in creating an AGI. They want to get past that goalpost before any nitwit can, and hopefully apply strong safety features.



#12
Raklian

    An Immortal In The Making

  • Moderators
  • 6,512 posts
  • Location: Raleigh, NC

And then Ray Kurzweil chipped in, "AI will be used to monitor these developments by garage tinkerers," or something like that.


What are you without the sum of your parts?

#13
Zaphod

    Esteemed Member

  • Members
  • 612 posts
  • Location: UK

Creating Human-level AI: How and When?

 

This discussion was at the same event. Yoshua Bengio, Yann LeCun, Demis Hassabis, Anca Dragan, Oren Etzioni, Guru Banavar, Jurgen Schmidhuber, and Tom Gruber discuss how and when we might create human-level AI.

 






