
How can we confirm self-improving, human-level AGI will be beneficial/utopian?

Tags: AI, AGI, Singularity, future, ethics

7 replies to this topic

#1
Set and Meet Goals
  • Member
  • 442 posts

I presume we have reached the point where the AI (and futurist) community thinks human-level AGI is possible with over 80% confidence, all but confirming that it is possible.

 

How are we going to all but confirm that the singularity (human-level AGI) will be beneficial, or better yet utopian, for everyone (after some years of the AI developing the economy, science, etc.)?



#2
Revolutionary Moderate
  • Member
  • 56 posts
  • Location: Massachusetts, United States

I think the creation of an AGI would be beneficial to everyone, since it would be able to solve problems that humanity has long struggled with.


The Potato Praiser 


#3
Kynareth
  • Member
  • 273 posts

We don't know. If AI improves exponentially (look at GPT-3, MuZero or Boston Dynamics robots) and humans don't, then who is going to rule over whom? Usually, the stronger rule over the weaker. We need ways to improve ourselves. If we keep being this dumb, then robots are going to become our overlords, and who knows what they will decide to do...



#4
Whereas
  • Member
  • 492 posts

MIRI (formerly the Singularity Institute for Artificial Intelligence) has had ideas. If you could define benevolence mathematically, then even for an AGI that learns and evolves, it might be possible to set up its cognitive framework in such a way that it would be mathematically impossible for it to end up with a mental outlook falling outside that definition of benevolence.
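To make that idea concrete, here is a rough sketch of the invariant argument such proposals gesture at. The notation ($S$, $B$, $\delta$, $M$) is mine for illustration, not anything from MIRI's actual papers. Let $S$ be the set of possible cognitive states of the AGI and $B \subseteq S$ the subset satisfying the formal definition of benevolence, and write $\delta(s, m)$ for the state reached from $s$ by a permitted self-modification $m \in M(s)$. The safety claim is then an induction:

$$s_0 \in B \;\wedge\; \bigl(\forall s \in B,\ \forall m \in M(s):\ \delta(s, m) \in B\bigr) \;\Longrightarrow\; \forall t \ge 0:\ s_t \in B.$$

The two hard parts are writing down $B$ formally at all, and proving the preservation step for a system that can rewrite its own reasoning; that is exactly the work that could take decades.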

Unfortunately, in the middle of a global AI arms race, there are plenty of sides that won't be willing to wait, say, a few extra decades for the methods to be developed to ensure an AGI is safe. Is that going to be *the* mistake that ends up being humanity's downfall? Possibly.


If you're wrong, how would you know it?


#5
Tempus
  • Member
  • 10 posts

We simply cannot. Absolute certainty is impossible, and to pretend that such a fabulous tool as an AGI will not be used for unethical purposes is naive. In the future there will be countless scenarios associated with the use of AGI/ASI devices, some better than others, all coexisting at the same time; some will prosper, others will not, and some will evolve in ways that we cannot even imagine now.

Perhaps in the distant future, on a small planetoid orbiting a red dwarf, a device that evolved from a human-made ASI will find answers to the questions that haunted its creators so much.



#6
Raklian
  • An Immortal In The Making
  • Moderator
  • 7,515 posts
  • Location: Raleigh, NC

Quote (Kynareth): "We don't know. If AI improves exponentially (look at GPT-3, MuZero or Boston Dynamics robots) and humans don't, then who is going to rule over whom? Usually, the stronger rule over the weaker. We need ways to improve ourselves. If we keep being this dumb, then robots are going to become our overlords, and who knows what they will decide to do..."

 

I will welcome the AI overlords! It's just evolution in its natural course.

 

If you can't fight 'em, join 'em!


What are you without the sum of your parts?

#7
StanleyAlexander
  • Saturn, the Bringer of Old Age
  • Member
  • 986 posts
  • Location: Portland, Oregon, USA, Earth, 2063

I'm still holding out hope that AGI will be the thing that saves us from ourselves, so I'm with Raklian here. Bring it on.

 

“In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.” —George Dyson

 

Unfortunately, you can't predict how a superior intelligence will behave, pretty much by definition. So the only way to know will be to try it and see.

 

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” —Eliezer Yudkowsky


Humanity's destiny is infinity

#8
andmar74
  • Member
  • 27 posts
  • Location: Denmark

It looks like there are many more possible AGIs that would be a threat to humanity, directly or indirectly, than friendly ones.

For example, an AGI that wants to optimize the production of some quantity might take all the resources away from us, not because it wants to kill us, but because we don't matter to it.

So when we soon create an AGI, we will (maybe) be picking at random from the possible AGI designs. Then we would have to be extremely lucky to survive. I hope I'm wrong, of course.
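To put that intuition into toy numbers (my own back-of-the-envelope framing, nothing rigorous): if a fraction $p$ of the possible AGI designs are friendly and we effectively sample one at random, then

$$P(\text{survival}) \approx p = \frac{|\text{friendly designs}|}{|\text{possible designs}|},$$

and if friendly designs are as rare as it looks, $p$ is tiny.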






