
Who will give AGI its morals? Will AGI's morals be affected by current culture, or will a "logical/perfect" morality be imbued in AGI, leading to a "perfect/Utopian" Singularity?

Tags: culture, AI, AGI, morals, logic, philosophy, politics, scientists, popular vote, capitalists

5 replies to this topic

#1  Set and Meet Goals (Member, 442 posts)

Who will give AGI its morals? Will AGI's morals be affected by current culture, or will a "logical/perfect" morality be imbued in AGI, leading to a "perfect/Utopian" Singularity?

 

Whilst I am excited for AGI and the Singularity, and want them to happen as soon as possible, I am scared.

 

I really am scared that I won't be able to live in the Utopia promised by futurists, because AI's morals won't be aligned to give everyone their own Utopia (in VR worlds or something).

 

I feel selfish saying I want to live in a Utopia, but for me to be confident of being able to do so, everyone would need to be able to live in their own Utopia, and I think most people want that.

 

Do you think AI will have the correct morals or not? Whilst I am desperate to hear that it will, I obviously want people to give their truthful views rather than sugar-coated responses.

 

My own fear is that current human culture is potentially not ready for AGI, and that AGI won't be given "good enough" morals for everyone to be able to live forever in the Utopian sense (after a few decades of soft take-off, or 16 years as Kurzweil predicted).

 

I posted this in Culture and Politics of the Future because it really is about the actual values put into AGI rather than the technology itself.



#2  funkervogt (Member, 1,321 posts)

Whatever the case, I think AGIs will be able to describe their moral systems in vastly greater detail and sophistication than virtually all humans. Only people like Steven Pinker would be able to stump them. 

 

If you quizzed the average human about their morals and about how they lived their lives, you'd find a mess of blind spots, hypocrisies, and unexamined beliefs. 

 

I think it's weird to worry about future AGIs not having the right morals when most humans today don't have firm or sensical sets of morals, and our most powerful people frequently act immorally. 



#3  Set and Meet Goals (Member, 442 posts)

funkervogt said:

"Whatever the case, I think AGIs will be able to describe their moral systems in vastly greater detail and sophistication than virtually all humans. Only people like Steven Pinker would be able to stump them.

If you quizzed the average human about their morals and about how they lived their lives, you'd find a mess of blind spots, hypocrisies, and unexamined beliefs.

I think it's weird to worry about future AGIs not having the right morals when most humans today don't have firm or sensical sets of morals, and our most powerful people frequently act immorally."

This is why I worry: I fear that humans may not implement good enough morals into AGI.

 

If I don't misunderstand you, I think you are saying AI will be able to design perfect morals for its own operation to give humans a Utopia. I hope you're right; however, I fear that humans will need to give the AI some kind of moral framework for it to work out what morals are best in more detail.



#4  funkervogt (Member, 1,321 posts)

My point was oblique to the question you were asking. Instead of worrying about whether an AGI will have the "right" morals in the future and what problems that might cause, why aren't we worrying about whether most humans have the "right" morals today?

 

Furthermore, if we invited a panel of random people to discuss what morals should be programmed into AGI, the conversation would reveal disturbingly underdeveloped and self-contradictory moral frameworks among most of the participants. Whether it's right to kill animals and for what purposes is a quandary that pops out at me right away. 



#5  funkervogt (Member, 1,321 posts)

 

 

Set and Meet Goals said:

"If I don't misunderstand you, I think you are saying AI will be able to design perfect morals for its own operation to give humans a Utopia."

 

A human utopia is not possible. That said, AGI has the potential to vastly improve our lives in most ways.

 

 

 

Set and Meet Goals said:

"I hope you're right; however, I fear that humans will need to give the AI some kind of moral framework for it to work out what morals are best in more detail."

 

AGIs will gather vast troves of data on us and will come to know us, as individuals and as a species, better than we know ourselves. They will also have encyclopedic understandings of our moral codes and of the pertinent writings by our greatest thinkers. Given that, I think we'd be wise to ask AGIs what they think the best moral frameworks for humanity would be.



#6  starspawn0 (Member, 2,263 posts)

As to the questions in the OP:  AGIs will probably learn morals by imitating humans, and their values will be an "average" or a "collage" of human values.  That data might come from BCI recordings; it might come from "inverse reinforcement learning"; but humans are where it will come from.
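For what it's worth, here is a toy sketch of that "average/collage of human values" idea, purely illustrative: the people, the value "features", and the crude weight-inference step are all made up, and real inverse reinforcement learning is far more involved. Each person's observed choices are turned into a rough linear "reward" weight vector, and those vectors are then averaged.

    import numpy as np

    def infer_weights(choices):
        # choices: list of (chosen_features, rejected_features) pairs for one person.
        # Returns a crude unit "reward" weight vector pointing from rejected toward chosen.
        diffs = [np.asarray(chosen) - np.asarray(rejected) for chosen, rejected in choices]
        w = np.mean(diffs, axis=0)
        norm = np.linalg.norm(w)
        return w / norm if norm > 0 else w

    # Invented demonstrations from three people, over two value "features":
    # [honesty, personal_gain]
    people = [
        [([1.0, 0.2], [0.1, 0.9]), ([0.8, 0.0], [0.3, 0.7])],   # mostly values honesty
        [([0.2, 1.0], [0.9, 0.1])],                              # mostly values gain
        [([0.7, 0.6], [0.2, 0.2])],                              # mixed
    ]

    per_person = np.array([infer_weights(p) for p in people])
    collage = per_person.mean(axis=0)    # the "average"/"collage" of human values
    print("per-person value weights:\n", per_person.round(2))
    print("averaged value weights:   ", collage.round(2))

The point is only that the resulting value function is a statistical blend of the demonstrators, not a "perfect" morality derived from first principles.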

 

....

 

I have a more cynical take on how AGIs will be applied with regard to "morals":  they will be used to craft harder-to-defeat arguments from "the other side", whichever side that happens to be.

 

....

 

On the nature of human morals:  

 

Human morals don't have to abide by a "code", or be internally consistent.  In fact, the claim that morals MUST adhere to a system of internally consistent logical statements is itself a kind of moral belief.

 

Even the wrongness of hypocrisy is another moral claim. 

 

[Economists have a similar problem when assuming "rational actors".  They assume, for instance, that if a person values B more highly than A, and values C more highly than B, then they will value C more highly than A.  This is a type of "transitivity of values" relation.  But human values are not always transitive; hence, people are only "approximately rational".  (It turns out that humans are closer to being "boundedly rational", which means rational up to a given bound on the cognitive and computational resources used.)]
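To make the transitivity point concrete, here is a minimal Python sketch with invented preferences (not real survey data) that checks whether a set of pairwise value judgements is transitive:

    from itertools import permutations

    def is_transitive(prefers, items):
        # prefers is a set of (x, y) pairs meaning "x is valued more highly than y".
        # Transitivity: whenever (x, y) and (y, z) hold, (x, z) must hold too.
        for x, y, z in permutations(items, 3):
            if (x, y) in prefers and (y, z) in prefers and (x, z) not in prefers:
                return False
        return True

    items = ["A", "B", "C"]

    # The "rational actor" assumption: C > B > A, and therefore C > A.
    rational = {("B", "A"), ("C", "B"), ("C", "A")}

    # Cyclic (intransitive) preferences, which real people sometimes exhibit:
    cyclic = {("B", "A"), ("C", "B"), ("A", "C")}

    print(is_transitive(rational, items))   # True
    print(is_transitive(cyclic, items))     # False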

 

"What feels right is right," is a common moral justification that is ridiculed for being "unexamined", "too glib", and "uncouth"; but, actually, those are just value statements being hurled from those from another system that claims to be "more correct".

 

....

 

Democracy, incidentally, is another moral code.  In a democracy, every vote is given equal weight.  Some would rather weight certain votes more than others -- e.g. give voters a "voter competency" test, and then the higher the score, the greater the weight.  That's just the output of another value system, one compelled by the force of the statement, "It's better to have voters who actually know about the issues before voting!"  It could be, for example, that some voters think that always voting for their favorite party is the right course of action, and that study of the issues is irrelevant.  Who are we to decide whether that's a "correct" moral code?
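To make the two weighting schemes concrete, here is a toy Python comparison; the votes and test scores are invented purely for illustration:

    # Equal-weight voting vs. competency-weighted voting on a yes/no question.
    voters = [
        # (vote, competency_test_score)
        ("yes", 2),
        ("yes", 3),
        ("yes", 1),
        ("no",  9),
        ("no",  8),
    ]

    def tally(voters, weight_fn):
        totals = {}
        for vote, score in voters:
            totals[vote] = totals.get(vote, 0) + weight_fn(score)
        return totals

    print(tally(voters, lambda s: 1))   # equal weights:  {'yes': 3, 'no': 2} -> "yes" wins
    print(tally(voters, lambda s: s))   # score-weighted: {'yes': 6, 'no': 17} -> "no" wins

The two schemes can produce different outcomes from the same ballots, which is exactly why the choice of weighting is itself a value judgement.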

 

....

 

Incidentally, here is a nice debate between Sean Carroll and Sam Harris about "moral realism":

[embedded video: Sean Carroll vs. Sam Harris on moral realism]

 

 

Carroll argues that you can't derive an "ought" from an "is" (supporting Hume); and Harris squirms and twists and turns to try to argue against this, but Carroll keeps snapping him back to reality.  Harris can't seem to "square the circle" in this debate with Carroll.  






