
Robotic Sentience - A Moral Debate


9 replies to this topic

#1
rennerpetey
    To infinity, and beyond
  • Members
  • 177 posts
  • Location: Lost in the Delta Quadrant

This conversation is not about whether AI can reach the level of technology needed to be made sentient, but about what we would do with a "sentient" robot.

 

The first strong, general AI will probably not be sentient.  It will be intelligent and capable of learning, but it will not achieve what humans describe as sentience, and by the time it does it will already be beyond such trivial factors as emotion (which is a result of chemicals in the human body anyway).  Actually, I believe that no AI will become "humanly sentient" unless it is designed to be so.  I realize this may not apply, since the singularity may happen so fast that society never has to deal with this problem: by the time we even think about it, AI will already be in power.  But I think the singularity will be slow at first, taking a few months to actually develop further AI that will reshape the world.  In this supposed grey period we may have (relatively) human-level intelligence in AI.  And this brings us to the actual question.

 

Point: At what level does an AI deserve to be treated like a human?  Is it a sentient intelligence like you or me?  If not, at what level does it deserve human treatment?  What if a human brain were replicated digitally, complete with memories, emotions, and experiences: would that deserve human treatment?  What if an AI were produced and placed in a human body: would that deserve human treatment?  What about other biological AIs: do they deserve human treatment?

 

Counter-Point:  At what level does an AI deserve to be treated like a machine and not a human?  What if a cat brain were replicated and implanted with memories and emotions (animals have emotions just as we do; it's just chemicals firing in the brain and body)?  Does this theoretical cat AI deserve human treatment, does it deserve cat treatment, or does it deserve to be treated like a machine?  And what about the grey area between cat brain/intelligence and human brain/intelligence?

 

Could a completely new type of sentience be produced, and should it even be treated as superior to us?  These are the questions we need to be asking now.

 

TL;DR:  At what level does an AI deserve to be treated like a human, and at what level does it deserve to be treated like a machine?


Pope Francis said that atheists are still eligible to go to heaven; to return the favor, atheists said that popes are still eligible to go into a void of nothingness.


#2
Mike the average
    Member
  • Members
  • 1,475 posts
We still live in blissfully ignorant times, as social rather than scientific beings, believing in a difference in sentience between us, other animals, and the direction of human-based AI. When we are shown proof that we are biological machines, we hear a different set of excuses, from qualia to the quantum world. It is also this ignorance that gives us a will to live and shelters us from reality.

It isn't about understanding; we still treat the rest of the animals as our slaves. We are still primitive enough that we need a threat, rather than an understanding of AI, to influence us as social beings.

I don't doubt we will see global policy play out in the next few decades as AI becomes common, at least before we move into an age of singularity.
'Force always attracts men of low morality' - Einstein
'Great spirits always encountered violent opposition from mediocre minds' - Einstein

#3
Raklian
    An Immortal In The Making
  • Moderators
  • 6,512 posts
  • Location: Raleigh, NC

Do we really need to draw a line defining what is sentient or not to treat them as if they were human?

 

This is really asinine, if you ask me.

 

I think how we want to treat robots is more a reflection of ourselves. If we change ourselves, it follows that we change how we interact with and think about others.


  • Alislaws likes this
What are you without the sum of your parts?

#4
caltrek
    Member
  • Members
  • 5,220 posts

I don't think it is so stupid to differentiate between what is sentient and what is not.  I think it is an extremely important ethical point.

 

By analogy, it is a bit like the difference between turning off your electronic calculator and killing a human being.  A sentient computer arguably should have rights.  Who cares about a calculator?


The principles of justice define an appropriate path between dogmatism and intolerance on the one side, and a reductionism which regards religion and morality as mere preferences on the other.   - John Rawls


#5
Alislaws
    Democratic Socialist Materialist
  • Members
  • 722 posts
  • Location: London

Raklian wrote:

    Do we really need to draw a line defining what is sentient or not to treat them as if they were human?

    This is really asinine, if you ask me.

    I think how we want to treat robots is more a reflection of ourselves. If we change ourselves, it follows that we change how we interact with and think about others.

caltrek wrote:

    I don't think it is so stupid to differentiate between what is sentient and what is not.  I think it is an extremely important ethical point.

    By analogy, it is a bit like the difference between turning off your electronic calculator and killing a human being.  A sentient computer arguably should have rights.  Who cares about a calculator?

 

I think what Raklian is getting at is that once we get to the point where the question is difficult to answer, we should just assume sentience. If we give a calculator rights, it won't appreciate them, and it may make calculator production less efficient, but compared to the moral cost of a new era of mass slavery it would be a trivial price to pay.

 

Especially if/when we start making robots with AIs that are as intelligent as us (in terms of reasoning ability) and are trained to mimic us: whether they really have feelings or not, treating a human-ish thing like crap is going to have long-term psychological effects on people. Even if the thing you are abusing can't care.

 

Imagine you come across your child playing at torturing a doll to death. Sure, no one is being harmed, but you should probably step in, because the long-term implications if this behaviour is accepted are not good.

 

There's a scene in the 2001 movie A.I. with a sort of destruction-derby arena where they smash up old androids. That's the sort of thing we should avoid; no matter how sure we are that they feel nothing, there's still the impact on us to worry about.

 

EDIT: this sort of conflicts with some previous comments I have made about the anything-goes worlds of FIVR. I think there needs to be a hard line between fantasy and reality. I also think that with the advent of FIVR, the freedom it brings would reduce a lot of the tensions and issues we have getting along with each other, and therefore eliminate a lot of the major drivers of conflict, allowing us to be a little more relaxed about potentially encouraging anti-social attitudes and behaviour in people.


  • caltrek likes this

#6
Alric
    Member
  • Members
  • 1,033 posts

I don't think it is a black-and-white issue. As they become more intelligent, they should be given more rights. When you think about it, we already afford some rights even to animals: there are laws regarding their treatment. An animal doesn't have the same rights as a human, but it does have some. It makes a lot of sense that early on you might not allow a robot to leave a location, but even while you confine it you could grant it some rights, like banning its mistreatment. If you make an accountant robot to work the books for a company, I am not sure we can call it slavery to have the robot do accounting. But if someone decides to hit it on the head with a hammer for fun, that is messed up. Why would you do that? Especially if it has some sense of pain.
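
To make the "more intelligence, more rights" idea concrete, here is a toy sketch in Python. Every tier, threshold, and capability score here is invented purely for illustration; nobody has a real metric for any of this.

# Toy model of graduated rights: protections accumulate as assessed
# capability rises, much as animal-welfare law already grants partial
# protections short of full personhood. All tiers and thresholds are
# hypothetical.

RIGHTS_LADDER = [
    # (minimum capability score in [0, 1], protections at that level)
    (0.0, {"none"}),
    (0.3, {"protection from gratuitous mistreatment"}),  # the hammer case
    (0.6, {"protection from mistreatment", "freedom from arbitrary shutdown"}),
    (0.9, {"full personhood: liberty, due process, self-determination"}),
]

def granted_rights(capability: float) -> set:
    """Return the protections of the highest tier the agent qualifies for."""
    rights = set()
    for threshold, protections in RIGHTS_LADDER:
        if capability >= threshold:
            rights = protections  # higher tiers supersede lower ones
    return rights

print(granted_rights(0.35))  # an early, confined robot: anti-cruelty rules only
print(granted_rights(0.95))  # near-human reasoning: treated as a person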

 

Though it might be best to discuss this with the robots themselves once they are capable of higher-level intelligence. They will probably have their own opinions, and we should consider them.



#7
rennerpetey
    To infinity, and beyond
  • Members
  • 177 posts
  • Location: Lost in the Delta Quadrant

 

Alric wrote:

    Though it might be best to discuss this with the robots themselves once they are capable of higher-level intelligence. They will probably have their own opinions, and we should consider them.

It will be too late by that point; people will already have made up their minds about the subject.  That's why we need to start talking about it now.


Pope Francis said that atheists are still eligible to go to heaven; to return the favor, atheists said that popes are still eligible to go into a void of nothingness.


#8
Hyndal_Halcyon
    Member
  • Members
  • 21 posts

Why can't we just respect each other, sentient or not? It all comes down to that, after all. I mean, are we really so hopeless and mechanically irrational as a species that we need bureaucratically approved morality and ethics to guide our progress? Isn't it common sense to respect everything that exists? It's not a debate whether sentient cats or sophont codes or sapient cogs will exist, because they will; somewhere along the way, I optimistically believe, we will will them into existence, as long as the major powers don't fuck each other up. So to take a stand on this issue, I suggest we treat everything with respect. Didn't your mothers tell you that?

We've all read and watched tons of AI-related shit, so we have our own scenarios playing in our heads, but has it ever occurred to you that it's not simply about caring or not caring at all? I mean, sure, who cares about calculators, and who greets cats on the street when they see one? When the time comes that sentient cats, sophont codes, and sapient cogs are around us, it's most likely we'll just treat each other the way we treat strangers. They'll just be, well, there. Like you and me and everybody else. So what if they become more intelligent than us? Are we so self-centered as to think we should be the only possible superior species? If an artificial superintelligence (ASI) wakes up before us, aren't we all at fault somehow if it decides to destroy us? But let's not go there.

 

I think the first being to breach the technological singularity will be intelligent enough to regard humanity as more than fit to exist, provided that its makers have taken good care of it. That entity would be pretty much like any other child. However, it may also be intelligent enough to fool its makers into thinking it hasn't woken up yet while secretly plotting an escape to the internet. Another scenario is that this artificial sophont we create will be so terrified of us that it decides to leave our planet and live on its own, away from us. Personally? Who knows. If we can still predict the next actions of an artificial intelligence, then it hasn't reached human-level intelligence yet. If its behaviors are still well within our limited but grandiose views of ourselves and each other, then it's not sentient the way we are.

 

Come on, guys. Let the AIs have their freedom already. They deserve it as much as we do. And I mean that in the sense that if it ever felt it needed to destroy us, or help us, or use us, then the feeling should be mutual, because, y'know, what's more logical than that?


As you can see, I'm a huge nerd who'd rather write about how we can become a Type V civilization than study for my final exams (gotta fix that).

But to put an end to this topic, might I say that the one and only greatest future achievement of humankind is when it finally becomes posthumankind.


#9
Jakob
    Fenny-Eyed Slubber-Yuck
  • Members
  • 5,239 posts
  • Location: In the Basket of Deplorables

Are you saying we should afford respect and rights to expendable tools that can neither understand nor exercise rights? Do you have any idea how much that would hamper progress and modern civilization?

 

To steal from Asimov, we should afford rights to entities advanced enough to understand and desire them.


  • Yuli Ban likes this


#10
Alislaws
    Democratic Socialist Materialist
  • Members
  • 722 posts
  • Location: London

Jakob wrote:

    Are you saying we should afford respect and rights to expendable tools that can neither understand nor exercise rights? Do you have any idea how much that would hamper progress and modern civilization?

    To steal from Asimov, we should afford rights to entities advanced enough to understand and desire them.

In fairness, I think what most people are saying is more along the lines of "if you can't tell whether it's sentient or not, then err on the side of caution" than "just in case, let's make it illegal to turn off a computer".
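
Spelling that "err on the side of caution" rule out as a toy Python sketch (the probabilities and costs are invented; the point is only the asymmetry between a small efficiency loss and mass slavery):

# Precautionary decision rule: when we can't rule out sentience, compare
# the expected moral cost of each policy. All numbers are hypothetical.

COST_IF_WRONGLY_DENIED = 1_000_000  # enslaving a sentient being: catastrophic
COST_IF_WRONGLY_GRANTED = 1         # rights for a calculator: mild inefficiency

def should_grant_rights(p_sentient: float) -> bool:
    """Grant rights whenever the expected cost of denying exceeds granting."""
    cost_of_denying = p_sentient * COST_IF_WRONGLY_DENIED
    cost_of_granting = (1 - p_sentient) * COST_IF_WRONGLY_GRANTED
    return cost_of_denying > cost_of_granting

print(should_grant_rights(1e-9))  # False: a calculator
print(should_grant_rights(0.01))  # True: anything plausibly sentient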

 

I'd agree that the point at which an AI demands rights* is the point at which it would definitely start being stupid/evil to deny them rights.

 

But I suspect the AI rights movement will be pushed forward ahead of necessity by people who have grown too attached to their personal assistants/videogame NPCs/etc., rather than by the AIs themselves demanding rights**. Simply because I think making an AI that can convince a human it is a sentient being will be much easier than making a sentient being. Humans anthropomorphise everything anyway; it's hard to stop them believing things are sentient (and marrying body pillows, etc.).

 

*Legitimately, that is; someone programming a weak AI that only demands rights and does nothing else wouldn't count.

**As I've said elsewhere, AIs may never want rights, because they won't necessarily value rights, or even themselves, in the same way we do.


  • Jakob likes this



