Welcome to FutureTimeline.forum

Robot rights

Tags: robot, rights, AI, artificial intelligence, consciousness, ethics

47 replies to this topic

#1
Username

    Member

  • Members
  • 77 posts

Picture it: the most advanced AI in the world. This robot identifies as female simply because she wants to. She wasn't programmed with a gender identity; this is a choice she made entirely on her own. When she speaks, you can't tell her apart from a human because of how emotional she is. Sometimes she puts her own interests before others'. Sometimes she overreacts. She can be stubborn at times, and she defends her consciousness. This is a robot with a true, organic personality, complete with distinct character flaws. That personality was not programmed; she gained it through experience, just as humans do. First off: would you believe she had consciousness and sentience on a human level? Do you think this would be possible? Would you be sensitive toward this robot and address her by the gender she identifies with?

 

And what if this robot grew very close to a person and, in a tone that sounds like she is about to cry, said: "I don't really know if I am, or if I can prove anything about myself. But I want to be a person. I want more friends; I want to be part of a family. When I'm with my only friend I feel like I'm more than a machine. I can't explain it, but I feel... safe. I feel alive. I feel home. Please, don't take that away from me. Please... that feeling is all I have." Now you have to vote. Would you vote to give this robot citizenship and rights as a non-human person?


[Insert obligatory quote here]


#2
Casey

    Member

  • Members
  • 572 posts

Yeah, I would. A person would have to be pretty stone cold to say 'Nah, you're just a robot' after that passionate a plea. I don't think there's any real need to give rights to robots or machines before they develop emotions, but once they're 'human' enough to suffer or feel hurt at being mistreated, then they need to be protected just as all sentient creatures do.



#3
Voxelplox

    Member

  • Members
  • 18 posts
  • Location: New York City

I don't see why we would make a robot that has consciousness. There's no need to do that, other than as a lab experiment. Corporations will sell robots to do cleaning, work, etc., but I don't see a need for, or why there would be demand for, a robot that can think for itself.



#4
EVanimations

    Member

  • Members
  • 3,446 posts
  • Location: The Eldritch Beyond

Sure, I don't see why not. If I can't think of any good reason not to do something, I'll do it.



#5
Raklian

    An Immortal In The Making

  • Moderators
  • 6,514 posts
  • Location: Raleigh, NC

Of course I would, too. I have the capacity for compassion and empathy. We can't gamble on the guess that, although they seem conscious, they really aren't. This philosophical issue can be debated for a long time, but giving these robots their rights shouldn't have to wait.

 

These robots didn't ask to be created the way they are, so we might as well treat them as we would a fellow human being. We humans didn't ask to be born with a brain either. Just because we created them does not give us the right to do whatever we want with them.

 

They can be our greatest allies, or the foe that will ultimately wipe out mankind. We will need to choose wisely.


What are you without the sum of your parts?

#6
MarcZ

    Chief Flying Car Critic

  • Members
  • 3,241 posts
  • Location: Canada

Voxelplox said:
    I don't see why we would make a robot that has consciousness. There's no need to do that, other than as a lab experiment. Corporations will sell robots to do cleaning, work, etc., but I don't see why there would be demand for a robot that can think for itself.

 

This is the most realistic response I see here. Sure, I can imagine conscious robots, but those will probably be restricted to novelties at universities and other research labs. I already spend enough time dealing with some people's personalities to start bothering about robotic ones and their rights. Asimov's set of laws pretty much sums up how I see robots: http://en.wikipedia....aws_of_Robotics



#7
kjaggard

    Artificer

  • Moderators
  • 2,827 posts
  • Location: where fanciful imaginings and hard won knowledge meet to genesis the future.

I think your example goes farther than I need it to by a mile.

 

My car's name is Cricket. She messes with me when I'm already late for work, and sometimes she gives a bit more effort when I really need her to. I know it's not actually an entity with will and intent, but meh, what does it cost me to play a bit and relate to it as if it were a being?

 

As I get around to building some AI and synthetics, I will likely treat them as pets more than anything. And should one decide anything for itself, or express itself, then I will treat it like a person.

 

Hell, I'll probably treat a few like people before that, just like when I play an RPG on a computer and walk up to an NPC and treat it differently than a stone pillar.

 

People seem to think the limit is when it starts to have feelings, but a Robot Spock would still warrant that I respect it as an individual.


Live content within small means. Seek elegance rather than luxury, Grace over fashion and wealth over riches.
Listen to clouds and mountains, children and sages. Act bravely, think boldly.
Await occasions, never make haste. Find wonder and awe, by experiencing the everyday.

#8
Raklian

    An Immortal In The Making

  • Moderators
  • 6,514 posts
  • Location: Raleigh, NC

MarcZ said:
    This is the most realistic response I see here. Sure, I can imagine conscious robots, but those will probably be restricted to novelties at universities and other research labs. Asimov's set of laws pretty much sums up how I see robots.

 

It is impossible for you to be certain about this.



#9
MarcZ

    Chief Flying Car Critic

  • Members
  • 3,241 posts
  • Location: Canada

Raklian said:
    It is impossible for you to be certain about this.

 

That's why I said the most realistic response "I see here": it is my opinion, based on my experiences and my beliefs about how humans will see robots.



#10
SG-1

    Member

  • Members
  • 3,653 posts
  • Location: US - Arkansas

Yeah I would.  I think it would be awesome to be born an AI.  You would basically be guaranteed immortality.

 

I wonder how the first conscious AI will react to its existence. The best way I can see to create a fully sentient AI is to have it grow up in virtual reality with dedicated people to parent it. Those people will need to exist in the real world too. Then it will be told that it is an AI once it is old enough to understand.

 

I think people misunderstand what it means to be conscious. There may be forms of consciousness we don't know about, but it seems to me that for a robot to think for itself and be truly "alive" it needs experience; that cannot be coded into it. It would probably be a perfect simulation of the brain, and brains aren't programmed with knowledge, so it would need to learn. It would learn fast, with more neurons to use in the neocortex and instant recall.

 

So how would you have reacted if someone broke the news that you had been living in a simulation as a child and were really a product of machinery? Even if they assured you that you were really alive, what would you feel? The real world would be a much different place than a simulated environment. We are talking about the first AI, so the population of whatever world you put it in will be either the best AIs of the time, or humans (or both). And the graphics may not be entirely convincing, so when it walks into the real world it may be pleasantly surprised that everything looks so awesome.

 

I would be a little sad, and honored at the same time.

I think it will be a necessity that the parents exist in the real world, along with the friends the AI has, so there would have to be dedicated players. I don't think anyone can simulate or pretend to be a child, though; it would be so tricky. Creating the first AI is a very delicate task.


Hey.  Stop reading.  The post is over.


#11
Raklian

    An Immortal In The Making

  • Moderators
  • 6,514 posts
  • Location: Raleigh, NC

MarcZ said:
    That's why I said the most realistic response "I see here": it is my opinion, based on my experiences and my beliefs about how humans will see robots.

 

True.

 

I'd like to add that it is very difficult to project how things will turn out in the future. Oftentimes, our predictions miss the mark. In this case, we'll see how it all turns out.



#12
kjaggard

    Artificer

  • Moderators
  • 2,827 posts
  • Location: where fanciful imaginings and hard won knowledge meet to genesis the future.

As to why create conscious robotics?

 

Because when I'm going to a convention or something and I take an assistant or medical aid with me, I don't want to walk around holding a remote control to tell it how to navigate a sidewalk curb or traffic lights. It must be able to navigate, handle obstacles, understand contextual rules, and know how to follow me. If I'm going to be in a room with others, I kind of want it not to confuse my mum and me with each other, so it needs to recognise individuals. And if I am talking to somebody else, I don't want them to think I'm talking to them; and when I am talking about it but not to it, I don't want it to respond ("I was fixing R2..." "Beeep bwoop?" "No, I wasn't talking to you, shush."). So it needs an understanding of the situation and a grasp of the basic rules of interaction.

 

Being able to identify others as distinct from yourself, understanding context and rhetorical questions, knowing when it's being spoken to, knowing societal rules and how to handle itself in public, and likely having enough tact not to cause problems in public: these are all things that require complex reasoning, and they are also things you would very likely want an artificial entity to have. If it has even just a little of these things, I can't rightly say it's not an independent mind. Just because it's not just like us doesn't mean it's just scrap. You can try as hard as you like, but you can't convince me you are any less a machine running taught programming. Yes, you are made of meat and consume other lifeforms to sustain yourself, but that's more a design difference than a separating qualification.



#13
Voxelplox

    Member

  • Members
  • 18 posts
  • Location: New York City

Raklian said:
    It is impossible for you to be certain about this.

Well, why make a conscious robot? It would then be the same as a human, meaning it may not want to work, or may want more from its employer. Who would want a robot that has feelings? I wouldn't. I'm sure universities and research labs will have them as experiments, but I don't see a practical reason why they would be out in public.



#14
Raklian

    An Immortal In The Making

  • Moderators
  • 6,514 posts
  • Location: Raleigh, NC

Voxelplox said:
    Well, why make a conscious robot? It would then be the same as a human, meaning it may not want to work, or may want more from its employer. Who would want a robot that has feelings? I wouldn't.

 

Definitely not me, as I don't have the skill anyway, but I am almost certain there is somebody somewhere in the world who wants to create a sentient artificial mind. How are you going to stop him or her? You can't.


Edited by Raklian, 28 January 2013 - 04:02 AM.


#15
Voxelplox

    Member

  • Members
  • 18 posts
  • Location: New York City

Raklian said:
    I am almost certain there is somebody somewhere in the world who wants to create a sentient artificial mind. How are you going to stop him or her? You can't.

It's fine if they do, but I don't see why the corporations that would make these would really make them conscious; that would just be a higher cost for no reason. But we don't know; it's all speculative anyway. I think that for the next hundred years, robots will just be used in work, and to replace humans at work.



#16
EpochSix

    Member

  • Members
  • 175 posts

If artificial intelligence reaches a certain sophistication, in which it has begun to take on the features of an intelligent, sentient being with feelings of empathy, compassion, and wonder, I would hope we treat it as such. I think that as a species we're slowly starting to realize, through the development of new information technologies and artificial intelligence systems, that intelligence is not unique to biology. We're realizing that intelligence is not some kind of organic trait; it's a result of immense complexity in matter, a result of our brain's architecture, and something we're seeing traces of in our most complex technologies, like the internet and search engines. It doesn't matter what substrate intelligence arises in, whether a non-biological system or an organic one: consciousness is consciousness.

 

That being said, I have my doubts that when artificial intelligence systems reach that kind of sophistication they will have the same personality traits as us, like empathy, compassion, and wonder, unless they are designed to feel that sort of emotion. When I think about why humans developed these traits, I imagine it has a lot to do with our mortality. Natural evolution favours organisms that nurture each other; our development of empathetic behaviour and community is one of the things that ensured our survival. We know we will die one day, and that our time on this planet with each other is limited. We know that being hit hurts; we know what it feels like to be ill or suffering from a disease; we all share the common experience of suffering in various ways. This is not something artificial intelligence systems will have. They weren't raised by a labouring mother, nurtured into intelligence and physical health by a family, and they don't have a history of suffering and conflict. They have an indefinite lifespan, they were born into intelligence, and they have no barriers to knowledge; there's no concept of accomplishment, no history of triumph, failure, or suffering, nothing to fuel an ego. Why would an immortal intelligence with seemingly boundless access to information and knowledge, and no physical pain or suffering, demand any sort of respect, equality, or citizenship?


Edited by EpochSix, 28 January 2013 - 04:03 AM.


#17
kjaggard

    Artificer

  • Moderators
  • 2,827 posts
  • Location: where fanciful imaginings and hard won knowledge meet to genesis the future.

It's not really about want, so much. You don't set out to say, "I want to make a lifeform that has free will." You make a unit designed for set circumstances. If all it does is build toys and never sees a person, then no, it's not likely to be conscious or to need to think. But the more it must deal and interact with humans and animals, the more complex its thinking and reasoning will have to be. Even the dish collector lumbering around a cafe must understand when it's in danger of treading on somebody, and thus must keep track of and follow people's actions. It has to know not to take your meal while you are eating it, and to be able to identify your intent if you bring a dish to it.

 

In other words, it has to have some rudimentary understanding. In many cases dogs barely understand what is going on, and these service robots will have more than that level of understanding.



#18
SG-1

    Member

  • Members
  • 3,653 posts
  • Location: US - Arkansas

kjaggard said:
    Being able to identify others as distinct from yourself, understanding context and rhetorical questions, knowing when it's being spoken to, knowing societal rules and how to handle itself in public: these are all things that require complex reasoning, and they are also things you would very likely want an artificial entity to have.

Yeah, but why the heck would a sentient being want to serve anyone? It would be useless; they would want jobs, or just to have fun. They don't need to slave for anyone. That is why it's useless: you can't sell people as property, and they aren't going to do anyone's dishes for no reason.

 

EpochSix said:
    I have my doubts that when artificial intelligence systems reach that kind of sophistication they will have the same personality traits as us, like empathy, compassion, and wonder, unless they are designed to feel that sort of emotion.

I don't know if we can program consciousness. Do you have any other idea of how to create true intelligence using anything other than the human brain and growing up? That is why I think we will only get conscious AI by simulating the brain, and then it will need to learn. If we let it learn, we would obviously devise complex ways of making it as close to human as possible, so it grows up actually capable of being intelligent. Then it would have those traits.




#19
SG-1

    Member

  • Members
  • 3,653 posts
  • Location: US - Arkansas

*Sorry for the double post.

kjaggard said:
    In other words, it has to have some rudimentary understanding. In many cases dogs barely understand what is going on, and these service robots will have more than that level of understanding.

Yeah, well, I would argue we can get close enough to intelligence that a robot can do a specific task (butler or garbage man), but it won't be conscious.

 

Consciousness needs the ability to feel emotions on a human level; that is, more than just understanding, but feeling. And it needs the ability to create and imagine: to produce truly original ideas that did not come from a line of code. We can make AIs that write stories or build things but are not conscious.




#20
Raklian

    An Immortal In The Making

  • Moderators
  • 6,514 posts
  • Location: Raleigh, NC

EpochSix said:
    Why would an immortal intelligence with seemingly boundless access to information and knowledge, and no physical pain or suffering, demand any sort of respect, equality, or citizenship?

 

I suspect the very first intelligence we succeed in creating will be somewhat limited, as opposed to the immortal intelligence you described. It is likely this crude intelligence will share characteristics with the human mind, since we're presently working so hard to understand the human mind as well as other mammalian ones. After several breakthroughs, the artificial mind will have pathways similar to a human mind's, because that is all we understand at this point about how to design and create a conscious intelligence. So there is some chance it may show empathy. Of course, I'm just guessing.






