The Simulation Problem



#21
FutureOfToday
Well, it's that boundary between machinery and biology that gets me. I guess overall the main structure behind them is kind of similar, with the brain and everything.

#22
Ewan
But isn't that like saying a smartphone has consciousness? A smartphone takes in information and "understands" it. Just because something is built to look like a human, does that really make the difference?

 

Let's take Siri as an example. Cool program, but you can tell it's not human, can't you? It's not "realistically" sentient; that's the key. You can't have a real discussion with it, it can't fall in love with you, it doesn't have feelings. There's a good example of this in Ghost in the Shell: SAC, season 1, episode 3, when they're investigating the Jeri love robot. That robot can mimic consciousness by regurgitating things it has "read" or seen in movies, but it can't actually "think". If you try to have a discussion with this kind of robot, it will get lost and not know what to say, or start saying nonsense.

 

When you can have a conversation with an AI on an intellectual level, and you cannot tell that it is an AI, then it has become sentient. 
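That criterion is basically Turing's imitation game. Just to make it concrete, here is a toy sketch in Python (the judge, the canned answers and the function names are all made up for illustration, not any real chatbot API): the AI only "passes" when the judge's guesses fall to coin-flip accuracy, i.e. when run_test() returns about 0.5.

import random

# A toy version of the "can you tell it is an AI?" test (the imitation game).
# Everything here is a stand-in: hypothetical canned answers instead of a real
# judge, a real human, and a real chatbot.

HUMAN_ANSWERS = {"How do you feel today?": "Honestly, a bit tired but happy.",
                 "What is 7 times 8?": "56, I think."}
AI_ANSWERS = {"How do you feel today?": "I do not have feelings.",
              "What is 7 times 8?": "7 times 8 equals 56."}

def judge(answer):
    # The judge's guess: True means "I think this was the AI".
    return "do not have feelings" in answer or "equals" in answer

def run_test(rounds=1000):
    correct = 0
    for _ in range(rounds):
        question = random.choice(list(HUMAN_ANSWERS))
        is_ai = random.random() < 0.5  # secretly pick a conversation partner
        answer = (AI_ANSWERS if is_ai else HUMAN_ANSWERS)[question]
        correct += (judge(answer) == is_ai)
    return correct / rounds  # near 0.5 would mean "indistinguishable from a human"

print(run_test())  # with these canned answers the judge wins every round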



#23
FutureOfToday
I'm glad to say it's actually coming clearer to me now at last, lol! So when it's able to do things for itself, it's sentient, but when it can only do pre-programmed things it's not?

#24
Ewan
FutureOfToday said: "I'm glad to say it's actually coming clearer to me now at last, lol! So when it's able to do things for itself, it's sentient, but when it can only do pre-programmed things it's not?"

 

Do things for itself, think for itself, and express those feelings to outside observers in a believable way.



#25
FutureOfToday
I think that machines built to do those things will have to be designed in a way that is based heavily on the human brain and how it functions.

#26
Brohanne Jahms
FutureOfToday said: "I'm glad to say it's actually coming clearer to me now at last, lol! So when it's able to do things for itself, it's sentient, but when it can only do pre-programmed things it's not?"

 

Just imagine finding out we're a simulation!

 

Seriously though, all you know is what has been programmed into your brain. What makes you any different from a self-aware AI? Just because your hardware is biological?



#27
bee14ish

Off topic: I'd torture these programs just to see them in pain. Since I can't do it in real life, it would be a good way to let off steam after a bad day.



#28
Brohanne Jahms
bee14ish said: "Off topic: I'd torture these programs just to see them in pain. Since I can't do it in real life, it would be a good way to let off steam after a bad day."

 

Either you're extremely mentally ill or just a bad troll.



#29
SG-1

I don't think androids will normally be sentient in the future, at least not androids that serve humans.

 

We don't want slave labor, and we had a heated debate about this a while back. I think it is possible to have robots do all sorts of manual labor without being sentient. They don't need to be; if a robot was sentient, it would want a better job, lol. And if we denied it rights, well, we don't want an Artilect War on our hands.





#30
SG-1

EDITED:

Connection was slow, apparently the server wanted to post this three times.


Edited by SG-1, 27 April 2013 - 03:49 AM.



#31
SG-1

EDITED


Edited by SG-1, 27 April 2013 - 03:49 AM.



#32
Ewan
SG-1 said: "I don't think androids will normally be sentient in the future, at least not androids that serve humans. We don't want slave labor, and we had a heated debate about this a while back. I think it is possible to have robots do all sorts of manual labor without being sentient. They don't need to be; if a robot was sentient, it would want a better job, lol. And if we denied it rights, well, we don't want an Artilect War on our hands."

 

It gets a bit more complicated because you could program a sentient robot to want to help humans. There are so many moral problems when it comes to AI lol... Should you program a creature to be a slave? 



#33
FutureOfToday
Program it to "want" to serve humans in the same way that a PC "wants" to serve humans - without consciousness.

#34
Zeitgeist123

I think it's ethical to create a robot slave as long as it does not have self-awareness, hopes and dreams, etc. I seriously believe it simply isn't necessary for a very efficient robot, or even an AI overlord, to be sentient. But robots that are programmed to be sentient should be treated like any human being.


"Philosophy is a pretty toy if one indulges in it with moderation at the right time of life. But if one pursues it further than one should, it is absolute ruin." - Callicles to Socrates


#35
FutureOfToday
Yes. Like in video games, it's ethical to run people over and shoot people, because they're virtual, but if they were sentient, it would be wrong.

#36
SG-1
SG-1 said: "I don't think androids will normally be sentient in the future, at least not androids that serve humans. We don't want slave labor, and we had a heated debate about this a while back. I think it is possible to have robots do all sorts of manual labor without being sentient. They don't need to be; if a robot was sentient, it would want a better job, lol. And if we denied it rights, well, we don't want an Artilect War on our hands."

Ewan said: "It gets a bit more complicated because you could program a sentient robot to want to help humans. There are so many moral problems when it comes to AI lol... Should you program a creature to be a slave?"

That was the argument. Why would someone do that, though? It is too complicated, and we don't need them to be sentient. Would it work? Maybe, if we could "program" them to want nothing more than sweeping a floor and sleeping in a closet every second they aren't sweeping.

 

But we know that genetics do not control that much of our decision-making. Eventually they would want more. Even if they wanted to be janitors really badly, they would require cars and houses and entertainment. Why on earth (which is limited in resources, by the way) would we want to waste so much on a robotic janitor?

 

It also isn't morally right to control people's lifestyles. That is not right no matter how you spin it. If it is capable of sentient thought and we force it to do anything, we are in the wrong.




#37
Rkw

I suppose you could get around the morality question by allowing the simulated AI to continue even after its simulated death? Join the real world, or experience its very own version of "Heaven"? I don't think that will happen, though; like someone already said, moral or not, it will happen and gradually become accepted.

 

Question for FoT. Do you find it hard to grasp the idea that a sufficiently complex machine could house real "life", or do you not like that idea?

If in the future there was a movement where robots were literally protesting against enslavement and claiming they were alive, would you side with them or against them?



#38
bee14ish
bee14ish said: "Off topic: I'd torture these programs just to see them in pain. Since I can't do it in real life, it would be a good way to let off steam after a bad day."

Brohanne Jahms said: "Either you're extremely mentally ill or just a bad troll."

Maybe both.





