The Simulation Problem


38 replies to this topic

#21 FutureOfToday
Well, it's that boundary between machinery and biology that gets me. I guess overall the main structure behind them is kind of similar, with the brain and everything.

#22 Ewan
But isn't that like saying a smartphone has consciousness? A smartphone takes in information and "understands" it. Just because something is built to look like a human, does that really make a difference?

 

Let's take Siri as an example. Cool program, but you can tell it's not human, can't you? It's not "realistically" sentient, that's the key: you can't have a real discussion with it. It can't fall in love with you, it doesn't have feelings. There's a good example of this in Ghost in the Shell: SAC, season 1, episode 3, when they're investigating the Jeri love robot. That robot can mimic consciousness by regurgitating things it has "read" or seen in movies, but it can't actually "think". If you try to have a discussion with this kind of robot, it will get lost and not know what to say, or start saying nonsense.

 

When you can have a conversation with an AI on an intellectual level, and you cannot tell that it is an AI, then it has become sentient. 
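What's being described here is essentially the Turing test. Just as a rough illustration (none of this is from the thread), here's a minimal Python sketch of that protocol; ask_human, ask_ai, and naive_judge are hypothetical stubs you'd replace with a real human, a real chatbot, and a real interrogator:

```python
import random

def ask_human(question: str) -> str:
    # Hypothetical stand-in for a real human participant.
    return "I'd have to think about that one."

def ask_ai(question: str) -> str:
    # Hypothetical stand-in for the AI under test.
    return "I'd have to think about that one."

def naive_judge(transcripts) -> int:
    # Placeholder judge: guesses at random. A real judge would read both
    # transcripts and pick the one that seems less human.
    return random.randrange(2)

def run_trial(questions, judge) -> bool:
    """One imitation-game trial: the judge sees two anonymous transcripts
    and guesses which respondent is the AI. True if the guess is right."""
    respondents = [("human", ask_human), ("ai", ask_ai)]
    random.shuffle(respondents)  # hide which respondent is which
    transcripts = [[(q, respond(q)) for q in questions]
                   for _, respond in respondents]
    return respondents[judge(transcripts)][0] == "ai"

questions = ["What did you dream about last night?",
             "Why is a joke funny the first time but not the tenth?"]
trials = 1000
correct = sum(run_trial(questions, naive_judge) for _ in range(trials))
print(f"Judge spotted the AI in {correct}/{trials} trials")
```

If the judge can't beat the ~50% chance rate over many trials, the AI is passing; that's the operational version of "you cannot tell that it is an AI".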



#23 FutureOfToday
I'm glad to say it's actually becoming clearer to me at last, lol! So when it's able to do things for itself, it's sentient, but when it can only do pre-programmed things, it's not?

#24 Ewan
FutureOfToday said:
I'm glad to say it's actually becoming clearer to me at last, lol! So when it's able to do things for itself, it's sentient, but when it can only do pre-programmed things, it's not?

 

Do things for itself, think for itself, and express its feelings to outside observers in a believable way.



#25 FutureOfToday
I think machines built to do those things will have to be designed in a way that's based heavily on the human brain and how it functions.

#26 Brohanne Jahms
FutureOfToday said:
I'm glad to say it's actually becoming clearer to me at last, lol! So when it's able to do things for itself, it's sentient, but when it can only do pre-programmed things, it's not?

 

Just imagine finding out we're a simulation!

 

Seriously though, all you know is what has been programmed into your brain. What makes you any different from a self-aware AI? Just because your hardware is biological?



#27 bee14ish
Off topic: I'd torture these programs just to see them in pain. Since I can't do it in real life, it would be a good way to let off steam after a bad day.



#28 Brohanne Jahms
bee14ish said:
Off topic: I'd torture these programs just to see them in pain. Since I can't do it in real life, it would be a good way to let off steam after a bad day.

 

Either you're extremely mentally ill or just a bad troll.



#29 SG-1
I don't think androids will normally be sentient in the future, at least not androids that serve humans.

 

We don't want slave labor, and we had a heated debate about this a while back. I think it is possible to have robots do all sorts of manual labor without being sentient. A robot doesn't need sentience; if it were sentient it would want a better job lol. If we denied it rights, well, we don't want an Artilect War on our hands.




#30 SG-1
EDITED:

Connection was slow; apparently the server wanted to post this three times.


Edited by SG-1, 27 April 2013 - 03:49 AM.



#32 Ewan
SG-1 said:
I don't think androids will normally be sentient in the future, at least not androids that serve humans.

We don't want slave labor, and we had a heated debate about this a while back. I think it is possible to have robots do all sorts of manual labor without being sentient. A robot doesn't need sentience; if it were sentient it would want a better job lol. If we denied it rights, well, we don't want an Artilect War on our hands.

 

It gets a bit more complicated because you could program a sentient robot to want to help humans. There are so many moral problems when it comes to AI lol... Should you program a creature to be a slave? 



#33 FutureOfToday
Program it to "want" to serve humans in the same way that a PC "wants" to serve humans - without consciousness.
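To make that concrete, here is a toy Python sketch of what "wanting" without consciousness amounts to in practice. Everything here (JanitorBot, step, the task strings) is invented for illustration; the point is that the "motivation" is nothing but a queue and an if-statement:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class JanitorBot:
    tasks: deque = field(default_factory=deque)

    def add_task(self, task: str) -> None:
        self.tasks.append(task)

    def step(self) -> str:
        # The bot "wants" to work only in the sense that this branch exists;
        # there is no inner state that could prefer anything else.
        if self.tasks:
            return f"doing: {self.tasks.popleft()}"
        return "idle: waiting in the closet"

bot = JanitorBot()
bot.add_task("sweep floor")
bot.add_task("empty bins")
for _ in range(3):
    print(bot.step())
# doing: sweep floor
# doing: empty bins
# idle: waiting in the closet
```

From the outside the behavior looks purposeful; on the inside there is nothing but the queue. That's the same sense in which a PC "wants" to serve you.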

#34 Zeitgeist123
I think it's ethical to create a robot slave as long as it does not have self-awareness, hopes and dreams, etc... I seriously believe that it simply isn't necessary for a very efficient robot or an AI overlord to be sentient. But robots that are programmed to be sentient should be treated like any human being.


"Philosophy is a pretty toy if one indulges in it with moderation at the right time of life. But if one pursues it further than one should, it is absolute ruin." - Callicles to Socrates


#35 FutureOfToday
Yes. Like in video games, it's ethical to run people over and shoot people, because they're virtual, but if they were sentient, it would be wrong.

#36 SG-1
SG-1 said:
I don't think androids will normally be sentient in the future, at least not androids that serve humans.

We don't want slave labor, and we had a heated debate about this a while back. I think it is possible to have robots do all sorts of manual labor without being sentient. A robot doesn't need sentience; if it were sentient it would want a better job lol. If we denied it rights, well, we don't want an Artilect War on our hands.

Ewan said:
It gets a bit more complicated because you could program a sentient robot to want to help humans. There are so many moral problems when it comes to AI lol... Should you program a creature to be a slave?

That was the argument. Why would someone do that, though? It is too complicated, and we don't need them to be sentient. Would it work? Maybe, if we can "program" them to want to live for nothing more than sweeping a floor and sleeping in a closet every second they aren't sweeping.

 

But we know that genetics do not control that much of our decision making.  Eventually they would want more.  Even if they wanted to be a janitor really badly, they would require cars and houses and entertainment.  Why on earth (which is limited in resources btw) would we want to waste so much on just a robotic janitor?

 

It also isn't morally right to control people's lifestyles; that is not right no matter how you spin it. If something is capable of sentient thought and we force it to do anything, we are wrong.




#37 Rkw
I suppose you could get around the morality question by allowing the simulated AI to continue even after its simulated death? Join the real world, or experience its very own version of "Heaven"? I don't think that will happen, though; like someone already said, immoral or not it will happen and gradually become accepted.

 

Question for FoT. Do you find it hard to grasp the idea that a sufficiently complex machine could house real "life", or do you not like that idea?

If in the future there were a movement where robots were literally protesting against enslavement and claiming they were alive, would you side with them, or against them?



#38 bee14ish
bee14ish said:
Off topic: I'd torture these programs just to see them in pain. Since I can't do it in real life, it would be a good way to let off steam after a bad day.

Brohanne Jahms said:
Either you're extremely mentally ill or just a bad troll.

Maybe both.



#39 funkervogt
This was a term penned by Iain M. Banks in the brilliant The Hydrogen Sonata, although I'm sure you'll have heard of the theory plenty of times before.

 

If computer power turns out to be unlimited, then only the software we develop will hold back what technology can do in the future. Assuming we overcome the coding barrier, surely one day we will be able to completely simulate life. This raises a question of morality, though: just because this can be done, should it be, and what should the regulations surrounding it be?

 

For example, if you create a simulation of life that includes feelings and emotions, as you would expect a full simulation to do, even if it is a simulation in a virtual world, it is still some"thing" that feels and thinks. So we would be playing god to control, or even observe, that world. As far as it's concerned it's real, and to inform it that it is in fact a simulation would probably skew any outcomes or actions it would take, thus making it a pointless simulation anyway, as the results would not be true to life.

 

So would turning such a simulation off be classed as murder?

And more worryingly, assuming everything else I said here is true, how could we ever tell whether we are or aren't a simulation?

 

I have a potential answer: 

 

 

One handicap to doing "ancestor simulations" to understand past events better is that it would be unethical to create "virtual humans" who would suffer and die inside a computer. But here's a possible solution: have AIs and posthumans voluntarily stand in for the virtual humans in the simulations.
 
Here's how it would work: Assume that the year is 2219, and not 2019. You aren't actually "you"--you are either a posthuman lying in a Matrix pod or an AGI. In either case, you agreed to have your memories temporarily blocked and your personality temporarily altered so you could live a lifetime as your current "human" self inside a computer simulation of Earth, circa 2019. After careful thought, you agreed to bear the burden of suffering for one human lifetime, for the sake of supporting a realistic ancestor simulation, and because you thought the experience of being "human" would enrich your understanding of the universe and make you more grateful for what you have. After your virtual human character dies in the simulation, you will be revived in the real world of 2219, and have your old memories and personality traits restored. However, you will also retain the qualities of the virtual human you once were. 
 

 

This would solve the "suffering" problem since no virtual humans would need to be created. Only posthumans and AGIs who had the strength of character and selflessness to endure one human lifetime of suffering would be in the simulation. 

 

https://www.futureti...ughts/?p=266711
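Mechanically, the proposal in that quote is a three-step protocol: block the volunteer's real memories, run one simulated lifetime, then restore the originals while retaining the simulated ones. A toy Python sketch of that lifecycle, with every name (Volunteer, enter_simulation, revive) invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Volunteer:
    real_memories: list
    sim_memories: list = field(default_factory=list)
    blocked: bool = False

    def enter_simulation(self) -> None:
        self.blocked = True  # real memories blocked, not deleted

    def live_sim_event(self, event: str) -> None:
        self.sim_memories.append(event)

    def revive(self) -> None:
        self.blocked = False  # originals restored on revival...
        # ...and the simulated lifetime is retained alongside them
        self.real_memories.extend(self.sim_memories)

v = Volunteer(real_memories=["a posthuman life in 2219"])
v.enter_simulation()
v.live_sim_event("one 'human' lifetime in simulated 2019")
v.revive()
print(v.real_memories)
# ['a posthuman life in 2219', "one 'human' lifetime in simulated 2019"]
```

The sketch shows why the suffering objection is sidestepped: nobody new is created, and the consent happens before the memory block.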





