
Welcome to FutureTimeline.forum

The Simulation Problem


37 replies to this topic

#1
Pwaa

    Member

  • Members
  • 130 posts
  • Location: UK

This term was coined by Iain M. Banks in the brilliant The Hydrogen Sonata, although I'm sure you'll have heard of the theory plenty of times before.

 

If computing power turns out to be effectively unlimited, then only the software we develop will hold back what technology can do in the future. Assuming we overcome the coding barrier, surely one day we will be able to completely simulate life. This raises a question of morality, though: just because this can be done, should it be, and what should the regulations surrounding it be?

 

For example, if you create a simulation of life that includes feelings and emotions, as you would expect a full simulation to do, then even though it exists in a virtual world, it is still some "thing" that feels and thinks. So we would be playing god to control, or even observe, that world. As far as it's concerned it is real, and to inform it that it is in fact a simulation would probably skew any outcomes or actions it would take, making it a pointless simulation anyway, as the results would not be true to life.

 

So would turning such a simulation off be classed as murder?

And more worryingly, assuming everything else I said here is true, how could we ever tell whether or not we are a simulation?



#2
Ewan

    Member

  • Members
  • 1,093 posts
  • Location: London

Sentient life within VR would be able to survive outside of VR. You could stop the simulation without killing the host AI. 

 

We couldn't tell if we're a simulation. However, if we can create accurate simulations of the universe, then statistically speaking the chances of us not being in a simulation are essentially zero.
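The statistical reasoning here (essentially Bostrom's simulation argument) can be sketched as a toy calculation. The simulation counts below are made-up assumptions purely for illustration:

```python
# Toy sketch of the argument: if one base reality runs N indistinguishable
# simulations, a randomly chosen observer has only a 1-in-(N+1) chance of
# being in the base reality.

def chance_of_base_reality(n_simulations: int) -> float:
    """Probability of being the single unsimulated world among n + 1 worlds."""
    return 1 / (n_simulations + 1)

for n in (1, 1_000, 1_000_000):
    print(f"{n:>9,} simulations -> P(base reality) = {chance_of_base_reality(n):.2e}")
```

So if advanced civilisations run simulations at all, and run many of them, the odds of being the one "real" world shrink toward zero - which is all the statistical claim amounts to.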

 

Regarding software, I'll quote Kurzweil on this,

 

"Will we get there simply by more computation and better software, or are there currently unsolved barriers that we have to hurdle?

There are both hardware and software requirements. I believe we actually are very close to having the requisite software techniques. Partly this is being assisted by understanding how the human brain works, and we're making exponential gains there. We can now see inside a living brain and see individual inter-neural connections being formed and firing in real time. We can see your brain create your thoughts and thoughts create your brain. A lot of this research reveals how the mechanism of the neocortex works, which is where we do our thinking. This provides biologically inspired methods that we can emulate in our computers. We're already doing that. The deep learning technique that I mentioned uses multilayered neural nets that are inspired by how the brain works. Using these biologically inspired models, plus all of the research that's been done over the decades in artificial intelligence, combined with exponentially expanding hardware, we will achieve human levels within two decades."

 

Once we understand the brain we'll be able to make software that mimics its function. By that point we'll have the required hardware too. 
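As a rough illustration of the "multilayered neural nets" mentioned in the quote, here is a minimal sketch. The weights are hand-picked rather than learned, purely to show how layers of weighted sums followed by nonlinearities compose into a function (this particular net approximates XOR):

```python
import math

# Minimal multilayered neural net: each layer applies a weighted sum of its
# inputs followed by a nonlinearity (a sigmoid here). A real net would learn
# the weights from data; these are chosen by hand just to show the structure.

def sigmoid(x: float) -> float:
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: weights is a list of per-neuron weight lists."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(inputs, layers):
    for weights, biases in layers:
        inputs = layer(inputs, weights, biases)
    return inputs

# A tiny 2-input, 2-hidden-neuron, 1-output network.
net = [
    ([[4.0, 4.0], [-4.0, -4.0]], [-2.0, 6.0]),   # hidden layer (acts like OR / NAND)
    ([[6.0, 6.0]], [-9.0]),                      # output layer (acts like AND)
]
print(forward([1.0, 0.0], net))  # high output: the net treats (1, 0) as "true"
print(forward([1.0, 1.0], net))  # low output: the net treats (1, 1) as "false"
```

Stacking more layers like this, with learned weights, is exactly the "biologically inspired" layered structure the quote is describing.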

 

Regarding morality, I don't think it really matters. These things will be developed regardless of what we think in the West. It's like the morality of engineering children to be born smarter: China is already doing it, and has been developing methods for quite a while. I think the times of being held back by morals have gone, which is both good and bad in some ways.


Edited by Ewan, 26 April 2013 - 03:35 PM.


#3
FutureOfToday

    Member

  • Members
  • 4,685 posts
If it's in a computer, it's fake, nobody is really feeling anything, the computer is just understanding inputted information in a certain way. It's virtual, not real. It would be an excellent way to end things like animal testing, because they could test virtual products on virtual humans and no pain would be felt.

#4
Ewan

    Member

  • Members
  • 1,093 posts
  • Location: London
If it's in a computer, it's fake, nobody is really feeling anything, the computer is just understanding inputted information in a certain way. It's virtual, not real. It would be an excellent way to end things like animal testing, because they could test virtual products on virtual humans and no pain would be felt.

 

It's fake? Pain in VR is still pain, exactly like it is in the "real" world. He's talking about sentient creatures that are afforded the same rights as we hold, not primitive computer programs with no senses. Once AIs become self-aware they should be treated like human beings; otherwise we're just going back to slavery.

 

It's amusing, because what you described is essentially what we are. Our senses are the input device & our brain is the organiser. There is nothing fake about the pain we feel, that should be evident enough. 


Edited by Ewan, 26 April 2013 - 07:37 PM.

  • Rkw and Pwaa like this

#5
DJKiran

    Member

  • Members
  • 218 posts
  • Location: UK

There has been some evidence suggesting we are living in a simulation, where researchers observed a 'pixelation' in 3D space or something like that, but I can't remember where I heard that....

 

Anyway, the life forms who supposedly created the simulation are probably doing it to see how a universe evolves with certain initial conditions, with all its intrinsic features like stars, black holes, planet formation, life, quarks, forces, etc...

Maybe we will come across a 'glitch' in their system that will cause this world to fracture, creating rifts in the matrix where you can see the hardware which hides under the blanket of reality.

 

Maybe this simulation is just one of many running at the same time (parallel universes), so they would be able to see which initial conditions make a successful universe that survives and thrives indefinitely. They could then 'create' a universe with their super-advanced femtotechnology (and then some) to move to, because their current one was predicted to end with an Anti-Big Bang - an ultramassive black hole formed from trillions of supermassive black holes that merged together via the repelling property of dark matter....

 

...Phew


Innovation and Change are the most important things in life

The Future is worth waiting for..... - Kiran Appiah 2011


#6
FutureOfToday

    Member

  • Members
  • 4,685 posts

If it's in a computer, it's fake, nobody is really feeling anything, the computer is just understanding inputted information in a certain way. It's virtual, not real. It would be an excellent way to end things like animal testing, because they could test virtual products on virtual humans and no pain would be felt.

 It's fake? Pain in VR is still pain, exactly like it is in the "real" world. He's talking about sentient creatures that afford the same rights as we hold, not primitive computer programs with no senses. Once AI become self aware they should be treated like human beings, otherwise we're just going back to slavery. It's amusing, because what you described is essentially what we are. Our senses are the input device & our brain is the organiser. There is nothing fake about the pain we feel, that should be evident enough. 

But we are organisms, living, breathing organisms. Computer simulations are not. I can't get my head around this point of view that so many people seem to have.

#7
Zeitgeist123

    Member

  • Members
  • 1,805 posts

I think it is murder if you extinguish a sentient intelligence that possesses genuine feelings, emotions, dreams, self-awareness and the ability to distinguish between what is right or wrong (pleasurable or painful) towards another, even if it isn't human but a creature, an AI robot, a drone or a piece of software. Killing bugs or birds is not murder unless science has proven these creatures to be sentient too.


Edited by Zeitgeist123, 26 April 2013 - 09:07 PM.

“Philosophy is a pretty toy if one indulges in it with moderation at the right time of life. But if one pursues it further than one should, it is absolute ruin." - Callicles to Socrates


#8
DJKiran

    Member

  • Members
  • 218 posts
  • Location: UK

 

If it's in a computer, it's fake, nobody is really feeling anything, the computer is just understanding inputted information in a certain way. It's virtual, not real. It would be an excellent way to end things like animal testing, because they could test virtual products on virtual humans and no pain would be felt.

  It's fake? Pain in VR is still pain, exactly like it is in the "real" world. He's talking about sentient creatures that afford the same rights as we hold, not primitive computer programs with no senses. Once AI become self aware they should be treated like human beings, otherwise we're just going back to slavery.    It's amusing, because what you described is essentially what we are. Our senses are the input device & our brain is the organiser. There is nothing fake about the pain we feel, that should be evident enough. 

 

But we are organisms, living, breathing organisms. Computer simulations are not. I can't get my head around this point of view that so many people seem to have.

By then they would have invented something to simulate the breathing mechanism - simulating air flow - and from this, they would simulate how the body handles breathing, making it 'seem' like you are actually breathing.




#9
FutureOfToday

    Member

  • Members
  • 4,685 posts
Yes, I understand that much, but how could a computer create a consciousness? A computer isn't conscious; it's just a machine that reacts the way it has been programmed to. It doesn't really 'understand' anything, it's just been built to react in certain ways to different inputs.

#10
Zeitgeist123

    Member

  • Members
  • 1,805 posts

But it should be possible to create consciousness (at a sentient level) in software in the future, maybe exactly like ours or even beyond it. At that point, they should be entitled to universal rights like any of us with sentient consciousness.


Edited by Zeitgeist123, 26 April 2013 - 09:31 PM.



#11
FutureOfToday

    Member

  • Members
  • 4,685 posts
If it is made possible to create actual consciousness in software, then laws should be passed to protect those "who" are conscious within the software.

#12
Zeitgeist123

    Member

  • Members
  • 1,805 posts

Exactly - at that point human rights should extend to these software beings, AI robots, gene-manipulated sentient animals, etc...




#13
FutureOfToday

    Member

  • Members
  • 4,685 posts
I think when it comes to AI robots, we should create them without consciousness, to prevent crimes against them.

#14
Zeitgeist123

    Member

  • Members
  • 1,805 posts

Well, we can always preprogram them to be empathetic and nonviolent, like Asimov's laws of robotics. Although I agree that an AI overlord managing a country's resources and people's welfare doesn't necessarily have to have a consciousness.




#15
FutureOfToday

    Member

  • Members
  • 4,685 posts
Program them to learn, project emotions and react to information they have received, but have no actual consciousness, so they are just machines working inanimately. They don't need to actually experience anything as living things, just exist in a way that they can do their job correctly.

#16
Ewan

    Member

  • Members
  • 1,093 posts
  • Location: London
Yes, I understand this far, but how could a computer create a consciousness? A computer isn't conscious, it's just a machine that reacts in a certain way it has been programmed to, it doesn't really 'understand' anything, it's just been built to react in certain ways to different inputs.

 

In its most crude form, yes, computer AI is like this. But that is quite an archaic way of thinking about it, because AI won't always be like this (and isn't even today). The human brain is essentially a computer if you think about it. A certain number of basic instincts are programmed in, but most of what we learn comes through trial and error. You touch something hot, you get a pain response, that's not nice, so you don't touch it again. Over time you build up a network of responses to certain stimuli, which is why you react the way you do.
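That trial-and-error loop can be sketched in a few lines. The stimuli and reward numbers below are invented purely for illustration:

```python
import random

# Toy version of the trial-and-error learning described above: an agent keeps
# a score for each action, nudges the score toward the pain/pleasure signal it
# receives, and gradually stops picking actions that hurt.

random.seed(0)

rewards = {"touch_hot_pan": -1.0, "touch_cold_cup": 0.5}  # hypothetical stimuli
values = {action: 0.0 for action in rewards}              # what the agent has learned
learning_rate = 0.5

for trial in range(50):
    # Explore occasionally; otherwise pick the action with the best learned value.
    if random.random() < 0.2:
        action = random.choice(list(rewards))
    else:
        action = max(values, key=values.get)
    # Move the stored value toward the reward actually received.
    values[action] += learning_rate * (rewards[action] - values[action])

print(values)
```

After enough trials the learned values settle near the true pain/pleasure signals and the greedy choice avoids the hot pan - the same build-up of stimulus-response associations, in miniature.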

 

The more we understand about how our brain works, the better we can emulate it in machines. Mechanical devices can be created that perfectly replicate biological systems, but we first need to learn how those biological systems work. That's why the creation of AI in the timeline is very clearly linked to our understanding of the brain. Kurzweil, I believe, suggests we'll achieve realistically "human" AI around 2029. We first need the computing power to model the system, which leads to understanding, which can then be put into practice with mechanical hardware and software. That's basically how all cybernetics works: you're essentially copying the human skeleton, but using non-biological parts which are stronger. The human brain is pretty good at storing things, but mechanical storage will be better. Our brain can currently hold ~2.5 petabytes; a storage device with that capacity is a lot bigger than our brain, true, but that will change.

 

To suggest conscious AI isn't possible is to suggest there is something more to us than our body that isn't quantifiable - a soul, as it were. I don't believe that; it makes no sense at all.



#17
FutureOfToday

    Member

  • Members
  • 4,685 posts
I think what I can't get my head around is whether or not non-biological consciousness is possible. My mind seems to be fixated on the idea that consciousness is exclusive to biological organisms.

#18
Ewan

    Member

  • Members
  • 1,093 posts
  • Location: London
Program them to learn, project emotions and react to information they have received, but have no actual consciousness, so they are just machines working inanimately. They don't need to actually experience anything as living things, just exist in a way that they can do their job correctly.

 

You can't really do that haha! If something has emotions and is realistic in its interactions, then it is sentient. There isn't some power meter that reads "It's over 9000!!! He's sentient now!!!". Sentience is something AIs have to demonstrate themselves; you will be able to tell whether they are or not.



#19
FutureOfToday

    Member

  • Members
  • 4,685 posts
But isn't that like saying a smartphone has consciousness? A smartphone takes in information and "understands" it. Just because something is built to look like a human, does it make the difference?

#20
Ewan

    Member

  • Members
  • 1,093 posts
  • Location: London
I think what I can't get my head around is whether or not non-biological consciousness is possible. My mind seems to be fixated on the idea that consciousness is exclusive biological organisms.

 

That's because you haven't accepted the fact that we're just a biological computer. When you realise that, you realise that, like everything in the universe, it can be copied with sufficient understanding.


  • FallenPears likes this



