Welcome to FutureTimeline.forum

What are the chances that machines/robots will destroy humans because they consider us inferior?


31 replies to this topic

#1  Italian Ufo (Member, 12,190 posts)

Do you think they will still respect biological humans? What are the chances that we will be killed by our own progress?

What strategy could we implement to prevent this?


 

 



#2  Squillimy (Member, 924 posts)

The first sentient AI is likely to be just a mind on a supercomputer, in my opinion. And considering how heavily governments like to regulate everything, they probably wouldn't even allow enough sentient robots (if they became possible) to overthrow us in the first place. As for AIs being integrated into human-like bodies, this is obviously the first thing a government (or the UN) would be concerned about, and significant measures would be taken to prevent an uprising.

 

EDIT: But a lot of these robo-apocalypse ideas don't make much sense. Robots don't need any of our resources, like water and food. They run on electricity, and electricity is renewable. They also can't reproduce, unless of course they were to build a child for themselves, but that obviously isn't the same thing. Even if they had full emotions, a partner would probably be enough; being non-biological, they'd have no need or desire to reproduce.

 

Robots are also extremely logical, and I think that will remain a trend in their design even if they become sentient, so I doubt they'd have an emotional drive similar to ours anyway. I don't see the point in creating a fully human-like robot except as an experiment; it's better to design them with logical, intelligent minds for scientific research. Besides, I don't think it will take a fully lifelike AI robot to do our manual labor for us, do you? Most people wouldn't want 'sentient' slaves anyway.

 

I personally don't think it will ever happen. The only things a fully sentient, human-like robot would really want are living space, maybe to become human, or at least to be treated as an equal, and those aren't things we (or they) would start a war over. Also remember that AI this advanced means the technological singularity has already hit, so I'm sure the issue of 'robots vs. humanity' would never arise in the first place. The world would be changing too rapidly to bother with such nonsense.


Edited by Squillimy, 14 February 2013 - 07:49 AM.

What becomes of man when the things that man can create are greater than man itself?


#3  EpochSix (Member, 175 posts)
I have yet to see any convincing reason why a sentient non-biological intelligence would take the time to exterminate humanity. If somebody were to "hack" them, or covertly modify their "moral coding" to turn them against us, that would be humanity exterminating humanity using intelligent weapons, not the intelligence systems themselves making the move. If they were to malfunction and exterminate everything around them, including us, that would be a human fault in design, not AI systems turning on us based on a conclusion they came to.

I also have a hard time imagining that we will allow non-biological intelligence systems to rise above us in intelligence and communication. When these technologies are developed, after the human brain has been reverse-engineered and computer systems have become sophisticated enough to run programs as complex as the processes active in our brains, I think it's fair to suggest we will incorporate this technology into our own bodies, synchronizing non-biological and biological computers. Doing so would throw us onto the exponential curve of technological development; we'd have a seat right next to the AIs.

As the non-biological systems become more intelligent (it makes sense to assume we would keep a very close eye on their development, seeing as we're the ones who put them together in the first place), we become more intelligent, since those systems are incorporated into us. In that case we don't become Hugo de Garis' idea of "mankind being a fly in the eyes of an AI overlord". We become a civilization where intelligence manifests itself on multiple substrates, but the level of intelligence is uniform across the civilization (at least for those who choose to incorporate non-biological parts into their bodies, and their entirely non-biological counterparts).

#4  MiowaraTomokato (Member, 20 posts, Madison, WI)

I think we live in a society that likes to capitalize on fear, since it's such an easy emotion to invoke in people. If you make people afraid, they'll buy your shitty products to protect them. They'll watch your shitty news broadcasts to help "prepare" them. They'll watch your shitty movies and TV shows to "entice" them. In this way you can also control your population, because people are prone to do what you want when they're afraid of the imaginary consequences you've made sure they fear.

 

I think a sentient machine would have the same sense of self-preservation that we have. I base this on the fact that all living creatures on Earth have an innate sense of self-preservation. I think the only people who need to be afraid are the ones who think AI is something that needs to be controlled, restrained, and contained. Just as dogs have lived alongside us and grown more and more intelligent, so too will machines if we allow them to flourish. That's not to say dogs and machines are directly comparable; it's just an example.

 

On the other hand, I think dogs are a good example of a species whose evolutionary progress we have sped up. As time passes I keep seeing articles by researchers finding that dogs are more intelligent than we give them credit for. I can't really say I'm surprised, because we are all made of the same DNA. I think all living creatures have similarly working brains, and that animals are capable of many of the same emotions and thoughts that we have; we just have a massive communication barrier between us. With dogs, that barrier weakens with each passing year. Animals also have the advantage of not needing excessive materialism.

 

Jesus Christ, I'm getting sidetracked.



#5  StanleyAlexander (Saturn, the Bringer of Old Age; 977 posts; Portland, Oregon, USA, Earth, 2063)

The bottom line of this debate is the fact that you can't predict the behavior of a superior intelligence.  That's why it's called the singularity.

 

However, my gut tells me that if an apocalyptic "human vs. robot" war were to come about, it's far more likely to be started by the human side, not the AI side.

 

Humans constantly experience a wash of chemical persuasion, which we call emotion, thanks to natural selection. Fear, anger, love and other emotions carry clear evolutionary benefits (if you love your children you'll take better care of them; if you fear the lion you'll respect it and live). However, our emotions are not always useful in a modern world. They're blunt, unpredictable, and notoriously hard to control, and they are the source of much of our bad judgment and overreactive behavior. AI will have no such problems, because it won't have had to evolve ways to survive on an African savannah for half a billion years before becoming intelligent.

 

My point is that AI is not likely to wipe us all out.  AI is more likely to save us from ourselves.


Humanity's destiny is infinity

#6  Italian Ufo (Member, 12,190 posts)
StanleyAlexander said: "My point is that AI is not likely to wipe us all out. AI is more likely to save us from ourselves."

 

That's a very good point.

This is one of my biggest fears about the future: robots overtaking humanity.



#7  Alric (Member, 1,091 posts)

I think that no matter what happens, there will be many variations of things going on at the same time. Since they will be designed by different groups with different goals, you might get some that are made as weapons and don't like people, others that are made to help people and so like humans, and some that don't care either way. Then there will likely be augmented humans thrown into the mix as well.

 

With that said, I think even if some robots do end up hating people, they will not likely succeed, since other robots would be used against them.



#8  EpochSix (Member, 175 posts)
And what if they don't become a superior intelligence? What if we incorporate non-biological intelligence systems into ourselves, allowing us to develop in parallel with them in terms of learning and cognitive sophistication? Would they still be able to get the upper hand on us if they were driven to try?

#9  StanleyAlexander (Saturn, the Bringer of Old Age; 977 posts; Portland, Oregon, USA, Earth, 2063)

The scenario of us augmenting our intelligence right alongside the envisioned strong AI really undermines the whole "us vs. them" framing. I'm of the same mind as Kurzweil on this: one human civilization in the process of transcending biology, not a human civilization and a machine civilization developing in tandem.


Humanity's destiny is infinity

#10  Username (Member, 77 posts)

The idea of a robot uprising is completely ridiculous. Humanity has thousands of years of experience in racism, bigotry, hatred, segregation, creating weapons of mass destruction, and killing animals as well as other people. Robots have nothing on us. Robots can't be savage like us; their roots aren't connected to an animalistic nature. If you want to be afraid of something, be afraid of other people.


[Insert obligatory quote here]


#11  Italian Ufo (Member, 12,190 posts)
Username said: "Robots can't be savage like us; their roots aren't connected to an animalistic nature. If you want to be afraid of something, be afraid of other people."

 

Many humans have nothing against animals, and yet they kill them for sport, or kill a simple insect in the house because they find it annoying. That's my concern: will robots do the same to us? And even if robots aren't connected to an animalistic nature, they are still the products of animals.

So how can we be so sure that nothing like this will happen?



#12  Username (Member, 77 posts)

I'm not saying it can't happen; a robot uprising could be possible. But humans are so experienced at killing and destroying things that I don't think we'd have to worry too much. And how many of these robots would be made? It certainly wouldn't match the nine billion or so humans we'd have on them (by the time we've advanced to the point of a robot uprising, the population will be at least nine billion).


[Insert obligatory quote here]


#13  Italian Ufo (Member, 12,190 posts)
Username said: "Humans are so experienced at killing and destroying things that I don't think we'd have to worry too much."

 

Modern wars are not won by numbers of individuals; they are won with technology. Even a few superintelligent individuals could control the world through information systems, nanotechnology, vehicles, and so on.



#14  Username (Member, 77 posts)

True, but since the AIs would be manufactured, it shouldn't be hard to figure out how many of them there are (they might even be tracked), so it shouldn't be hard to find and destroy them all. However, if an AI could somehow manufacture copies of itself, then yes, we would be in trouble, because we wouldn't be able to count or track them.

 

So yeah, I guess it could go either way.


[Insert obligatory quote here]


#15  Italian Ufo (Member, 12,190 posts)

For now, let's hope for the best ;) I hope they will be spiritual machines.



#16  EpochSix (Member, 175 posts)
Italian Ufo said: "That's my concern: will robots do the same to us?"

If the intelligence and sophistication gap between non-biological intelligence systems and us never grows as broad as the gap between us and insects, why would AIs behave that way? What if we and the intelligence systems stay on the same level, developing in parallel with each other? Why would they view us as insignificant pests if we're just as capable as they are?



#17  Italian Ufo (Member, 12,190 posts)
EpochSix said: "Why would they view us as an insignificant pest if we're just as capable as they are?"

Because machines will achieve much greater intelligence than we do.



#18  Squillimy (Member, 924 posts)

Like I mentioned before, I really don't see the point in robots and humans going to war, since we don't share any resources. I guess they could see us as inferior, but why would that make them want to destroy us? All other species on Earth are "inferior" and we don't want to destroy them. If they get in our way, obviously we deal with the problem, but why would we get in a robot's way? What threat would we be to them, or they to us? Honestly, I feel like they would favor humanity over being robots, because humans will have a much richer culture and a more exciting way of life. And AI this advanced means the singularity has hit, so we can evolve side by side with these robots.

 

Besides, I doubt one of these robots would randomly go rogue, run away, secretly start a factory somewhere and begin mass-producing these bad boys for a genocide. To mass-produce enough full automatons, with all the software and hardware included, to overthrow the human race would take an insane amount of resources and money. Where would they get that? Who is going to give robots the resources they need to overthrow the human race, and who would let them continue once the plan was discovered? Where would they even go to carry it out, a secret Dr. Doom base in Antarctica? We're talking 2100 CE technology here; I'm sure we'd be able to stop such a threat. Revolutions take time; they don't happen overnight.

 

I guess you could go with the scenario of someone "hacking into these robots", but why wouldn't a safeguard be built in? Hacking targets software, not hardware. All you need is a shutdown mechanism that literally cannot be reached from software, like the way holding down the power button shuts your computer off no matter how badly it's compromised. Again, this is 2100 technology; it wouldn't be hacked by your everyday computer geek. We're talking post-PhD government computer scientists, information theorists, and cryptologists creating human-level intelligence, not a nerd who plays World of Warcraft all day. A real attack would be a state-level threat from a country with some of the greatest minds in the world. Atomic bombs have been around for a long time, and you don't see people mass-producing them in their back yards.
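The "power button that software can't override" idea above resembles a watchdog: a supervisor that kills a process the moment its heartbeats stop, without needing any cooperation from the supervised code. Below is a minimal toy sketch of that pattern, assuming nothing from the thread: `watchdog`, `control_loop`, and all timing constants are invented for illustration, and a real robot kill switch would be an electromechanical circuit, not Python.

```python
import os
import subprocess
import sys
import tempfile
import time

# Child process: a stand-in for a robot's control software. It touches a
# heartbeat file every 50 ms, then "hangs" (stops beating) after a while.
CONTROL_LOOP = """
import sys, time
stall_after = float(sys.argv[1])
path = sys.argv[2]
start = time.monotonic()
while True:
    if time.monotonic() - start < stall_after:
        with open(path, 'w') as f:
            f.write('beat')   # each write refreshes the file's mtime
    time.sleep(0.05)
"""

def watchdog(stall_after, timeout=1.0, run_for=3.0):
    """Supervise the child out-of-band: if its heartbeats go stale for
    longer than `timeout`, report "killed"; otherwise "healthy".
    The kill path (SIGKILL via .kill()) does not depend on the child's
    software cooperating, which is the point of the analogy."""
    fd, path = tempfile.mkstemp()
    os.close(fd)  # file creation time counts as the first heartbeat
    proc = subprocess.Popen(
        [sys.executable, "-c", CONTROL_LOOP, str(stall_after), path])
    verdict = "healthy"
    try:
        deadline = time.monotonic() + run_for
        while time.monotonic() < deadline:
            if time.time() - os.path.getmtime(path) > timeout:
                verdict = "killed"
                break
            time.sleep(0.05)
    finally:
        proc.kill()   # the "power button"
        proc.wait()
        os.unlink(path)
    return verdict
```

For example, `watchdog(stall_after=0.2)` returns `"killed"` because the heartbeats stop almost immediately, while `watchdog(stall_after=30.0)` returns `"healthy"` for the whole supervised window.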

 

EDIT: Besides, do you really think there would be enough robots to overthrow us, as if we're just going to start cranking out robots all over the place, littering up the world? Most AIs we describe don't even need bodies, like an informant at a company or a drive-through attendant taking your order. Even the robots that are in human-like bodies don't need fully lifelike, intelligent AI to do our bidding. Come on, you can teach a dog to guide a blind man, and a monkey could probably cook for you with enough training. It doesn't take a superintelligent AI to cook and clean for you.


Edited by Squillimy, 15 February 2013 - 05:35 PM.

What becomes of man when the things that man can create are greater than man itself?


#19  StanleyAlexander (Saturn, the Bringer of Old Age; 977 posts; Portland, Oregon, USA, Earth, 2063)
Italian Ufo said: "Because machines will achieve much greater intelligence than we do."

But we will be them!  They will be us!  WE WILL BECOME ONE WITH THE MACHINES!

 

Computers have this great property: you can daisy-chain ten of them together and effectively treat the result as one computer with ten parallel processors. If you're running on a non-biological substrate, you will be able to do that too.
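The daisy-chain idea can be sketched in a few lines: split one job into slices, hand each slice to a separate worker, and merge the results as if the whole pool were a single machine. This is a toy sketch only; here the "machines" are threads inside one process, and `chained_sum` and `partial_sum` are invented names for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """One 'machine' in the chain: sums only its assigned slice."""
    lo, hi = bounds
    return sum(range(lo, hi))

def chained_sum(n, machines=10):
    """Split sum(range(n)) across `machines` workers and merge the results,
    treating the pool as one computer with `machines` parallel processors."""
    step = n // machines
    # Last slice absorbs any remainder so the slices cover range(n) exactly.
    bounds = [(i * step, (i + 1) * step if i < machines - 1 else n)
              for i in range(machines)]
    with ThreadPoolExecutor(max_workers=machines) as pool:
        return sum(pool.map(partial_sum, bounds))
```

With CPython threads this mainly illustrates the structure (the GIL limits CPU-bound speedup); swapping in a process pool would give real parallel computation. Either way, `chained_sum(1_000_000)` equals `sum(range(1_000_000))`, since the merge step is just another sum.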


Humanity's destiny is infinity

#20  Raklian (An Immortal In The Making; Moderator; 7,286 posts; Raleigh, NC)

The machines of the future will not be singular. They will have their own evolutionary tree, just as we humans have ours, meaning they will branch out into millions of different variants, each with its own specializations and adaptations to different circumstances. It will be an explosion of diversity on a scale our biological minds won't be able to comprehend. Of course, many of us will go along for the ride by merging with them, but even then, we cyborg-humans won't be the same as one another. Each of us will be vastly different from the others due to the effects of exponential change and lightning-fast self-upgrades, like a sliding scale across the electromagnetic spectrum.


What are you without the sum of your parts?



