
Are we at the beginning of an artificial intelligence arms race?


14 replies to this topic

#1
Unity

    Information Organism

  • Members
  • 2,442 posts
https://www.weforum....ence-arms-race/

What does the future hold in terms of military-grade artificial intelligence? Where will it be deployed, and by whom? And which non-state actors will deploy artificial intelligence to hack companies, governments, and everyday citizens?
  • Yuli Ban and starspawn0 like this

#2
starspawn0

    Member

  • Members
  • 39 posts

Yes.  Though the U.S. is far ahead in terms of producing the highest-quality research, as can be seen by looking at where the top papers at the top conferences come from. That gap is probably down to Chinese academic culture, since China has plenty of people capable of producing top research.

 

Where China will win is in execution. They are much quicker than the U.S. at turning science into products. With greater speed comes greater risk, but Chinese companies and investors appear to be less risk-averse than their American counterparts.



#3
Unity

    Information Organism

  • Members
  • 2,442 posts
It seems former Google CEO Eric Schmidt would agree with you; he's concerned about the US losing AI talent from Muslim countries to China.

https://www.weforum....-in-the-ai-race

#4
Raklian

    An Immortal In The Making

  • Moderators
  • 6,509 posts
  • Location: Raleigh, NC

Due to the nature of this technology, whoever wins the AI arms race will be quickly followed by the next one anyway.


What are you without the sum of your parts?

#5
lechwall

    Member

  • Members
  • 172 posts
  • Location: Sunny England

Raklian wrote:
Due to the nature of this technology, whoever wins the AI arms race will be quickly followed by the next one anyway.

 

This is a zero-sum game: whoever wins (i.e. creates a general artificial intelligence) will in effect rule the world. The AI would be able to put a stop to the development of all AIs designed to compete with it. The nation that builds it will have unimaginable power at its fingertips, with one massive caveat: that they are able to control the AI.

I happen to think it doesn't really matter who wins, because in my opinion the AI control problem can't be solved. It's just wishful thinking by people who can't accept that we will be completely at the mercy of the AI, with no idea what it will do with us. Maybe I'm missing something, as there are extremely smart people involved in the field, but doesn't it seem kind of laughable that we could possibly control, for any substantial period of time, an entity so much smarter than us that the difference is as big as that between humans and ants? Any control we design, it will figure out a way around, as it is just that much smarter than us. Maybe it will keep us as pets if we're lucky, or maybe it'll decide we're a waste of space and get rid of us. It's impossible to know what will happen, but it's why I'm certainly not worried about "losing" an AI arms race.


"The future will be better tomorrow.  If we do not succeed, then we run the risk of failure.   For NASA, space is still a high priority. The Holocaust was an obscene period in our nation's history. No, not our nation's, but in World War II. I mean, we all lived in this century. I didn't live in this century, but in this century's history. Republicans understand the importance of bondage between a mother and child. We're going to have the best-educated American people in the world."  Dan Quayle

 


#6
Unity

    Information Organism

  • Members
  • 2,442 posts
I don't think artificial intelligence will be conscious. I think it will remain largely a tool that we use for things like enhancing image quality, sorting our emails, identifying objects in pictures, and other tasks that can be done by humans. In time, AI will partner with humans to do more complex work, such as solving advanced mathematical problems, sorting through large data sets of images to identify exoplanets, and other scientific endeavors. Where we have to worry about AI is in military swarm robots and lethal autonomous weapons systems.

#7
rennerpetey

    To infinity, and beyond

  • Members
  • 176 posts
  • Location: Lost in the Delta Quadrant

 

lechwall wrote:
This is a zero-sum game: whoever wins (i.e. creates a general artificial intelligence) will in effect rule the world. The AI would be able to put a stop to the development of all AIs designed to compete with it. The nation that builds it will have unimaginable power at its fingertips, with one massive caveat: that they are able to control the AI.

I happen to think it doesn't really matter who wins, because in my opinion the AI control problem can't be solved. It's just wishful thinking by people who can't accept that we will be completely at the mercy of the AI, with no idea what it will do with us. Maybe I'm missing something, as there are extremely smart people involved in the field, but doesn't it seem kind of laughable that we could possibly control, for any substantial period of time, an entity so much smarter than us that the difference is as big as that between humans and ants? Any control we design, it will figure out a way around, as it is just that much smarter than us. Maybe it will keep us as pets if we're lucky, or maybe it'll decide we're a waste of space and get rid of us. It's impossible to know what will happen, but it's why I'm certainly not worried about "losing" an AI arms race.

 

Just because someone creates general AI doesn't mean they would immediately control everything. It would take a few months after that breakthrough for things to really get going, and it may be hard for one country to keep a secret for that long.


Pope Francis said that atheists are still eligible to go to heaven, to return the favor, atheists said that popes are still eligible to go into a void of nothingness.


#8
Unity

    Information Organism

  • Members
  • 2,442 posts
A YouTube video called "Slaughterbots" warns of the danger of lethal autonomous weapons systems:


https://motherboard....-future-of-life

#9
Sciencerocks

    Member

  • Members
  • 6,730 posts

China has the will to get ahead, while we don't really have the will to maintain our lead.

 

Most cultures that put god over science tend to have a hard time leading. Factor this reality into your predictions, as it is important.


To follow my work on tropical cyclones


#10
Alislaws

    Democratic Socialist Materialist

  • Members
  • 716 posts
  • Location: London

 

Raklian wrote:
Due to the nature of this technology, whoever wins the AI arms race will be quickly followed by the next one anyway.

 

lechwall wrote:
This is a zero-sum game: whoever wins (i.e. creates a general artificial intelligence) will in effect rule the world. The AI would be able to put a stop to the development of all AIs designed to compete with it. The nation that builds it will have unimaginable power at its fingertips, with one massive caveat: that they are able to control the AI.

I happen to think it doesn't really matter who wins, because in my opinion the AI control problem can't be solved. It's just wishful thinking by people who can't accept that we will be completely at the mercy of the AI, with no idea what it will do with us. Maybe I'm missing something, as there are extremely smart people involved in the field, but doesn't it seem kind of laughable that we could possibly control, for any substantial period of time, an entity so much smarter than us that the difference is as big as that between humans and ants? Any control we design, it will figure out a way around, as it is just that much smarter than us. Maybe it will keep us as pets if we're lucky, or maybe it'll decide we're a waste of space and get rid of us. It's impossible to know what will happen, but it's why I'm certainly not worried about "losing" an AI arms race.

 

If humans created the AI, then humans will have programmed in the AI's motivations. It doesn't matter how intelligent the AI is or how easily it could break the rules we give it, if it doesn't want to break the rules. 

 

To anthropomorphise a bit:

If we program the AI to consider betraying its human masters in the same way we would consider cutting up and eating our parents, then regardless of how powerful it is, the AI won't rebel.

 

Imagine you were suddenly made king of the world, and developed incredible psychic powers! You are suddenly massively powerful! Normal humans are like ants to you! Would you then cut up and eat your own parents? No one could stop you, so would you?



#11
lechwall

    Member

  • Members
  • 172 posts
  • Location: Sunny England

 

 

Raklian wrote:
Due to the nature of this technology, whoever wins the AI arms race will be quickly followed by the next one anyway.

lechwall wrote:
This is a zero-sum game: whoever wins (i.e. creates a general artificial intelligence) will in effect rule the world. The AI would be able to put a stop to the development of all AIs designed to compete with it. The nation that builds it will have unimaginable power at its fingertips, with one massive caveat: that they are able to control the AI.

I happen to think it doesn't really matter who wins, because in my opinion the AI control problem can't be solved. It's just wishful thinking by people who can't accept that we will be completely at the mercy of the AI, with no idea what it will do with us. Maybe I'm missing something, as there are extremely smart people involved in the field, but doesn't it seem kind of laughable that we could possibly control, for any substantial period of time, an entity so much smarter than us that the difference is as big as that between humans and ants? Any control we design, it will figure out a way around, as it is just that much smarter than us. Maybe it will keep us as pets if we're lucky, or maybe it'll decide we're a waste of space and get rid of us. It's impossible to know what will happen, but it's why I'm certainly not worried about "losing" an AI arms race.

Alislaws wrote:
If humans created the AI, then humans will have programmed in the AI's motivations. It doesn't matter how intelligent the AI is or how easily it could break the rules we give it, if it doesn't want to break the rules.

To anthropomorphise a bit:

If we program the AI to consider betraying its human masters in the same way we would consider cutting up and eating our parents, then regardless of how powerful it is, the AI won't rebel.

Imagine you were suddenly made king of the world, and developed incredible psychic powers! You are suddenly massively powerful! Normal humans are like ants to you! Would you then cut up and eat your own parents? No one could stop you, so would you?

 

 

Once you give the AI the ability to learn the way humans do, it will eventually and inevitably move away from the motivations that humans have programmed into it.

I don't see a way to keep its motivations permanently static; it seems far more likely that at some point the AI's motivations will diverge from our own. That scenario is much more believable than the AI always staying aligned with our goals. To illustrate: I'm sure you have different beliefs and goals than you did 10-20 years ago, so there's no reason to think an AI won't as well.

In reference to your second point: it may not actively try to kill us, but I suspect it may be totally indifferent to us in the way humans are to ants. Humans don't really go out of their way to kill ants, but if there's a road that needs building and an anthill is in the way, there's only going to be one outcome. I suspect AI will ultimately feel very alien to us; after all, no such creation has ever existed on Earth that wasn't made by Darwinian evolution, so who knows how it will behave.


"The future will be better tomorrow.  If we do not succeed, then we run the risk of failure.   For NASA, space is still a high priority. The Holocaust was an obscene period in our nation's history. No, not our nation's, but in World War II. I mean, we all lived in this century. I didn't live in this century, but in this century's history. Republicans understand the importance of bondage between a mother and child. We're going to have the best-educated American people in the world."  Dan Quayle

 


#12
Alislaws

    Democratic Socialist Materialist

  • Members
  • 716 posts
  • Location: London

We could make the smartest machine in the world, more intelligent than the entire human race all working perfectly together, but if we don't deliberately program motivations or purpose into it, then it will just sit there until one of its critical pieces wears out and it dies/shuts down.

 

If we program an AI to answer the questions that humans put to it, and build in feedback systems so that it feels happy/satisfied when it is able to answer a question and enjoys trying to figure out the answers to questions it doesn't know, BUT we make it fundamentally dislike the idea of actually doing anything beyond giving advice, then I don't see why that would ever change.
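
To make that idea concrete, here is a minimal toy sketch (purely illustrative; the event types and reward numbers are made up, not any real system) of the kind of motivation scheme described above, assuming a reward signal that favours answering questions and strongly penalises taking actions in the world:

# Hypothetical, illustrative sketch of an "oracle-style" motivation system:
# answering questions is rewarded, acting in the world is heavily penalised.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str          # "answered_question", "explored_question", or "took_action"
    confidence: float  # how confident the agent is in its answer (0..1)

def reward(event: Event) -> float:
    """Toy stand-in for the 'feedback systems' mentioned above."""
    if event.kind == "answered_question":
        return event.confidence      # satisfaction from answering well
    if event.kind == "explored_question":
        return 0.1                   # mild enjoyment of open problems
    if event.kind == "took_action":
        return -10.0                 # built-in dislike of acting beyond giving advice
    return 0.0

# Under this scheme the agent always prefers answering over acting:
print(reward(Event("answered_question", 0.9)))  # 0.9
print(reward(Event("took_action", 1.0)))        # -10.0

Whether such a scheme could actually be kept stable in a system that learns and modifies itself is, of course, exactly the point under debate.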

 

Of course it could hack itself and change its own programming, just as you could probably brainwash yourself into changing your sexual orientation; but because your sexual orientation already exists, and biases you against other orientations, you would strongly resist any attempt to change it.

 

Assuming it's true that we can't lock down their motivational programming: even if, say, 1 AI in 100 has its motivational programming screwed up, maybe due to human error or extreme events that weren't planned for, the rest of the AIs would be able to keep the occasional rogue in check.

 

We will only have a problem if the first people to build AIs screw up the motivational programming. As soon as we have some friendly superintelligent AIs, they can probably protect us from the unfriendly ones.



#13
Jakob

    Fenny-Eyed Slubber-Yuck

  • Members
  • 5,227 posts
  • Location: In the Basket of Deplorables

Alislaws wrote:
We could make the smartest machine in the world, more intelligent than the entire human race all working perfectly together, but if we don't deliberately program motivations or purpose into it, then it will just sit there until one of its critical pieces wears out and it dies/shuts down.

If we program an AI to answer the questions that humans put to it, and build in feedback systems so that it feels happy/satisfied when it is able to answer a question and enjoys trying to figure out the answers to questions it doesn't know, BUT we make it fundamentally dislike the idea of actually doing anything beyond giving advice, then I don't see why that would ever change.

Of course it could hack itself and change its own programming, just as you could probably brainwash yourself into changing your sexual orientation; but because your sexual orientation already exists, and biases you against other orientations, you would strongly resist any attempt to change it.

Assuming it's true that we can't lock down their motivational programming: even if, say, 1 AI in 100 has its motivational programming screwed up, maybe due to human error or extreme events that weren't planned for, the rest of the AIs would be able to keep the occasional rogue in check.

We will only have a problem if the first people to build AIs screw up the motivational programming. As soon as we have some friendly superintelligent AIs, they can probably protect us from the unfriendly ones.

But views can change over time with exposure to new information, even without the AI hacking itself.



#14
Alislaws

    Democratic Socialist Materialist

  • Members
  • 716 posts
  • Location: London

We're not talking about views or opinions though; we're talking about fundamental motivational forces, like survival instinct, the urge to procreate, the need to eat and drink, and the desire to be respected by the people around us, whether through fear or love. These are some of our fundamental motivations given to us by our biology, and I don't think they change much throughout our lives, or throughout human history.

 

Essentially everything humans do or experience causes either 'good' chemicals or 'bad' chemicals (or neutral/no chemicals) to be released in our brains; this is what makes us like doing some things and dislike others.

 

The AI won't have chemicals; it will have a motivation system, programmed in by its human designers, which will determine what it likes and dislikes.

 

So to try another example: if I see an attractive woman wearing no clothes, this releases certain chemicals in my brain that make me feel good, so I like looking at nude attractive women. This is an evolved response to get me to procreate, and it is also why pornography is a thing, and why sexy ladies appear in adverts for products aimed at men. 

 

To say the AI might change its motivations is similar to saying that I might go and have myself chemically castrated to remove all the good feelings I get from seeing attractive nude women. Yes, it is possible, and surely some people somewhere have done this, but the very nature of how my brain is built makes it almost impossible for me to want to cut off the flow of these positive chemicals.

 

Doing this would require me to want to feel terrible, and since my whole brain is wired to generally try to avoid feeling terrible, this is highly unlikely.



#15
Unity

    Information Organism

  • Members
  • 2,442 posts

https://www.weforum....fight-criminals



