
Robot Soldiers Discussion

Tags: Robots, AI, Military

6 replies to this topic

Poll: Robot Soldiers? (11 members have cast votes)

Will robot soldiers solve more problems than they create?

  1. Yes (1 vote [9.09%])
  2. No (1 vote [9.09%])
  3. I don't know (9 votes [81.82%])

#1
Alislaws

    Democratic Socialist Materialist

  • Members
  • 1,109 posts
  • Location: London

A recent post here by Caltrek talking about robot soldiers started me thinking about robot soldiers!

 

They have some big plus sides in addition to the many downsides.

 

 

These are my quick thoughts on robot soldiers:

 

+

  • No human casualties on the side using robot soldiers.
  • Can afford to lose many robots to avoid harming civilians if necessary (allowing actual hearts-and-minds style victories/peacekeeping operations with almost no casualties).
  • The main cost of war is then money, not human lives, and the majority of politicians seem to care about money.
  • Weight of numbers: a military can put several million "boots on the ground" if they're robot boots, making insurgency much more difficult and allowing conflicts to end faster.
  • Insurgency also doesn't work if your enemy can just recycle all the soldiers you kill.

 

-

  • With robots, democracies may start going to war against each other again, or generally become much more interventionist.
  • One person can control an army of millions of soldiers; private individuals could even build large armies in secret. This has never been practical before.
  • No chance of a military robot refusing to follow unethical orders (unless they are also unlawful and it is programmed to check).
  • Weight of numbers: a military can put several million "boots on the ground" if they're robot boots, making insurgency much more difficult and allowing far faster and easier victory when conquering and occupying other people's countries.

--

  • Skynet scenario

 

So what are people's thoughts on this? My mind is not made up.

 

Obviously, if we get Skynet killing us, or one psycho gets hold of the control codes for some huge robot army, or the world ends in some other horrible way, then sure, robots are bad. But assuming the total extinction scenario doesn't happen, what would the impact be of removing friendly casualties from the cost of war?


  • Yuli Ban and Jakob like this

#2
TranscendingGod

    2020's the decade of our reckoning

  • Members
  • 1,815 posts
  • Location: Georgia

One of the big problems with autonomous warfare is the desensitization problem. Desensitization happens for a few reasons, including distance, impersonality, and ease of use. Autonomous warfare essentially provides the means to commit genocide without having to acknowledge that you've killed your fellow man.

 

I also remember seeing a short film/movie where someone is tricked into thinking he is playing a warfare game but is in actuality killing people in some far-off country. Upon realizing this, he becomes distraught. And if we take a look at WW1 (the 1914 Christmas truce), we see a scenario where soldiers literally laid down their arms and started frolicking with the opposing forces instead of shooting them.

 

In essence, autonomous warfare allows you to kill people as easily as killing people in a video game, without the slightest twinge. And if we talk about covert operations, we already have situations where executions are ordered from the Oval Office with no due process; simply because you have the power, you kill these people.

 

In other words, these things prepare the world for a dystopian "might is right" way of doing things. Not to mention that you would need an international accord making the use of autonomous systems on humans illegal, and even then rogue nations might not abide.

 

Relegating dirty work to machines means you can kill without triggering that little moral twinge that may save millions. 


  • caltrek and Maximus like this

The growth of computation is doubly exponential growth. 


#3
Alislaws

    Democratic Socialist Materialist

  • Members
  • 1,109 posts
  • Location: London

 

Quoting TranscendingGod:

    One of the big problems with autonomous warfare is the desensitization problem. Desensitization happens for a few reasons, including distance, impersonality, and ease of use. Autonomous warfare essentially provides the means to commit genocide without having to acknowledge that you've killed your fellow man.

I think that would make sense if it were frontline soldiers who were making the decision to start wars, invade countries, etc.

 

I don't think there is a significant difference in desensitization between a head of state ordering a special forces team to kill someone and ordering a bunch of robots to do it. The only moral twinge that robot soldiers would remove matters only if the leaders of the world are genuinely avoiding conflict out of sympathy for their own troops.

 

Maybe I'm being too cynical, but I don't think a lot of politicians really worry about military casualties (except through their effect on public opinion); from their perspective, dying for the nation's interests is what soldiers are for.

 

I don't think a man sitting at a computer controlling a robot is going to be more likely to kill than a man who is really there with his life in the balance. Imagine a military situation where an unidentified person suddenly steps into a soldier's line of sight during a fight. The robot operator can wait a second, carefully determine whether the target is an enemy or a civilian, and then respond, because worst case the robot gets broken.

 

The human soldier has a fraction of a second to figure out if this person is a threat and shoot them if needed; if they hesitate and this person is an enemy, then they, or one of their team-mates, will die.

 

If a nation that tries to be moral in its warfare uses robots, then the only reason it would ever need to use lethal measures at all is when the enemy is endangering civilians. (The only time it faces symmetrical warfare is against other robot armies, where "lethal" measures really aren't; and in asymmetrical warfare, as the larger side, being able to take your enemies out non-lethally and with little collateral damage is a huge advantage.)

 

Of course, a nation that doesn't care at all about morality could quickly and easily commit total genocide; not one of its robots would refuse those orders. Then again, historically, soldiers quite often seem to obey those orders anyway. Does anyone know of any situation where a planned genocidal campaign was halted by soldiers refusing orders? Of course, you should never give an order that you know won't be obeyed, so it's possible that many leaders have taken gentler options to avoid this, in which case it wouldn't be obvious from history that plan A had been "kill everyone".

 

Either way, I've never got the impression that genocide happens only rarely because it's too difficult to do. Maybe now I'm not being cynical enough.



#4
TranscendingGod

    2020's the decade of our reckoning

  • Members
  • 1,815 posts
  • Location: Georgia

Well, you point out the flaw in your own position in the latter half of your post. Soldiers carry out the orders, and if they are sufficiently cynical there is resistance to varying degrees; even Nazi Germany had this sort of resistance, albeit not nearly as prominent as in other historical cases. In fact, if you look at coups against autocratic dictators, they often happen because the megalomaniac is invading country after country with no regard for the loss of the lives of his own countrymen.

If we want to talk about the efficacy of resistance against brazen leadership, we can do that, but the fact of the matter is that autonomous warfare allows these nations, and perhaps, as you mention, corporations and individuals, to act with impunity.

So yes, desensitization may not be overtly worse when the command is given to robots rather than humans (although I would again contest this in a democratic nation, where a Saddam Hussein cannot easily wage war), but it is clearly a defining factor in the ability to act without having to worry about the morality of an issue, the greatest deterrent to war being the expenditure of human life, along with other capital costs of course.

Expounding upon the loss of human life being a bit more serious than you would like to propound: if you take a look at the Gulf War, it was indeed the potential loss of human life which Saddam took to be the United States' Achilles heel, and for good reason. While Saddam in his previous war with Iran was willing to let 500,000 Iraqi soldiers die, the population-adjusted equivalent of some 7.5 million American casualties, he knew that such a thing could not be stomached by America writ large. The coalition of nations was ultimately one of his blunders among many.
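(A quick back-of-the-envelope check of that population-adjusted figure. The rough late-1980s population numbers below are my own assumptions for illustration, not from the post:)

```python
# Scale Iraqi war deaths by the ratio of the two populations
# (circa 1988, approximate figures assumed for illustration).
iraq_population = 17_000_000
us_population = 250_000_000
iraqi_deaths = 500_000

us_equivalent = iraqi_deaths * us_population / iraq_population
print(f"{us_equivalent:,.0f}")  # ~7,352,941, on the order of 7.5 million
```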

In other words, we can debate the effect that replacing human soldiers with autonomous machines has on the desensitization of commanders, but the desensitization of a nation and its soldiers is incontrovertible.

The growth of computation is doubly exponential growth. 


#5
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 18,722 posts
  • Location: New Orleans, LA

I'll be back with a proper post, but here's the teaser:

Consider autonomous combat robots an inevitable necessary evil. No amount of scary dystopian YouTube videos made in America will stop Russia, China, and Israel from creating them. They will see use outside combat situations and perhaps even in entertainment. When Einstein said we will fight World War IV with sticks and stones, he probably didn't mean metallic sticks and wired stones, because future wars, even major wars, will almost certainly be fought entirely with machines. And remember back when I spammed Mother Meki everywhere in years long past? One of the concepts I created in that story, and have been mulling spinning off into its own separate story, is that it will be possible to take over whole nations and establish authoritarian or totalitarian regimes without any popular support, as long as you have enough robots to force the peace. This will happen in real life at least once.


  • BasilBerylium likes this

And remember my friend, future events such as these will affect you in the future.


#6
Raklian

    An Immortal In The Making

  • Moderators
  • 6,696 posts
  • Location: Raleigh, NC

 

Quoting TranscendingGod:

    I also remember seeing a short film/movie where someone is tricked into thinking he is playing a warfare game but is in actuality killing people in some far-off country. Upon realizing this, he becomes distraught. And if we take a look at WW1 (the 1914 Christmas truce), we see a scenario where soldiers literally laid down their arms and started frolicking with the opposing forces instead of shooting them.

 

 

You're thinking Ender's Game?


What are you without the sum of your parts?

#7
Yuli Ban

    Born Again Singularitarian

  • Moderators
  • 18,722 posts
  • Location: New Orleans, LA

Yesterday, I meant to expand upon my thoughts on the subject of killer robots/autonomous combat robots/mechanized police units, etc. But I didn't, because the apocalyptic truth came upon me and rendered me a frightened Victorian maiden fainting at the horror of it all... no, no, no, I just never got around to it.
 
Let me start off by saying that I am for combat robots, despite the fact that the risks may ultimately outweigh the benefits, precisely because I am not so idealistic as to believe all nations will follow the West's lead. Russia sure as hell isn't, China definitely won't, Israel won't bother listening to anyone saying they shouldn't develop them, and America is absolutely gung-ho about the prospect. All that leaves are the sane nations, who repeatedly attempt to regulate things such as outer space and nuclear arms and are always completely ignored by those same powers. Their sanity is ironic, because willingly disarming themselves in the face of hegemony-seeking superpowers is insane. That's like declawing a cat when three wolves are near.
 
But beyond the geopolitics of it, there's a few things to mention.
 
1: Combat robots as the means by which illiberal warfare returns
 
2: Combat robots as private militaries
 
3: Combat robots as entertainment.
 
Right now, most combat robots are miniature trackbots with sentry guns or tactical explosives attached. Russia has FEDOR, which is closer to the traditional idea of a robotic soldier/Terminator, but lord knows how many years away it is from being combat-ready. Japan's SoftBank owns Boston Dynamics, so they have that whole repertoire of machines. Again, we don't know how far away they are from being combat-ready: artificial intelligence is the key to it all, and while current machine-learning techniques are impressive, they're still not ready for the real world.
 
 
When combat robots start coming online, it probably won't take long for them to take over entire militaries— the first fully-automated military branch could be upon us in our lifetimes.
One of the biggest issues in terms of military conflict in the liberal West is that we value human life. This is the Enlightenment's most pervasive success: it took us several hundred years, but even the lives of "pawns" are highly valued. If they are ever publicly considered expendable, you will see protests.
So on one hand, it makes perfect sense to replace soldiers with robots. We don't want human beings to be harmed. Except won't these robots be facing off against humans?
Before Vietnam (and for many years afterwards, up to around 9/11), the concept that our primary enemy in warfare would be guerrilla fighters, farmers, and fundamentalists was barely considered. We were expecting war with the Soviets, who were our equal if not our greater in military ability. If we were going to face robot soldiers, they'd be clad with red stars and hammers and sickles. But a man from the gutters of Syria, who originally had little in life only to have it bombed, and is now looking for some purpose and revenge, isn't going to be able to afford a Terminator. Not even a 3D-printed one. Thus, we'll be siccing robots on humans. I can't even begin to think of all the ways that could go wrong. But *not* for the reasons you might think.
 
How does a robot discern between a hostile enemy and just a random person? What markers does it look for? 
 
Here's the cold and ugly truth: for AI, it's not at all difficult. We already have AI with seemingly superhuman abilities to predict behavior. Our human ability to discern unspoken intent is very good, but it's also very flawed. Well-trained humans can spot when a person is lying, when they're about to lie, even when they're infusing lies with the truth. Even untrained people have a tendency to unconsciously pick up on these markers. Has your mother ever known that you were lying, no matter how well you tried to hide it? Has your significant other ever just "known" you weren't feeling well even though you looked otherwise normal and would probably even say you felt okay? Have you ever looked at a person, thought "this guy's up to no good", and turned out to be right? There was something about their look, their general gaze, the way they moved their mouth, the way they breathed; it all adds up to express intent.
 
Turns out, computers can figure it out too. And because they're computers, they're stupidly better at it than we are. Have you heard about the news story of the computer that can identify depression just by your face? Or what about the one that can detect when you're gay, again just by your face? 
This is not science fiction. Computers really can tell all of this because we express these things unconsciously. The lie detector of the future might be you sitting in front of a computer.
 
So the idea of a computer being able to read the difference between hostile, neutral, and friendly intent in the people it watches is *very* possible, even today.
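To make that concrete, here's a minimal sketch of how such a classifier might be built today with off-the-shelf tools: transfer learning on face crops with a made-up intent label set. Everything here (the labels, the assumed dataset, the whole framing) is a hypothetical illustration, not any real deployed system, and collecting honestly-labeled training data is exactly the hard and ethically fraught part.

```python
# A minimal transfer-learning sketch: fine-tune a pretrained vision model
# to map a face crop to a (hypothetical) intent label. Illustration only.
import torch
import torch.nn as nn
from torchvision import models, transforms

INTENT_LABELS = ["friendly", "neutral", "hostile"]  # made-up label set

# Start from a network pretrained on generic images, then replace the
# final layer with a 3-way "intent" head (which would still need training
# on a labeled dataset -- assumed to exist for this sketch).
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, len(INTENT_LABELS))

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify(face_image):
    """Predict an intent label for a PIL image of a face."""
    model.eval()
    with torch.no_grad():
        batch = preprocess(face_image).unsqueeze(0)  # (1, 3, 224, 224)
        logits = model(batch)
        return INTENT_LABELS[logits.argmax(dim=1).item()]
```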
 
The problem is the implications of this. Certain markers could be reprogrammed to mean "hostile" if a certain party desires it. 
Think of spies, those hiding partisans or repressed groups, those intending to break curfew to partake in some fun: they will all be detected. In fact, if you are too fearful of these robot soldiers, you could be made into a suspect; if you were on the "good" side, you'd certainly welcome combat robots with open arms. Yes, there'd be a tinge of fear, but that's natural. It's more the fear that comes from being around a person oppressing you, a person stronger than you who could get away with harming you, that sort of fear. This sort of fear tends to breed discontent, so what better way to root it out than stopping it early?
 
Imagine living in a puritanical, traditionalist, ultraconservative society. Something like Saudi Arabia, where women being able to do so much as walk outside unattended is seen as noxious liberalism. If you have any thoughts deemed 'degenerate' or have any intent to do something opposite of the regime's morality or are concerned about something that you'd rather the regime not find out, you'd have nowhere to run. And you don't even need the religious government to do anything; just get your peers informed and they'll take matters into their own hands. 
 
That level of thought policing is what it'd take to make combat robots effective. And yes, we *do* have the early stages of that technology. Nothing I said is beyond reality; just beyond the capabilities of early 2018 technology. 
 
On the flip side, for those with access to this technology but uninterested in using it to repress populations: you could also find out which people in a war-torn region are most likely to become insurgents, and intervene before they slip too far into fundamentalist extremism.
 
Warfare can still be liberalized in the case of robot vs. human. You could identify hostile threats and neutralize them, then identify potential threats and intervene before they become hostile. Depending on how totalitarian you are, that intervention could mean anything from finding out what it takes to help them (assisting their community, or allowing them safe passage out of the region) all the way to killing them or shipping them to camps.
 
Now let's move on to the next stage: robot vs. robot warfare. This is the start of illiberal warfare. You see it in movies and cartoons and video games all the time: robot lives don't matter, because robots are without life. If one is with life, it's not a robot; it's a sapient construct. We exaggerate how angry robots would be about serving humans, without considering that if they served an AI, they'd be just as unfree and unsafe.
 
So, cold fact: robots are gore fodder. We don't care about robots being ripped apart and disemboweled (diswired?). We don't care about robots being blown to bits. Carpet-nuke robots all you want; there are no families who will mourn, no generations that will be lost. We would be highly amused by the sight of Napoleonic robot armies or a robot remake of World War I, because we would feel right in our hearts that no humans are suffering. Never mind that landscapes and nonhuman animals still suffer, and that humans will always find ways to become casualties via collateral damage, unless we decide to only fight wars in specific parts of the world, 1984-style, which doesn't make sense considering most wars are fought over resources and land.
 
 
 
Next topic: combat robots as private militaries. This is something I was talking about in my original post: if you have a sufficiently strong private military, you don't need popular support to rule a country. You could rule it by brute force, using fear to make people go your way. The problem is that no private militaries on Earth are that strong. In a manner, they can't be: it costs money to sustain a military. Most effective militaries have defense budgets in the billions. There aren't that many billionaires in the world, relative to total population, and even fewer of those billionaires could sustain even a single branch of a military for more than a couple of years. The raw costs of sustaining a military are already daunting, but if you want to take over a country, first you'd need to pick a sufficiently weak military, one that you would be capable of defeating. Then you'd need to find a way to keep your funds going, because once your plan of taking over a country is known, investors will flee and your stock will just about die. Governments don't need to worry about this, thanks to taxes.
Now that you have theoretically taken over a country, you must find a way to keep the peace. If you stop paying your military, you are going down faster than you can say "military coup". And that's just your military. You also need to consider most public-service workers: they're not going to work for free unless they have guns at their backs. If you try making them work for free for too long, they won't care anymore if the soldiers pull the trigger.
Combat robots? They don't need to be paid; their only costs are maintenance and replacement. You free up money for weapons and logistics. A private combat-robot military on a $10 billion a year budget could perhaps rival a major present-day military like Russia's.
With sufficient force, you can overthrow a government, automate public services, and rule the country as you please; any dissent can be put down with as much force as you want, because your combat robots won't rebel like humans would.
 
The third (and least pessimistic) topic: combat robots as a revival of bloodsport. The main reason BattleBots is still niche is that most practical robots are the aforementioned trackbots. People want humanoids and autonomous vehicles. That's just a fact.
Again, the reason people want it is that we value human life too much. Dystopian movies like to claim that in a few short years we'll be sending men (most likely convicts) back to the gladiator ring to slaughter each other and get eaten by tigers, and that this will bring in huge ratings for network executives. But that's just not going to happen. When the world freaked out about the "Russian Hunger Games", we thought that was the beginning of it, but the truth was that the man behind it simply said "anything goes, but there will be consequences for extreme actions."
Have bloodsports ever returned? Undoubtedly. Are they still going on? Without question. But it's for the realm of the dark web, not mainstream TV. 
How might you get around that? With robots. Robots don't need to hold back, so you can watch fighters come at it with maximum brutality. Robots can be faster, stronger, more durable, and all-around more capable than humans, allowing them to push sporting events to their limits. I referenced earlier the idea that we would watch in amusement as robots re-enact old-style war tactics. That's because we genuinely would. War re-enactments are already fun events, but imagine if you genuinely could re-enact the war: bullets, bombs, gore, and all. You can't do that with humans without being labeled a mass murderer and sent to The Hague. But as for machines? I can already see some people signing up to watch a fully automated "World War" experience. And that's only if we haven't retreated into FIVR yet.
 
 
Those are just a tiny few of my thoughts, but I had to end it somewhere.

  • BasilBerylium, Alislaws and rennerpetey like this

And remember my friend, future events such as these will affect you in the future.





