Yesterday, I meant to expand upon my thoughts on the subject of killer robots/autonomous combat robots/mechanized police units, etc. But I didn't, because the apocalyptic truth came upon me and rendered me a frightened Victorian maiden fainting at the horror of it all... no, no, no, I just never got around to it.
Let me start off by saying that I am for combat robots, even though the risks ultimately outweigh the benefits, precisely because I am not so idealistic as to believe all nations will follow the West's lead. Russia sure as hell isn't, China definitely won't, Israel won't bother listening to anyone saying they shouldn't develop them, and America is absolutely gung-ho about the prospect. All that leaves is the sane nations, the ones who repeatedly attempt to regulate things such as outer space and nuclear arms and are completely ignored by those powers every time. That makes their sanity ironic, because willingly disarming yourself in the face of hegemony-seeking superpowers is insane. That's like declawing a cat when three wolves are near.
But beyond the geopolitics of it, there's a few things to mention.
1: Combat robots as the means by which illiberal warfare returns
2: Combat robots as private militaries
3: Combat robots as entertainment.
Right now, most combat robots are miniature trackbots with sentry guns or tactical explosives attached. Russia has FEDOR, which is closer to the traditional idea of a robotic soldier/Terminator, but Lord knows how many years away it is from being combat ready. Japan's SoftBank owns Boston Dynamics, so they have that repertoire of machines. Again, we don't know how far away those are from being combat ready— artificial intelligence is the key to it all, and while current machine learning techniques are impressive, they're still not ready for the real world.
When combat robots start coming online, it probably won't take long for them to take over entire militaries— the first fully-automated military branch could be upon us in our lifetimes.
One of the biggest issues in terms of military conflict in the liberal West is that we value human life. This is the Enlightenment's most pervasive success— it took us several hundred years, but now even the lives of "pawns" are highly valued. If those lives are ever publicly declared expendable, you will see protests.
So on one hand, it makes perfect sense to replace soldiers with robots. We don't want human beings to be harmed. Except won't these robots be facing off against humans?
Before Vietnam (and for many years afterwards, up to around 9/11), the idea that our primary enemies in warfare would be guerrilla fighters, farmers, and fundamentalists was unthinkable. We were expecting war with the Soviets, who were our equal if not our superior in military ability. If we were ever going to face robot soldiers, they'd be clad with red stars and hammers and sickles. But a man from the gutters of Syria, who had little in life to begin with, watched it get bombed away, and is now looking for purpose and revenge, isn't going to afford a Terminator. Not even a 3D printed one. Thus, we'll be siccing robots on humans. I can't even begin to think of all the ways that could go wrong. But *not* for the reasons you might think.
How does a robot discern between a hostile enemy and just a random person? What markers does it look for?
Here's the cold and ugly truth: for AI, it's not at all difficult. We already have AI with seemingly superhuman abilities to predict behavior. Our human ability to discern unspoken intent is very good, but it's also very flawed. Well-trained humans can spot when a person is lying, when they're about to lie, even when they're infusing lies with the truth. Even untrained people tend to unconsciously pick up on these markers. Has your mother ever known that you were lying, no matter how well you tried to hide it? Has your significant other ever just "known" you weren't feeling well, even though you looked otherwise normal and would probably even say you felt okay? Have you ever looked at a person, thought "this guy's up to no good", and turned out to be right? There was something about their look, their general gaze, the way they moved their mouth, the way they breathed— it all adds up to express intent.
Turns out, computers can figure it out too. And because they're computers, they're stupidly better at it than we are. Have you heard about the news story of the computer that can identify depression just by your face? Or what about the one that can detect when you're gay, again just by your face?
This is not science fiction. Computers really can tell all of this because we express these things unconsciously. The lie detector of the future might be you sitting in front of a computer.
So the idea of a computer being able to read the difference between hostile, neutral, and friendly intent between combatants is *very* possible, even today.
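At its core, that kind of intent-reading is just classification over behavioral features. Here's a minimal toy sketch of the idea: the feature names, the clusters, and every number below are invented for illustration, and real systems would use deep models over video rather than three hand-picked scalars. It trains a plain logistic regression (from scratch, no libraries) to separate two synthetic clusters of "behavioral marker" vectors:

```python
import math
import random

random.seed(0)

def make_samples(center, label, n=50, noise=0.15):
    # Synthetic "behavioral marker" vectors scattered around a cluster center.
    # These features are entirely made up for this toy example.
    return [([c + random.gauss(0, noise) for c in center], label) for _ in range(n)]

# Hypothetical markers: [gaze fixation, movement jerkiness, breathing rate]
data = (make_samples([0.8, 0.7, 0.9], 1)    # "hostile" cluster (label 1)
        + make_samples([0.2, 0.3, 0.1], 0))  # "neutral" cluster (label 0)

w, b = [0.0, 0.0, 0.0], 0.0  # weights and bias, learned below
lr = 0.5                     # learning rate

def predict(x):
    # Logistic (sigmoid) output: probability the sample is "hostile"
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# Plain stochastic gradient descent on the logistic loss
for _ in range(200):
    for x, y in data:
        err = predict(x) - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

accuracy = sum((predict(x) > 0.5) == bool(y) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The point isn't the math, which is decades old; it's that once you can extract markers from faces and body language at all, drawing a decision boundary through them is the easy part.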
The problem is the implications of this. Certain markers could be reprogrammed to mean "hostile" if a certain party desires it.
Think of spies, hiding partisans or repressed groups, people intending to break curfew to partake in some fun— they will all be detected. In fact, if you're too fearful of these robot soldiers, that alone could make you a suspect— after all, if you were on the "good" side, you'd surely welcome combat robots with open arms. Yes, there'd be a tinge of fear, but that's natural. This is more the fear that comes from being around a person oppressing you, a person stronger than you who could get away with harming you. That sort of fear tends to breed discontent, so what better way to root out discontent than stopping it early?
Imagine living in a puritanical, traditionalist, ultraconservative society. Something like Saudi Arabia, where women being able to do so much as walk outside unattended is seen as noxious liberalism. If you have any thoughts deemed 'degenerate' or have any intent to do something opposite of the regime's morality or are concerned about something that you'd rather the regime not find out, you'd have nowhere to run. And you don't even need the religious government to do anything; just get your peers informed and they'll take matters into their own hands.
That level of thought policing is what it'd take to make combat robots effective. And yes, we *do* have the early stages of that technology. Nothing I said is beyond reality; just beyond the capabilities of early 2018 technology.
On the flip side, for those with access to this technology but uninterested in using it to repress populations: you could also find out which people in a war-torn region are most likely to become insurgents and intervene before they slip too far into fundamentalist extremism.
Warfare can still be liberalized in the case of robot vs. human. You could identify hostile threats and neutralize them, then identify potential threats and intervene before they become hostile. Depending on how totalitarian you are, that could mean anything from finding out what it takes to help them and if it's possible to either assist their community or allow them safe passage out of the region, all the way to killing them or shipping them to camps.
Now let's move onto the next stage: robot vs. robot warfare. This is the start of illiberal warfare. You see it in movies and cartoons and video games all the time— robot lives don't matter because robots are without life. If one is with life, it's not a robot; it's a sapient construct. We exaggerate how angry robots would be serving humans without considering that if they served an AI, they'd be just as unfree and unsafe.
So, cold fact: robots are gore fodder. We don't care about robots being ripped apart and disemboweled (diswired?). We don't care about robots being blown to bits. Carpet-nuke robots all you want; there are no families who will mourn, no generations that will be lost. We would be highly amused by the sight of Napoleonic robot armies or a robot remake of World War I, because we would feel in our hearts that no humans are suffering. Never mind that landscapes and nonhuman animals still suffer, and that humans will always find ways to become casualties via collateral damage— unless we decide to only fight wars in designated parts of the world, 1984-style, which doesn't make sense considering most wars are fought over resources and land.
Next topic: combat robots as private militaries. This is something I was talking about in my original post— if you have a sufficiently strong private military, you don't need popular support to rule a country. You could rule it by brute force, using fear to make people go your way. The problem is that no private military on Earth is that strong. In a manner, they can't be— it costs money to sustain a military. Most effective militaries have defense budgets in the billions. There aren't that many billionaires in the world, relative to total population, and even fewer of those billionaires could sustain even a single branch of a military for more than a couple of years. The raw costs of sustaining a military are already daunting, but if you want to take over a country, first you'd need to pick a sufficiently weak military— one that you would be capable of defeating. Then you'd need to find a way to keep your funds flowing, because once your plan of taking over a country is known, investors will flee and your stock will just about die. Governments don't need to worry about this, thanks to taxes.
Now that you have theoretically taken over a country, you must find a way to keep the peace. If you stop paying your military, you are going down faster than you can say "military coup". And that's just your military. You need to consider most public service workers too— they're not going to work for free unless they have guns at their backs. And if you try making them work for free for too long, they'll stop caring whether the soldiers pull the trigger.
Combat robots? They don't need to be paid; their only costs are maintenance and replacement. That frees up money for weapons and logistics. A private combat-robot military on a $10 billion a year budget could probably rival a major present-day military like Russia's.
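To see why removing payroll changes the arithmetic, here's a back-of-the-envelope sketch. Every figure is a hypothetical assumption made up for illustration, not real procurement data:

```python
# All figures below are hypothetical assumptions for illustration only.
human_cost = 100_000          # USD/year all-in per soldier (pay, benefits, training), assumed
robot_unit_price = 250_000    # USD purchase price per combat robot, assumed
robot_lifespan_years = 5      # years before replacement, assumed
robot_maintenance = 20_000    # USD/year upkeep per robot, assumed

# Amortized annual cost of fielding one robot: replacement spread over
# its lifespan, plus yearly maintenance.
robot_cost = robot_unit_price / robot_lifespan_years + robot_maintenance

budget = 10_000_000_000       # the $10 billion/year figure from the text

humans_fielded = budget // human_cost
robots_fielded = budget // robot_cost

print(f"annual cost per robot: ${robot_cost:,.0f}")
print(f"soldiers fieldable:    {humans_fielded:,}")
print(f"robots fieldable:      {robots_fielded:,.0f}")
```

Under these invented numbers a robot costs $70,000 a year against $100,000 for a soldier, so the same budget fields roughly 40% more units— and, unlike payroll, none of that money buys loyalty that can be withdrawn.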
With sufficient force, you can overthrow a country, automate public services, and rule a country as you please— any dissent can be put down with as much force as you want, because your combat robots won't rebel like humans.
The third (and least pessimistic) topic: combat robots as a revival of bloodsport. The main reason BattleBots is still niche is that most practical robots are those aforementioned trackbots. People want humanoids and autonomous vehicles. That's just a fact.
Again, the reason people want that is because we value human life too much. Dystopian movies like to claim that in a few short years we'll be sending men (most likely convicts) back to the gladiator ring to slaughter each other and get eaten by tigers— and that these spectacles will bring in huge ratings for network executives. But that's just not going to happen. When the world freaked out about the "Russian Hunger Games", we thought that was the beginning of it, but the truth was that the man behind it had simply said "anything goes, but there will be consequences for extreme actions."
Have bloodsports ever returned? Undoubtedly. Are they still going on? Without question. But they're the realm of the dark web, not mainstream TV.
How might you get around that? With robots. Robots don't need to hold back, so you can watch fighters go at it with maximum brutality. Robots can be faster, stronger, more durable, and all-around more capable than humans, allowing them to push sporting events to their limits. I mentioned earlier that we would watch, amused, as robots re-enacted old-style war tactics— and we genuinely would. War re-enactments are already fun events, but imagine if you could genuinely re-enact the war— bullets, bombs, gore, and all. You can't do that with humans without being labeled a mass murderer and sent to the Hague. But with machines? I can already see people signing up to watch a fully automated "World War" experience. And that's only if we haven't retreated into FIVR yet.
Those are just a few of my thoughts, but I had to end it somewhere.