25th April 2018
By 2040, artificial intelligence could upend nuclear stability
A new RAND Corporation paper finds that artificial intelligence has the potential to upend the foundations of nuclear deterrence by 2040.
While AI-controlled doomsday machines are considered unlikely, the hazards of artificial intelligence for nuclear security lie instead in its potential to encourage humans to take apocalyptic risks, according to the paper.
During the Cold War, the condition of mutual assured destruction (MAD) maintained an uneasy peace between the superpowers by ensuring that any attack would be met by a devastating retaliation. MAD thereby encouraged strategic stability by reducing the incentives for either country to take actions that might escalate into a nuclear war.
The new RAND publication says that in the coming decades, artificial intelligence has the potential to erode the condition of mutual assured destruction and undermine strategic stability. Improved sensor technologies could introduce the possibility that retaliatory forces, such as submarine-launched and mobile land-based missiles, could be targeted and destroyed.
Nations may pursue first-strike capabilities to gain leverage over their rivals even if they have no intention of attacking, researchers say. This undermines strategic stability: even if a state possessing such capabilities has no intention of using them, its adversary cannot be sure of that.
"The connection between nuclear war and artificial intelligence is not new. In fact, the two have an intertwined history," says Edward Geist, co-author on the paper and associate policy researcher at the RAND Corporation, a non-profit, nonpartisan research organisation. "Much of the early development of AI was done in support of military efforts or with military objectives in mind."
He said one example of such work was the Survivable Adaptive Planning Experiment in the 1980s, which sought to use AI to translate reconnaissance data into nuclear targeting plans.
Under favourable circumstances, artificial intelligence could also enhance strategic stability by improving the accuracy of intelligence collection and analysis, according to the paper. While AI might increase the vulnerability of second-strike forces, improved analytics for monitoring and interpreting adversary actions could reduce the miscalculation or misinterpretation that can lead to unintended escalation.
The researchers say that given future improvements, it is possible that eventually AI systems will develop capabilities that, while fallible, would be less error-prone than their human alternatives and therefore be stabilising in the longer term.
"Some experts fear that an increased reliance on artificial intelligence could lead to new types of catastrophic mistakes," said Andrew Lohn, co-author on the paper and an associate engineer. "There may be pressure to use AI before it is technologically mature, or it may be susceptible to adversarial subversion. Therefore, maintaining strategic stability in coming decades may prove extremely difficult, and all nuclear powers must participate in the cultivation of institutions to help limit nuclear risk."
The team based their paper – "How Might Artificial Intelligence Affect the Risk of Nuclear War?" – on information collected during a series of workshops with experts in nuclear issues, government branches, AI research, AI policy and national security. The perspective is part of a broader effort by RAND to envision critical security challenges in the world of 2040, considering the effects of political, technological, social, and demographic trends in the coming decades.