
posted by janrinok on Thursday September 05 2019, @02:29AM
from the stop-the-world-I-want-to-get-off dept.

Arthur T Knackerbracket has found the following story:

Hypersonic missiles, stealthy cruise missiles, and weaponized artificial intelligence have so reduced the amount of time that decision makers in the United States would theoretically have to respond to a nuclear attack that, two military experts say, it’s time for a new US nuclear command, control, and communications system. Their solution? Give artificial intelligence control over the launch button.

In an article in War on the Rocks titled, ominously, “America Needs a ‘Dead Hand,’” US deterrence experts Adam Lowther and Curtis McGiffin propose a nuclear command, control, and communications setup with some eerie similarities to the Soviet system referenced in the title of their piece. The Dead Hand was a semiautomated system developed to launch the Soviet Union’s nuclear arsenal under certain conditions, including, particularly, the loss of national leaders who could do so on their own. Given the increasing time pressure Lowther and McGiffin say US nuclear decision makers are under, “[I]t may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position.”

In case handing over the control of nuclear weapons to HAL 9000 sounds risky, the authors also put forward a few other solutions to the nuclear time-pressure problem: Bolster the United States’ ability to respond to a nuclear attack after the fact, that is, ensure a so-called second-strike capability; adopt a willingness to pre-emptively attack other countries based on warnings that they are preparing to attack the United States; or destabilize the country’s adversaries by fielding nukes near their borders, the idea here being that such a move would bring countries to the arms control negotiating table.

Still, the authors clearly favor an artificial intelligence-based solution.

“Nuclear deterrence creates stability and depends on an adversary’s perception that it cannot destroy the United States with a surprise attack, prevent a guaranteed retaliatory strike, or prevent the United States from effectively commanding and controlling its nuclear forces,” they write. “That perception begins with an assured ability to detect, decide, and direct a second strike. In this area, the balance is shifting away from the United States.”

History is replete with instances in which it seems, in retrospect, that nuclear war could have started were it not for some flesh-and-blood human refusing to begin Armageddon. Perhaps the most famous such hero was Stanislav Petrov, a Soviet lieutenant colonel, who was the officer on duty in charge of the Soviet Union’s missile-launch detection system when it registered five inbound missiles on Sept. 26, 1983. Petrov decided the signal was in error and reported it as a false alarm. It was. Whether an artificial intelligence would have reached the same decision is, at the least, uncertain.

One of the risks of incorporating more artificial intelligence into the nuclear command, control, and communications system involves the phenomenon known as automation bias. Studies have shown that people will trust what an automated system is telling them. In one study, pilots who told researchers that they wouldn’t trust an automated system that reported an engine fire unless there was corroborating evidence nonetheless did just that in simulations. (Furthermore, they told experimenters that there had in fact been corroborating information, when there hadn’t.)


Original Submission

 
  • (Score: 3, Interesting) by ElizabethGreene (6748) Subscriber Badge on Thursday September 05 2019, @05:09PM (#890124) Journal

    The problem with the AI we have today is that it learns from the dataset it is given. If I train a self-driving car on a dataset taken on a highway, it's going to learn to classify highway things. It will understand cars, stalled cars, guardrails, plastic bags, bridge expansion joints, construction barrels, roadkill, and the millions of other things that one would expect to see in a sufficiently large data set of highway miles. This training dataset constitutes the entire life experience of the AI. Unfortunately, nothing in that experience prepares it for astoundingly rare boundary conditions like a broken-down circus truck that drops an elephant on the highway. Its response will be unpredictable, and because of the way AI works, it can take months or years of expert study to understand why it made the decision it did.
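
    To make that concrete, here is a minimal sketch in Python (the toy data, the "car" and "barrel" classes, and scikit-learn's LogisticRegression are all hypothetical stand-ins for whatever model would actually be fielded): a classifier trained on only two kinds of object will file an input far outside its training distribution under one of those two classes, often with near-total confidence, because it has no way to say "I have never seen this before."

        # Minimal sketch: a classifier trained on two known classes has no
        # concept of "never seen before" -- out-of-distribution inputs get
        # forced into a known class anyway. Data and labels are hypothetical.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Training data: the model's entire "life experience" is two clusters.
        cars = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))     # class 0
        barrels = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(200, 2))  # class 1
        X = np.vstack([cars, barrels])
        y = np.array([0] * 200 + [1] * 200)

        model = LogisticRegression().fit(X, y)

        # An input far outside the training distribution -- the "elephant".
        elephant = np.array([[-8.0, -8.0]])
        probs = model.predict_proba(elephant)[0]
        print(f"P(car) = {probs[0]:.3f}, P(barrel) = {probs[1]:.3f}")
        # Prints a near-certain "car": the model confidently misfiles the
        # elephant rather than flagging it as something it has never seen.

    Run as-is, the elephant comes back as a near-certain "car"; nothing in the model can flag the input as novel.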

    So what does that have to do with TFA? Military strategists have recognized the value of surprise for millennia. Surprise is, by definition, a rare boundary condition, exactly the thing that causes AI to behave unpredictably.

    I don't think AI is a good fit for this use case because of that unpredictability. Even if you ignore all of the completely legitimate fears about software bugs, you are still left with the question "What will it do?" The answer will remain unknown until it is too late to change it.
