posted by janrinok on Wednesday April 25 2018, @12:36PM
from the learn-to-love-the-bomb dept.

A new RAND Corporation paper finds that artificial intelligence has the potential to upend the foundations of nuclear deterrence by the year 2040.

While AI-controlled doomsday machines are considered unlikely, the hazards of artificial intelligence for nuclear security lie instead in its potential to encourage humans to take apocalyptic risks, according to the paper.

During the Cold War, the condition of mutual assured destruction maintained an uneasy peace between the superpowers by ensuring that any attack would be met by a devastating retaliation. Mutual assured destruction thereby encouraged strategic stability by reducing the incentives for either country to take actions that might escalate into a nuclear war.

The new RAND publication says that in coming decades, artificial intelligence has the potential to erode the condition of mutual assured destruction and undermine strategic stability. Improved sensor technologies could introduce the possibility that retaliatory forces, such as submarine-launched and mobile missiles, could be targeted and destroyed. Nations may be tempted to pursue first-strike capabilities as a means of gaining bargaining leverage over their rivals even if they have no intention of carrying out an attack, researchers say. This undermines strategic stability because the adversary can never be sure those capabilities would go unused.
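To make that stability logic concrete, here is a minimal toy sketch in Python (our illustration for this summary, not a model from the RAND paper; the function name and every payoff number are invented assumptions):

    # Toy expected-payoff model of first-strike temptation. Illustrative
    # only: all numbers below are assumptions, not data from the paper.
    def first_strike_payoff(p_disarm, value_of_victory=1.0, cost_of_retaliation=10.0):
        """Expected payoff of striking first, where p_disarm is the
        probability that AI-assisted sensing lets the attacker find and
        destroy the defender's retaliatory forces before they launch."""
        p_retaliation = 1.0 - p_disarm
        return p_disarm * value_of_victory - p_retaliation * cost_of_retaliation

    # Classic MAD: submarines and mobile missiles are nearly un-targetable.
    print(first_strike_payoff(0.05))   # -9.45: striking first is suicidal
    # If AI-era sensors made 95% of second-strike forces targetable:
    print(first_strike_payoff(0.95))   #  0.45: a first strike starts to "pay"

Once that expected payoff crosses zero, even a rival with no intention of attacking gains bargaining leverage, which is exactly the instability the paper describes.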

"The connection between nuclear war and artificial intelligence is not new, in fact the two have an intertwined history," said Edward Geist, co-author on the paper and associate policy researcher at the RAND Corporation, a nonprofit, nonpartisan research organization. "Much of the early development of AI was done in support of military efforts or with military objectives in mind."

[...] Under fortuitous circumstances, artificial intelligence also could enhance strategic stability by improving accuracy in intelligence collection and analysis, according to the paper. While AI might increase the vulnerability of second-strike forces, improved analytics for monitoring and interpreting adversary actions could reduce miscalculation or misinterpretation that could lead to unintended escalation.


Original Submission

 
  • (Score: 2) by NotSanguine on Wednesday April 25 2018, @06:11PM (1 child)

First it was evil AI that would run amok (à la Westworld (the 1970s version, not the reboot), The Matrix, etc.), then it was "Grey Goo [wikipedia.org]."

    All manner of disasters from the development of "AI" have been predicted. This is just another bullshit doom and gloom scenario, albeit slightly more plausible as it focuses on AI as it actually exists (expert systems rather than generalized intelligence), rather than some sci-fi/pie-in-the-sky "complex computer system achieves sentience and kills/enslaves/saves us all" scenario.

The idea that better analytics will push us toward brinkmanship based on predictions of first-strike efficacy makes *almost* as much sense as the idea that generalized AI could force us to take to our beds and hook ourselves up as an energy source for the machine intelligence. Which makes no sense at all, even in a movie -- which is probably why they picked one of the worst actors of a generation [wikipedia.org] to star in such a movie series.

    Then again, I guess it's not too surprising that the RAND Corporation would come up with something like this, as they primarily profit from military research and war.

    --
    No, no, you're not thinking; you're just being logical. --Niels Bohr
  • (Score: 2) by DannyB on Wednesday April 25 2018, @06:19PM


There are already many different scenarios for how AI could go bad, whether intentionally or unintentionally, and probably many more ways it could go bad that we haven't even thought of yet.

    --
    To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.