
posted by janrinok on Wednesday April 25 2018, @12:36PM   Printer-friendly
from the learn-to-love-the-bomb dept.

A new RAND Corporation paper finds that artificial intelligence has the potential to upend the foundations of nuclear deterrence by the year 2040.

While AI-controlled doomsday machines are considered unlikely, the hazards of artificial intelligence for nuclear security lie instead in its potential to encourage humans to take potentially apocalyptic risks, according to the paper.

During the Cold War, the condition of mutual assured destruction maintained an uneasy peace between the superpowers by ensuring that any attack would be met by a devastating retaliation. Mutual assured destruction thereby encouraged strategic stability by reducing the incentives for either country to take actions that might escalate into a nuclear war.

The new RAND publication says that in coming decades, artificial intelligence has the potential to erode the condition of mutual assured destruction and undermine strategic stability. Improved sensor technologies could introduce the possibility that retaliatory forces such as submarine and mobile missiles could be targeted and destroyed. Nations may be tempted to pursue first-strike capabilities as a means of gaining bargaining leverage over their rivals even if they have no intention of carrying out an attack, researchers say. This undermines strategic stability because even if the state possessing these capabilities has no intention of using them, the adversary cannot be sure of that.

"The connection between nuclear war and artificial intelligence is not new, in fact the two have an intertwined history," said Edward Geist, co-author on the paper and associate policy researcher at the RAND Corporation, a nonprofit, nonpartisan research organization. "Much of the early development of AI was done in support of military efforts or with military objectives in mind."

[...] Under fortuitous circumstances, artificial intelligence also could enhance strategic stability by improving accuracy in intelligence collection and analysis, according to the paper. While AI might increase the vulnerability of second-strike forces, improved analytics for monitoring and interpreting adversary actions could reduce miscalculation or misinterpretation that could lead to unintended escalation.


Original Submission

 
  • (Score: 5, Interesting) by Virindi on Wednesday April 25 2018, @01:24PM (7 children)

    by Virindi (3484) on Wednesday April 25 2018, @01:24PM (#671610)

    I don't think I buy the idea that some magic AI will suddenly 'solve' the problem of guaranteeing a clean first strike. How would it do so? Magically determine the location of 100% of enemy assets? But the enemy also has magic AI; can't they then use it to better hide their assets?

    RAND seems to be ignoring the adversarial nature of weapon development. What they are saying amounts to the following: "It is inherently much easier to detect an asset than to hide it, and that balance will be shifted by a computer program which doesn't exist yet. With humans, the balance is different."

    There may be some truth to this in the short term, but in the long term a well-funded adversary will develop countermeasures. I do not believe the advantage of "AI" is so one-sided.

  • (Score: 5, Interesting) by VLM on Wednesday April 25 2018, @02:19PM (6 children)

    by VLM (445) on Wednesday April 25 2018, @02:19PM (#671628)

    The last line of the fourth paragraph of the summary handles that:

    This undermines strategic stability because even if the state possessing these capabilities has no intention of using them, the adversary cannot be sure of that.

    The specific abstracted example: if you mathematically find a local, temporary maximum in the reward/risk ratio, even if that ratio isn't infinite, and especially if the long-term trend is downward, then it's time to launch.

    There's an interesting theory about war: nobody ever fights a war (or continues fighting one) when the outcome is assured; it's an odds game.

    I think the theory of the story is something like: right now, "I dunno the odds of winning but they're really bad" means you don't launch, but with advanced enough AI, "the best odds of winning are the afternoon of April 28th 2018 and the odds are trending lower for the next 50 years" means you kinda have to launch then.
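
    To make the shape of that argument concrete, here is a minimal Python sketch. The odds series and the peak-finding rule are entirely invented for illustration; nothing like them appears in TFA.

        # Hypothetical sketch of the "local maximum in the odds" pressure described above.
        # The projected-odds series is made up; no real model produced these numbers.

        def pressure_points(odds):
            """Indices where projected win odds hit a local maximum while the
            long-term trend afterward is downward -- the 'launch now' moments."""
            points = []
            for t in range(1, len(odds) - 1):
                local_max = odds[t - 1] < odds[t] > odds[t + 1]
                trending_down = odds[-1] < odds[t]  # crude long-term trend check
                if local_max and trending_down:
                    points.append(t)
            return points

        # Made-up projected P(win) per year:
        projected = [0.05, 0.08, 0.12, 0.10, 0.07, 0.04, 0.03]
        print(pressure_points(projected))  # [2] -- peak pressure at t=2, even though 0.12 is terrible

    The point isn't the particular numbers; it's that a forecast saying "it never gets better than this" turns a terrible 12% into the best option you will ever have.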

    • (Score: 3, Interesting) by Anonymous Coward on Wednesday April 25 2018, @03:24PM (4 children)

      by Anonymous Coward on Wednesday April 25 2018, @03:24PM (#671652)

      But we are talking about real scorched-earth war here. You don't win; there is nothing left to win. You just destroy the prize in the attempt to take it.
      Launching a first strike in a nuclear war has a purpose only if the adversary has a massive invasion force ready and your defeat is imminent. Then you preempt that by destroying the enemy's military concentrations and industrial assets. It is useless as a strategy of conquest, so nobody will start one on the pretense of having a small advantage. Nuclear weapons are there strictly for the times when you are disadvantaged and may become a stronger enemy's prey.

      • (Score: 3, Interesting) by Virindi on Wednesday April 25 2018, @04:36PM

        by Virindi (3484) on Wednesday April 25 2018, @04:36PM (#671690)

        This is also dead on.

        Additionally: there is no way even the best AI is going to be able to give you 100% certainty of hitting every enemy weapon. So any first strike comes with a nonzero chance of retaliation. There is little reason to take this risk except in the last-stand kind of scenario, as you said.

        And even if you DO win, the entire rest of the world is going to turn against that one country that launched a nuclear first strike. Good luck with that one.
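
        A back-of-the-envelope expected-value check (all numbers made up) shows how hard that nonzero retaliation chance is to overcome:

            # Toy expected-value calculation for a first strike; every number is invented.
            # Even a wildly optimistic 99% disarm probability loses to a catastrophic downside.

            def first_strike_ev(p_clean_disarm, gain, retaliation_cost):
                """EV of striking first: succeed with p_clean_disarm, else absorb retaliation."""
                return p_clean_disarm * gain + (1 - p_clean_disarm) * retaliation_cost

            print(round(first_strike_ev(p_clean_disarm=0.99, gain=100, retaliation_cost=-1_000_000), 2))
            # -9901.0 -- still deeply negative, so the gamble only makes sense when the
            # alternative (imminent defeat) is even worse, i.e. the last-stand scenario.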

      • (Score: 1) by tftp on Wednesday April 25 2018, @09:34PM

        by tftp (806) on Wednesday April 25 2018, @09:34PM (#671878) Homepage
        Modern wars are often waged not to acquire a prize but to deny it to the adversary. The war aims at destroying the enemy's country and killing its leader.
      • (Score: 2) by legont on Thursday April 26 2018, @12:58AM (1 child)

        by legont (4179) on Thursday April 26 2018, @12:58AM (#671975)

        Let's not concentrate on an imminent invasion. An imminent economic catastrophe, such as credit being cut off in a time of crisis, is a strong enough stimulus to go all-out nuclear.

        Back to the point: there is a very good thousand-page book about the basic impossibility of all-out war in modern society, written by the great economist Norman Angell and published a few years before WWI. Highly recommended. https://en.wikipedia.org/wiki/The_Great_Illusion [wikipedia.org]

        --
        "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
        • (Score: 0) by Anonymous Coward on Thursday April 26 2018, @02:55PM

          by Anonymous Coward on Thursday April 26 2018, @02:55PM (#672167)

          We are talking about an AI deciding about nuclear war, not about ambitious, egotistical, militaristic, absolutist rulers dreaming of pinning down other rulers and taking away their toys.

          WWI was caused by irrational thinking and gambling, and that can always happen when reason takes a back seat and is allowed to give only yes-or-no answers to stupid and loaded questions. WWI-era operations and technology were still unable to inflict destruction as massive and thorough as is possible today. Had the Central Powers won the war, they would have had to sanitize only limited strips of terrain, and would still have acquired great territorial and industrial gains.

          But this specific TFA is about AI decision-making. Of course, if you instruct an AI to ignore the aspects you don't like and to optimize a human-mandated, partially irrational goal, then yes, it may advise the decision maker that the awaited conditions are satisfied at some point in time.

          Furthermore, my (OK, since I am an AC, GP's) observation about destroying the goal still holds: if credit is cut, burning the bank down won't conjure the needed money out of the smoke. Countries don't require money; they require food, goods, or materials for their industries, and a loan just means "we want to pay for it in lots of chunks, starting after a while". So destroying all the goods they wish for, as well as probably the transportation assets needed to ship them over there, would be a step in the wrong direction.

    • (Score: 3, Interesting) by Virindi on Wednesday April 25 2018, @04:26PM

      by Virindi (3484) on Wednesday April 25 2018, @04:26PM (#671681)

      I think the theory of the story is something like: right now, "I dunno the odds of winning but they're really bad" means you don't launch, but with advanced enough AI, "the best odds of winning are the afternoon of April 28th 2018 and the odds are trending lower for the next 50 years" means you kinda have to launch then.

      Thankfully, AI that can provide this kind of certainty on such an abstract topic, with so little prior data, is purely the stuff of science fiction. I personally do not think such certainty is even possible. AI is just a filter, a pattern-recognition system... you cannot pick out more certainty than the signal itself contains, no matter how good your pattern recognition is. And a lot of the factors that go into war are not things that could be easily known in advance, even with perfect knowledge of the world.

      On the other hand, a general pronouncement is easy, and humans can already make one: bet on the side with more industrial capacity and/or a higher population; bet on the side with more advanced weapons and/or more of them; bet on the side with more powerful allies. Beyond that kind of thing, the error bars start to grow quickly.
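
      One way to see that "no more certainty than the signal" floor, using distributions invented purely for illustration: even the optimal decision rule cannot beat the overlap between the outcomes it is trying to separate.

          # Sketch of the signal floor: when two outcomes produce overlapping
          # observations, even the Bayes-optimal classifier is capped by that overlap.
          # The Gaussians here are invented for illustration.
          import random
          import statistics

          random.seed(0)
          MU_WIN, MU_LOSE, SIGMA = 1.0, -1.0, 2.0  # heavy overlap: a weak signal

          def bayes_optimal(x):
              # Equal priors, equal variance: the optimal rule is just the midpoint.
              return "win" if x > (MU_WIN + MU_LOSE) / 2 else "lose"

          samples = [("win", random.gauss(MU_WIN, SIGMA)) for _ in range(50_000)]
          samples += [("lose", random.gauss(MU_LOSE, SIGMA)) for _ in range(50_000)]
          accuracy = statistics.mean(bayes_optimal(x) == label for label, x in samples)
          print(f"optimal accuracy: {accuracy:.3f}")  # ~0.69; no smarter filter can do better

      Better pattern recognition moves you toward that ceiling; it does not move the ceiling itself.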