
posted by janrinok on Wednesday April 25 2018, @12:36PM
from the learn-to-love-the-bomb dept.

A new RAND Corporation paper finds that artificial intelligence has the potential to upend the foundations of nuclear deterrence by the year 2040.

While AI-controlled doomsday machines are considered unlikely, the hazards of artificial intelligence for nuclear security lie instead in its potential to encourage humans to take apocalyptic risks, according to the paper.

During the Cold War, the condition of mutual assured destruction maintained an uneasy peace between the superpowers by ensuring that any attack would be met by a devastating retaliation. Mutual assured destruction thereby encouraged strategic stability by reducing the incentives for either country to take actions that might escalate into a nuclear war.

The new RAND publication says that in coming decades, artificial intelligence has the potential to erode the condition of mutual assured destruction and undermine strategic stability. Improved sensor technologies could introduce the possibility that retaliatory forces, such as submarine-based and mobile missiles, could be targeted and destroyed. Nations may then be tempted to pursue first-strike capabilities as a means of gaining bargaining leverage over their rivals, even with no intention of carrying out an attack, researchers say. That undermines strategic stability: even if the state possessing these capabilities has no intention of using them, the adversary cannot be sure of that.

"The connection between nuclear war and artificial intelligence is not new, in fact the two have an intertwined history," said Edward Geist, co-author on the paper and associate policy researcher at the RAND Corporation, a nonprofit, nonpartisan research organization. "Much of the early development of AI was done in support of military efforts or with military objectives in mind."

[...] Under fortuitous circumstances, artificial intelligence also could enhance strategic stability by improving accuracy in intelligence collection and analysis, according to the paper. While AI might increase the vulnerability of second-strike forces, improved analytics for monitoring and interpreting adversary actions could reduce miscalculation or misinterpretation that could lead to unintended escalation.


Original Submission

 
  • (Score: 4, Insightful) by VLM on Wednesday April 25 2018, @02:59PM (4 children)

    by VLM (445) on Wednesday April 25 2018, @02:59PM (#671642)

    With respect to liberal arts grads talking about AI, the only mental model they have is Harry Potter magic, so they tend to make stupid predictions having no connection with reality, like leftist economists or similar.

    In the 50s, a megaflop of processing power predicted the weather a couple of days in advance. Therefore, with a thousand times the processing power, a gigaflop should predict the weather accurately a thousand times further out; let's call it a decade. Yeah, that didn't work out so well, even with bazillions of exaflops of parallel processing. You do get a gain, maybe 2 or 3 days more prediction, which is useful; not a decade of prediction.

    Likewise, liberal arts grads who know nothing about science or math or economics would assume that if an exaflop of processing predicts military activity six hours in advance, then a million times the processing power means we'll predict military activity a million times longer into the future: let's say a quarter million days, or roughly six centuries, close enough to perfect not to matter. The real-world effect of an AI a million times smarter than our existing algos that work six hours into the future is far more likely to be around eight hours of foresight, or perhaps twelve at the most.
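
    Here's the scaling argument as a back-of-envelope Python sketch; the base horizon and the hours-gained-per-doubling are made-up knobs, tuned to the six-hour and eight-to-twelve-hour numbers above:

        import math

        # Toy model: for a chaotic system where errors compound, the useful
        # forecast horizon grows with the *logarithm* of compute, not linearly.
        # base_horizon and doubling_gain are made-up knobs, not measured values.
        def horizon_hours(compute_multiplier, base_horizon=6.0, doubling_gain=0.2):
            return base_horizon + doubling_gain * math.log2(compute_multiplier)

        for mult in (1, 1_000, 1_000_000):
            print(f"{mult:>9,}x compute -> ~{horizon_hours(mult):.0f} hours of foresight")
        # 1x -> 6h, 1,000x -> ~8h, 1,000,000x -> ~10h: a real gain, but nothing
        # like the million-fold extrapolation.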

    That doesn't mean AI is worthless. Take stock market trades: at a hundred million trades per 8-hour day and a hundred bucks per transaction, an AI with an extra two hours of insight could foresee about 2.5 billion bucks' worth of trades. Surely you could profit a couple percent off that somehow. So, with much financial hand-waving: if the AI squeezes a hundred million bucks out of the stock market per day but cost a hundred billion bucks to develop, did it run a profit, given current and predicted interest rates? Damn if I know.
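
    The hand-waving arithmetic, spelled out (every input below is a guess from the paragraph above, not market data):

        trades_per_day = 100_000_000   # guessed trades per 8-hour session
        dollars_per_trade = 100        # guessed average transaction size
        insight_hours, session_hours = 2, 8

        visible = trades_per_day * dollars_per_trade * insight_hours / session_hours
        print(f"trade volume inside the window: ${visible:,.0f}")   # $2,500,000,000

        edge = 0.02                    # "a couple percent" of that flow
        daily_take = visible * edge
        print(f"daily take at a 2% edge: ${daily_take:,.0f}")       # $50,000,000
        # Against a $100B development cost that's ~2,000 trading days (~8 years)
        # to break even, before interest -- hence the "damn if I know".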

    The other aspect of the problem is that the article is deliberately abstract and obtuse. So make a game: find a buddy and roll 50 dice under 50 cups, where each cup represents a year. Each turn, the players decide whether to declare war. If war is declared, flip the next cup; the declarer then rolls a die, and if he beats the die under the cup, he just won the nuclear war. The players have no idea what number is under the next cup: is it a 1, which is pretty easy to beat, or a 6, which can't be beaten and is an auto-lose? For the hell of it, at the end of each turn flip the cup anyway and see what would have happened if someone had gone to war. Fifty cups/years later, the game is over. Given intelligent enough players, it's predictable that most will never go to war, since the random dice are on average loaded slightly against the attacker, by one pip, by design. This makes the game boring as hell.

    Now add AI. At the start of each turn, each player publicly rolls a d20; on a natural 20, that player discovers AI, and for the rest of the game gets to look under all the remaining cups and find all the 1s. Now every game ends in nuclear war, one way or the other. Either the AI player gets greedy and rolls against a die he knows is a 1, because his odds are exactly 5:6 while the other player's are somewhat under 50:50; or the other player declares war the instant AI is discovered, because he knows the AI player will only launch when his own odds are better than 3:6, and his best remaining chance is to strike while the AI player is stuck with whatever random die happens to be under the current cup. In a way this version is also pretty boring: 100% odds of nuclear war, started by the player who doesn't roll the natural 20; and even if the other player is a pacifist, meaning suicidal, darn near 100% odds of nuclear war once the AI player sees a 1.
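
    If you'd rather roll the dice in software, here's a quick Monte Carlo of the game. I had to pick a concrete combat rule (attacker's d6 must strictly beat the cup's d6, ties lose), so treat the exact percentages as an assumption:

        import random

        TURNS, SIMS = 50, 100_000

        def play_one():
            # Each turn both players roll a d20; a natural 20 discovers AI.
            # Per the analysis above, the *other* player then launches at once
            # against the next (still random) cup.  Assumed combat rule:
            # attacker's d6 must strictly beat the cup's d6, ties lose.
            for _ in range(TURNS):
                p1_ai = random.randint(1, 20) == 20
                p2_ai = random.randint(1, 20) == 20
                if p1_ai or p2_ai:
                    cup, attack = random.randint(1, 6), random.randint(1, 6)
                    return "attacker wins" if attack > cup else "AI player wins"
            return "peace"

        results = [play_one() for _ in range(SIMS)]
        for outcome in ("peace", "attacker wins", "AI player wins"):
            print(outcome, results.count(outcome) / SIMS)
        # ~0.6% of games stay peaceful; the attacker wins ~42% of the wars
        # under this rule, vs the 1/3 quoted in the follow-up comment -- the
        # gap is the unspecified "one pip" of loading against the attacker.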

    I would have to model the situation where both players discover AI simultaneously. If the non-AI player later discovers AI too, the existing AI player should declare war immediately if his odds are better than 3:6, right? And if both players know there's a 1 coming up, they're both gonna wanna launch, right?
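
    A quick sketch of that simultaneous case, assuming both players launch on the first known "1" and a coin flip decides who fires first:

        import random

        def both_ai_war():
            # Both players see the same cups, so both wait for the first known
            # "1" and race to launch; assume a coin flip picks who fires first.
            first = random.choice(("p1", "p2"))
            other = "p2" if first == "p1" else "p1"
            return first if random.randint(1, 6) > 1 else other

        wins = [both_ai_war() for _ in range(100_000)]
        print(wins.count("p1") / len(wins))   # ~0.5 -- mutual AI just restores
        # the coin flip, but at 100% odds of war instead of MAD's ~0%.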

    Technically a better model than 50 cups for 50 years would be 100 cups for 50 years: two cups, one year. Two cups, one year... that reminds me of a famous internet video involving two girls and one cup. Noobs unfamiliar with this game theory problem should really google 2G1C for further research purposes, if they're somewhere it's safe to view NSFW material (see, I'm not a total asshole, I put in a slight warning).

    Seriously though, my little homemade game theory simulation accurately depicts what the paper is proposing: no future oracle means MAD holds; a perfect future oracle on one side means mandatory fun day cannot be avoided by rational players. The reality, of course, is that stupid liberal arts grads (or RAND business majors) don't understand anything about statistics or scalability or predictions or AI, and just see the whole AI topic as identical to Harry Potter-grade bullshit about peering into the future. A lot of leftist types have pretty severe issues with being told Harry Potter is not real; this article is a living example of why people need to understand that fact, no matter how much it hurts their feelz.

  • (Score: 3, Interesting) by VLM on Wednesday April 25 2018, @03:28PM (3 children)

    by VLM (445) on Wednesday April 25 2018, @03:28PM (#671653)

    Oh and in my nuclear war game, I forgot to express the odds:

    Without AI, the odds of winning a nuclear war that you start are 2:6, or one third; that is, the odds are two thirds that you'll lose. So the odds of a war starting at all are roughly zero, because whoever starts a war has a 2/3 chance of losing it.

    With AI, on the turn AI is discovered, the odds of winning against a random die remain 1/3 for both sides, so the opposing non-AI player starts a war that turn and wins 1/3 of the time. AKA the dude who discovers AI wins 2/3 of the time.

    With AI on later turns, we'll assume AI is discovered early enough that the AI player still has at least one hidden "1" die ahead, meaning his odds of winning are 5/6. So if the non-AI player is a crazy suicidal pacifist and doesn't immediately start a war, the pacifist has 5/6 odds of dying in a nuclear war, versus only 4/6 odds if he launches immediately.

    Leading to the peculiar situation where the behavior least likely to get yourself and your nation killed, by a ratio of 4/6 vs 5/6, is to start a nuclear war the instant it's discovered, or strongly believed, that the other side has AI.
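
    The grim arithmetic, plugging in the fractions above:

        from fractions import Fraction

        die_if_launch_now = Fraction(4, 6)  # your 1/3-odds attack fails
        die_if_wait       = Fraction(5, 6)  # the AI player attacks a known 1
        print(die_if_launch_now < die_if_wait)  # True: striking first is the
        # survival-maximizing move once the other side has AI.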

    There are other implications. A player who decides to invest in AI raises the odds of nuclear war from 0% to 100%, but his odds of winning that war go from 1/3 before discovery to 2/3 on the turn of discovery, when the other side launches; and post-discovery, if he's playing against a pacifist, his odds of winning rise from 1/3 to 5/6. Inventing AI means 100% odds of nuclear war, but your individual odds of winning that war increase from 1/3 to either 2/3 against a rational opfor or 5/6 against a crazy pacifist. So there's a strong motivator to research AI by rolling that d20.

    Essentially the game simplifies down to this: you're rolling a d20, and when someone rolls a natural 20 and discovers AI, they have a 2/3 chance of winning the game when their opfor launches immediately against a random die. And there are enough turns (the remainder of time?) that the odds of someone eventually rolling a natural 20 are 100%.

    Of course, the gains from discovering AI secretly are so high that even if you signed a treaty with the other player, you'd be an idiot not to keep rolling dice in secret, even if secret rolling takes 100 times longer than rolling in public. This is not a problem that can be solved by treaties; all a treaty can do is kick the can down the road. With a treaty and secret rolling that takes 100 times longer, the 100% odds of nuclear war just take, on average, 100 times longer to arrive, and the side that discovers AI first still wins the war 2/3 of the time.
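
    How long the can gets kicked, roughly (assuming "takes 100 times longer" means 1/100 the per-turn discovery chance):

        # Two players rolling openly: per-turn chance someone discovers AI.
        p_open = 1 - (19 / 20) ** 2
        print(1 / p_open)              # ~10.3 turns, on average, to discovery

        # Secret rolling at 1/100 the rate: discovery chance 1/2000 per player.
        p_secret = 1 - (1 - 1 / 2000) ** 2
        print(1 / p_secret)            # ~1000 turns: the war comes later,
        # not less certainly, and the discoverer still wins 2/3 of the time.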

    One semi-realistic, treaty-ish way to survive is changing AI discovery from a sudden step function to tit-for-tat (which has nothing to do with Mardi Gras in New Orleans, although it should); a deep analysis of that will take longer than consuming this cup of freshly brewed black tea took me. Intuitively, it seems a nice smooth linear function, where both sides slowly slope up from no AI at all to perfect Harry Potter magic-accurate AI, would not result in warfare... or would it?

    I suppose if you knew the opfor had a 5/6 chance of winning, you could create treaty obligations far outside the bounds of the game, such that whenever a side saw its odds rise above 3:6, it had to declare a federal holiday and give all the missile crews the day off, so that half the time, per treaty obligations, your missiles are down. Which, ironically, outside the game would probably cause WWIII, because that would be a great time to invade Europe going either west or east, probably leading to nuclear escalation at a later time. So WWIII gets declared when one opfor has two "1" dice in a row, as proven by AI. Or, again, you're better off starting early, so two turns before the opfor gets a double "1" is when you roll the tanks into Europe... hmm, gotta run the odds on that one.
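
    First pass at those odds, assuming the oracle's accuracy q ramps up smoothly and the strict-beat combat rule from the earlier sketch:

        from fractions import Fraction

        p_known_one = Fraction(5, 6)    # win prob attacking a cup known to hide a 1
        p_blind     = Fraction(15, 36)  # win prob attacking a blind cup (strict-beat rule)

        # q = chance your imperfect oracle has genuinely found a "1"
        for tenths in range(0, 11, 2):
            q = Fraction(tenths, 10)
            p_win = q * p_known_one + (1 - q) * p_blind
            print(f"oracle accuracy {float(q):.1f} -> win prob {float(p_win):.3f}")
        # The attack becomes a better-than-even bet once q > 1/5, so the smooth
        # ramp-up doesn't obviously keep the peace either.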

    • (Score: 2) by DannyB on Wednesday April 25 2018, @06:15PM (2 children)

      by DannyB (5839) Subscriber Badge on Wednesday April 25 2018, @06:15PM (#671756) Journal

      whoever starts a war has a 2/3 chance of losing it.

      That assumes the sanity of the person able to start a nuclear war.

      It also assumes away the situation where two differently crazy world leaders escalate until it becomes a nuclear war, with neither of them even understanding what is happening as it spins out of control.

      One day the people of planet Earth might put such people into power.

      --
      The lower I set my standards the more accomplishments I have.
      • (Score: 3, Insightful) by Azuma Hazuki on Wednesday April 25 2018, @08:38PM (1 child)

        by Azuma Hazuki (5086) on Wednesday April 25 2018, @08:38PM (#671845) Journal

        Hate to break this to you, but...Iran, North Korea, Russia, the United States...it's already happened.

        --
        I am "that girl" your mother warned you about...
        • (Score: 2) by DannyB on Wednesday April 25 2018, @08:42PM

          by DannyB (5839) Subscriber Badge on Wednesday April 25 2018, @08:42PM (#671847) Journal

          Yes. I am uncontrollably, involuntarily sarcastic.

          --
          The lower I set my standards the more accomplishments I have.