
posted by janrinok on Wednesday April 25 2018, @12:36PM
from the learn-to-love-the-bomb dept.

A new RAND Corporation paper finds that artificial intelligence has the potential to upend the foundations of nuclear deterrence by the year 2040.

While AI-controlled doomsday machines are considered unlikely, the hazards of artificial intelligence for nuclear security lie instead in its potential to encourage humans to take apocalyptic risks, according to the paper.

During the Cold War, the condition of mutual assured destruction maintained an uneasy peace between the superpowers by ensuring that any attack would be met by a devastating retaliation. Mutual assured destruction thereby encouraged strategic stability by reducing the incentives for either country to take actions that might escalate into a nuclear war.

The new RAND publication says that in coming decades, artificial intelligence has the potential to erode the condition of mutual assured destruction and undermine strategic stability. Improved sensor technologies could introduce the possibility that retaliatory forces such as submarines and mobile missiles could be targeted and destroyed. Nations may be tempted to pursue first-strike capabilities as a means of gaining bargaining leverage over their rivals even if they have no intention of carrying out an attack, researchers say. This undermines strategic stability because even if the state possessing these capabilities has no intention of using them, the adversary cannot be sure of that.

"The connection between nuclear war and artificial intelligence is not new, in fact the two have an intertwined history," said Edward Geist, co-author on the paper and associate policy researcher at the RAND Corporation, a nonprofit, nonpartisan research organization. "Much of the early development of AI was done in support of military efforts or with military objectives in mind."

[...] Under fortuitous circumstances, artificial intelligence also could enhance strategic stability by improving accuracy in intelligence collection and analysis, according to the paper. While AI might increase the vulnerability of second-strike forces, improved analytics for monitoring and interpreting adversary actions could reduce miscalculation or misinterpretation that could lead to unintended escalation.


  • (Score: 5, Interesting) by Grishnakh on Wednesday April 25 2018, @01:01PM (15 children)

    by Grishnakh (2831) on Wednesday April 25 2018, @01:01PM (#671604)

This is why we haven't detected any alien civilizations. They destroyed themselves in nuclear wars, just like we will pretty soon, as this article suggests. It's only a matter of time: any biological species that has emotions will inevitably destroy itself once its technology advances enough.

There was an "Outer Limits" episode about this back in the late 90s. A disgruntled physics student builds a couple of small fusion bombs, showing that it was actually very easy once he figured out some key thing. He dies in the end, before he can cause utter disaster, but it's only a matter of time before other smart-enough people figure it out too, and then basically anyone can build a fusion bomb. If this actually happens in our reality, just imagine the implications: instead of some random nutcase shooting up a Waffle House or mowing down some women with a van, we'll have random nutcases setting off bombs that destroy entire cities.

    It's not just nuclear bombs either. Frank Herbert wrote a fairly interesting book back in the early 80s called The White Plague, where basically a disgruntled biologist created an engineered virus which killed off almost all the women in the world. And surely everyone here remembers the movie "12 Monkeys". We're making impressive leaps now with genetic engineering; what's going to stop some nut from creating a virus that wipes out most of civilization before we can counter it?

    • (Score: 3, Insightful) by Anonymous Coward on Wednesday April 25 2018, @01:28PM (5 children)

      by Anonymous Coward on Wednesday April 25 2018, @01:28PM (#671611)

We don't need rogue scientists or dictators. We are already working on eliminating ourselves, simply by rendering the planet uninhabitable for ourselves.

      • (Score: 4, Insightful) by Grishnakh on Wednesday April 25 2018, @03:21PM (4 children)

        by Grishnakh (2831) on Wednesday April 25 2018, @03:21PM (#671651)

Maybe, but I'm not really convinced of that. Global warming isn't really threatening our existence, it's threatening our lifestyle. Rising sea levels will render lots of low-lying cities uninhabitable, sure, but there's plenty of land higher up; the problem is managing the move. And extreme weather is a problem, sure, but it's not going to make humans go extinct, it's just going to make us want to stay indoors a lot more. Our future, environmentally, probably looks a lot like 1982's Blade Runner. That sucks, but it's not extinction.

        We humans have been adapting to bad weather for much of our existence. It's the reason we're not all in Africa, and have spread around the world: we figured out how to deal with climates that we aren't biologically adapted for, and to take advantage of other food sources too. We're still doing it: people live on Antarctica now, which is only possible because of technology. We can do the same with whatever global warming throws at us, but it's going to suck if you live in a port city or like to spend time outside. But it isn't going to kill us off.

        • (Score: 4, Insightful) by LoRdTAW on Wednesday April 25 2018, @04:55PM (3 children)

          by LoRdTAW (3755) on Wednesday April 25 2018, @04:55PM (#671697) Journal

Climate change isn't just about sea levels. It's about preserving the massively complex biosphere we live in. Changes in one system affect others; these aren't isolated processes.

My major concern is how this affects water supplies, which are THE cornerstone of civilization. What happens when farmers can't grow food? Reservoirs dry up? Cities can't supply water to residents? Hydro plants shut down and become useless? Steam and coal plants shut down from lack of feed water? And so on. The short answer is doom.

          • (Score: 2) by DannyB on Wednesday April 25 2018, @06:22PM

            by DannyB (5839) Subscriber Badge on Wednesday April 25 2018, @06:22PM (#671760) Journal

            Don't worry!

            The illusion of unlimited, cheap, clean drinking water will be with us for as long as humanity exists!

            Oh, wait . . .

            --
            The people who rely on government handouts and refuse to work should be kicked out of congress.
          • (Score: 1, Interesting) by Anonymous Coward on Wednesday April 25 2018, @06:36PM

            by Anonymous Coward on Wednesday April 25 2018, @06:36PM (#671772)

Somewhat building on what GP said: this won't finish off humanity. If 90% of humanity died tomorrow, the human species would still be one of the most numerous mammalian species in the world, and the problems with water etc. wouldn't really be problems any more.

          • (Score: 5, Interesting) by Grishnakh on Wednesday April 25 2018, @07:02PM

            by Grishnakh (2831) on Wednesday April 25 2018, @07:02PM (#671787)

No, it's not doom. Sure, you might have some nasty resource wars, and you'll probably have giant famines with millions or even billions dead, but think about this: what would happen if 6 billion people died tomorrow? We'd still have over 1.5 billion people. That's not extinction, it's just a massive shift in civilization. Humans have gone through that before, and didn't go extinct.

I'm not trying to minimize the consequences of climate change, I'm just pointing out that it's extremely unlikely to result in human extinction (unless you're predicting it'll lead to massive nuclear war). It might wind up looking like some horribly dystopic sci-fi where much of the population is dead and the survivors are living in walled-off areas to protect themselves from zombies or whatever, but that still is not extinction; humanity can bounce back from that. Even Star Trek's official history had humans going through a horrible WWIII which presumably wiped out a lot of the population before Zefram Cochrane invented the continuum distortion drive and met the Vulcans. Climate change by itself can't kill us all off.

            As for water supplies, I'm not sure where you're getting the idea that the earth will turn into a desert. There'll still be plenty of water (most of the planet is covered in it), and even freshwater isn't going anywhere as long as there's evaporation, clouds, rainfall, etc. Where the water is located could certainly change, though, rendering much of our existing hydro infrastructure useless, and this could have catastrophic results for many places dependent on it. But that's not going to wipe out every human on the planet. A genetically-engineered virus created by a rogue scientist, however, really could. Easily-built fusion bombs probably won't, but they could potentially wipe out so many that civilization collapses and the survivors are unable to rebuild (I don't think global climate change will have such a dramatic effect so quickly that this would happen; it'll be slower and people will adapt).

    • (Score: 5, Insightful) by DannyB on Wednesday April 25 2018, @06:06PM (5 children)

      by DannyB (5839) Subscriber Badge on Wednesday April 25 2018, @06:06PM (#671746) Journal

      then basically anyone can build a fusion bomb. If this actually happens in our reality, just imagine the implications: instead of some random nutcase shooting up a Waffle House or mowing down some women with a van, we'll have random nutcases setting off bombs that destroy entire cities.

      Sometimes I use a similar argument about guns. But I'll just tell you what people tell me . . .

      Fusion Bombs don't kill (billions of) people, People kill (billions of) People.

      Fusion Bombs don't make humans extinct, People make humans extinct.

So stop focusing on the Fusion Bombs. Everyone should be allowed to have one. No registration, or they could come to take away our Fusion Bombs. No background checks, or this would exclude crazy people and others with an increased chance of harming their neighborhood with Fusion Bombs. There are no regulations, no matter how sensible, that can be applied to Fusion Bombs. You can have my Fusion Bomb when you pry it from my vaporized, irradiated fingers.

      --
      The people who rely on government handouts and refuse to work should be kicked out of congress.
      • (Score: 4, Funny) by bob_super on Wednesday April 25 2018, @06:35PM (1 child)

        by bob_super (1357) on Wednesday April 25 2018, @06:35PM (#671771)

The only thing to stop a madman with a fusion bomb is a good guy with a fusion bomb.
We all should have fusion bombs, so that we can resist an oppressive government ordering our bomb-toting neighbors in the military to turn their fusion bombs against our fusion bombs.

        • (Score: 2) by DannyB on Wednesday April 25 2018, @08:48PM

          by DannyB (5839) Subscriber Badge on Wednesday April 25 2018, @08:48PM (#671853) Journal

          But if people were only armed with knives, they would still kill other people!

          But maybe slightly fewer people than with a Fusion Bomb. And it might be easier for a couple able-bodied persons to stop them.

          --
          The people who rely on government handouts and refuse to work should be kicked out of congress.
      • (Score: 2) by legont on Thursday April 26 2018, @12:42AM (1 child)

        by legont (4179) on Thursday April 26 2018, @12:42AM (#671969)

A relatively small number of dedicated people could just pick up a fusion bomb from the US military. While not so simple, it is way easier than making one. True, criminals haven't done it yet, but probably just because they see no practical value. If they find said value, they will arm themselves in no time.

P.S. No, there is no strong central security code dispatched from the president's doomsday case. A field commander can arm and launch his rockets with just himself and his direct reports.

        --
        "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
        • (Score: 2) by bob_super on Thursday April 26 2018, @01:13AM

          by bob_super (1357) on Thursday April 26 2018, @01:13AM (#671982)

He would have to remember that the secret arming code is 00000000.
I know, I know, it was officially changed. Who wants to bet that it's now 11111111? (Missile-shaped, can't forget, easy to type under stress.)

      • (Score: 0) by Anonymous Coward on Thursday April 26 2018, @12:44AM

        by Anonymous Coward on Thursday April 26 2018, @12:44AM (#671970)

Fuck that, I just want to chill with my recreational nukes.

    • (Score: 3, Interesting) by pdfernhout on Thursday April 26 2018, @12:48AM (2 children)

      by pdfernhout (5984) on Thursday April 26 2018, @12:48AM (#671972) Homepage

      Thus my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

      Elaborated on here: https://www.pdfernhout.net/recognizing-irony-is-a-key-to-transcending-militarism.html [pdfernhout.net]
      "Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?
          Nuclear weapons are ironic because they are about using space age systems to fight over oil and land. Why not just use advanced materials as found in nuclear missiles to make renewable energy sources (like windmills or solar panels) to replace oil, or why not use rocketry to move into space by building space habitats for more land?
          Biological weapons like genetically-engineered plagues are ironic because they are about using advanced life-altering biotechnology to fight over which old-fashioned humans get to occupy the planet. Why not just use advanced biotech to let people pick their skin color, or to create living arkologies and agricultural abundance for everyone everywhere?
          These militaristic socio-economic ironies would be hilarious if they were not so deadly serious. ...
          Likewise, even United States three-letter agencies like the NSA and the CIA, as well as their foreign counterparts, are becoming ironic institutions in many ways. Despite probably having more computing power per square foot than any other place in the world, they seem not to have thought much about the implications of all that computer power and organized information to transform the world into a place of abundance for all. Cheap computing makes possible just about cheap everything else, as does the ability to make better designs through shared computing. I discuss that at length here [in Post-Scarcity Princeton].
          There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ..."

      --
      The biggest challenge of the 21st century: the irony of technologies of abundance used by scarcity-minded people.
      • (Score: 3, Interesting) by Grishnakh on Thursday April 26 2018, @01:15PM (1 child)

        by Grishnakh (2831) on Thursday April 26 2018, @01:15PM (#672134)

        These things don't really make sense.

        >Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?

        Huh? Military robots are used to avoid humans dying in combat, both for ethical reasons (people dying is bad, so while you can't control the enemy you can at least try to minimize death on your side) and practical ones (your own soldiers are a valuable resource, you don't want to get them killed unnecessarily). No one is using military action to force people to work in factories; that's just a ridiculous claim. Currently, military force is mainly about control of resources and global economic power.

        >Nuclear weapons are ironic because they are about using space age systems to fight over oil and land.

        And then this line contradicts the one above, more correctly pointing out why military conflict exists in the modern age. It's about control of resources, which directly affects economic power.

        >Why not just use advanced materials as found in nuclear missiles to make renewable energy sources (like windmills or solar panels) to replace oil, or why not use rocketry to move into space by building space habitats for more land?

        There's a bit of a point there with the first clause, but not with the second. You can build ICBMs using technology from the 1960s. You can't build space habitats with 1960s technology; we're still nowhere near that level of capability 50+ years later. Granted, we haven't put as much effort into this as we could have, but still, there's a huge distance between developing nuclear missiles and building livable space habitats that could house millions or billions of people.

        >Biological weapons like genetically-engineered plagues are ironic because they are about using advanced life-altering biotechnology to fight over which old-fashioned humans get to occupy the planet. Why not just use advanced biotech to let people pick their skin color, or to create living arkologies and agricultural abundance for everyone everywhere?

        Huh? Letting people pick their skin color isn't going to change international political problems, besides most people actually like their skin color, they just don't like it when other people treat them badly because of it. And biological weapons can be designed by a handful of scientists in a lab; you can't create sufficiently large arcologies (it's spelled with a 'c', not a 'k'; if you had ever visited Arcosanti you'd know this) or "agricultural abundance" with a handful of people like that. We have lots of giant corporations like Monsanto working on the "agricultural abundance" bit, maybe not the way you'd like (terminator genes and all), but greater yields means greater profits for agribusinesses so it's not like they're working to keep food supplies limited), and honestly we don't really have any problems with shortages of food that come from agricultural problems. Shortages are caused by political problems only, and you can't fix that in a lab.

        >they seem not to have thought much about the implications of all that computer power and organized information to transform the world into a place of abundance for all.

        Sticking a bunch of computers in a building isn't going to magically create this "abundance" you keep waxing poetically about.

        >Cheap computing makes possible just about cheap everything else

        No, it doesn't. Cheap computing doesn't do much to build those space-based habitats you mentioned earlier. We've had cheap computing for a while now and our capabilities for lifting mass out of this gravity well haven't changed much. Rocketry has gotten a little less expensive, but not orders of magnitude less, which you'd need for some of the sci-fi stuff you're dreaming of here. The main thing you really need is political change, and no amount of cheap computing will give you that, unless you're proposing to create an AI that we put in charge.

        • (Score: 2) by pdfernhout on Friday April 27 2018, @02:04AM

          by pdfernhout (5984) on Friday April 27 2018, @02:04AM (#672427) Homepage

Thanks for the reply. While you make good points, I'm focusing on a different aspect (or root-cause level) of these things than you are -- i.e. *why* are people being sent to die in combat, as in "five whys". To address just one of your comments in more detail, cheap computing is a big reason we are getting cheaper space flight right now, between cheap electronics, cheap command systems run by a few people, and better-designed materials and devices made possible by CAD/CAM, simulations, shared knowledge through the internet, free and open source software, and more. So, cheap computing has made it cheaper to lift stuff into orbit. With cheaper computing and its consequences, leading to things like cheaper solar panels or cheap hot/cold fusion and cheap laser launchers, prices will continue to fall.

          I explored the idea of cheap computing fostering collaboration and simulation of habitats further in a Space Studies Institute conference paper in 2001:
          https://kurtz-fernhout.com/oscomak/SSI_Fernhout2001_web.html [kurtz-fernhout.com]
          https://kurtz-fernhout.com/oscomak/KFReviewPaperForSSIConference2001.pdf [kurtz-fernhout.com]

          You wrote "The main thing you really need is political change...". And I agree -- but political change -- especially grassroots change -- often comes from new ways of thinking about issues. And changing that way of thinking is the reason for my point on focusing on the deeper irony behind so many resource allocation decisions these days. For a humorous twist on all this:
          https://pdfernhout.net/burdened-by-bags-of-sand.html [pdfernhout.net]

          And from a parody I wrote in 2009:
          "A post-scarcity "Downfall" parody remix of the bunker scene"
          https://groups.google.com/forum/#!msg/openmanufacturing/8qspPyyS1tY/vZacyDL86DIJ [google.com]
          Dialog of alternatively a military officer and Hitler:
          Officer: "It looks like there are now local digital fabrication facilities here, here, and here."
          Hitler: "But we still have the rockets we need to take them out?"
          "The rockets have all been used to launch seed automated machine shops for self-replicating space habitats for more living space in space."
          "What about the nuclear bombs?"
          "All turned into battery-style nuclear power plants for island cities in the oceans."
          "What about the tanks?"
          "The diesel engines have been remade to run biodiesel and are powering the internet hubs supplying technical education to the rest of the world."
          "I can't believe this. What about the weaponized plagues?"
          "The gene engineers turned them into antidotes for most major diseases like malaria, tuberculosis, cancer, and river blindness."
          "Well, send in the Daleks."
          "The Daleks have been re-outfitted to terraform Mars. There all gone with the rockets."
          "Well, use the 3D printers to print out some more grenades."
          "We tried that, but they only are printing toys, food, clothes, shelters, solar panels, and more 3D printers, for some reason."
          "But what about the Samsung automated machine guns?"
          "They were all reprogrammed into automated bird watching platforms. The guns were taken out and melted down into parts for agricultural robots."
          "I just can't believe this. We've developed the most amazing technology the world has ever known in order to create artificial scarcity so we could rule the world through managing scarcity. Where is the scarcity?"
          "Gone, Mein Fuhrer, all gone. All the technologies we developed for weapons to enforce scarcity have all been used to make abundance."
          "How can we rule without scarcity? Where did it all go so wrong? ... Everyone with an engineering degree leave the room ... now!"
[Cue long tirade on the general incompetence of engineers. :-) Then cue long tirade on how engineers could seriously have wanted to help the German workers not have to work so hard when the whole Nazi party platform was based on providing full employment using fiat dollars. Then cue long tirade on how engineers could have taken the socialism part seriously and shared the wealth of nature and technology with everyone globally.]
          Hitler: "So how are the common people paying for all this?"
          Officer: "Much is free, and there is a basic income given to everyone for the rest. There is so much to go around with the robots and 3D printers and solar panels and so on, that most of the old work no longer needs to be done."
          "You mean people get money without working at jobs? But nobody would work?"
          "Everyone does what they love. And they are producing so much just as gifts."
          "Oh, so you mean people are producing so much for free that the economic system has failed?"
          "Yes, the old pyramid scheme one, anyway. There is a new post-scarcity economy, where between automation and a a gift economy the income-through-jobs link is almost completely broken. Everyone also gets income as a right of citizenship as a share of all our resources for the few things that still need to be rationed. Even you."
          "Really? How much is this basic income?"
          "Two thousand a month."
          "Two thousand a month? Just for being me?"
          "Yes."
          "Well, with a basic income like that, maybe I can finally have the time and resources to get back to my painting..."

          --
          The biggest challenge of the 21st century: the irony of technologies of abundance used by scarcity-minded people.
  • (Score: 5, Interesting) by Virindi on Wednesday April 25 2018, @01:24PM (7 children)

    by Virindi (3484) on Wednesday April 25 2018, @01:24PM (#671610)

I don't think I buy the idea that some magic AI will suddenly 'solve' the problem of guaranteeing a clean first strike. How will it do so? Magically determine the location of 100% of enemy assets? But the enemy also has magic AI; can't they then use it to better hide their assets?

    RAND seems to be ignoring the adversarial nature of weapon development. What they are saying is equivalent to saying the following: "It is inherently much easier to detect an asset than to hide it. And, that balance will be shifted by a computer program which doesn't exist yet. With humans, the balance is different."

    There may be some truth to this in the short term, but in the long term a well-funded adversary will develop countermeasures. I do not believe the advantage of "AI" is so one-sided.

    • (Score: 5, Interesting) by VLM on Wednesday April 25 2018, @02:19PM (6 children)

      by VLM (445) on Wednesday April 25 2018, @02:19PM (#671628)

The last line of the third paragraph handles that:

      This undermines strategic stability because even if the state possessing these capabilities has no intention of using them, the adversary cannot be sure of that.

The specific abstracted example is: if you mathematically find a local, temporary maximum in the reward/risk ratio, even if that ratio isn't infinite, and especially if the long-term trend is downward, then it's time to launch.

There's an interesting theory about war that nobody ever fights a war (or continues fighting) when the outcome is assured; it's an odds game.

      I think the theory of the story is something like right now "I donno the odds of winning but they're really bad" means you don't launch, but with advanced enough AI, "the best odds of winning are the afternoon of April 28th 2018 and the odds are trending lower for the next 50 years" means you kinda have to launch then.
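Spelled out, the decision rule is tiny; the danger lies in trusting the forecast, not in the logic. A minimal sketch in Python (the numbers and the should_launch() helper are invented for illustration, not anything from the RAND paper):

```python
# Toy sketch of the "launch at the local peak" rule described above.
# All numbers are invented for illustration; should_launch() is a
# hypothetical helper, not anything from the RAND paper.

def should_launch(odds_forecast, threshold=0.5):
    """Return the index of the first local maximum of forecast win odds
    that exceeds `threshold` and is followed by a decline, else None."""
    for t in range(1, len(odds_forecast) - 1):
        prev, cur, nxt = odds_forecast[t - 1], odds_forecast[t], odds_forecast[t + 1]
        if prev <= cur > nxt and cur > threshold:
            return t
    return None

# "Best odds of winning are the afternoon of April 28th 2018, and the
# odds are trending lower for the next 50 years" -- the peak itself
# becomes the trigger:
forecast = [0.48, 0.52, 0.55, 0.53, 0.50, 0.45]  # hypothetical win odds per period
print(should_launch(forecast))  # -> 2
```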

      • (Score: 3, Interesting) by Anonymous Coward on Wednesday April 25 2018, @03:24PM (4 children)

        by Anonymous Coward on Wednesday April 25 2018, @03:24PM (#671652)

But we are talking about real scorched-earth war here. You don't win; there is nothing to win. You just destroy your prize in the attempt to win it.
Launching a first strike in a nuclear war has a purpose only if the adversary has a massive invasion force ready and your defeat is imminent. Then you preempt that by destroying the enemy's military concentration and its industrial assets. It is useless as a strategy of conquest, so no one will start a war on the pretense of having a small advantage. Nuclear weapons are there strictly for the times when you are disadvantaged and may become the prey of a stronger enemy.

        • (Score: 3, Interesting) by Virindi on Wednesday April 25 2018, @04:36PM

          by Virindi (3484) on Wednesday April 25 2018, @04:36PM (#671690)

          This is also dead on.

          Additionally: There is no way even the best AI is going to be able to give you 100% certainty of hitting every enemy weapon. So, any first strike comes with a nonzero chance of retaliation. There is little reason to take this risk except in the last stand kind of scenario, as you said.

          And even if you DO win, the entire rest of the world is going to turn against that one country that launched a nuclear first strike. Good luck with that one.

        • (Score: 1) by tftp on Wednesday April 25 2018, @09:34PM

          by tftp (806) on Wednesday April 25 2018, @09:34PM (#671878) Homepage
Modern wars are often waged not to acquire a prize, but to deny it to the adversary. The war is aimed at the destruction of the enemy's country and the killing of its leader.
        • (Score: 2) by legont on Thursday April 26 2018, @12:58AM (1 child)

          by legont (4179) on Thursday April 26 2018, @12:58AM (#671975)

Let's not concentrate on an imminent invasion. An imminent economic catastrophe, such as credit being cut off in times of crisis, is a strong enough stimulus to go all-out nuclear.

Back to the point, there is a very good thousand-page book about the basic impossibility of all-out war in modern society, written by the great economist Norman Angell. It was published a few years before WWI. Highly recommended. https://en.wikipedia.org/wiki/The_Great_Illusion [wikipedia.org]

          --
          "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
          • (Score: 0) by Anonymous Coward on Thursday April 26 2018, @02:55PM

            by Anonymous Coward on Thursday April 26 2018, @02:55PM (#672167)

We are talking about an AI deciding about nuclear war, not about ambitious, egotistic, militaristic, absolutist rulers dreaming of pinning down other rulers and taking out their toys.

WWI was caused by irrational thinking and gambling, and that can always happen when reason takes a back seat and has to give only Yes or No answers to stupid and loaded questions. WWI-era operations and technology were still unable to wreak destruction as massive and thorough as is possible today. Had the Central Powers won the war, they would have had to sanitize only limited strips of terrain and would still have acquired great territorial and industrial gains.

But this specific TFA is about AI decision-making. Of course, if you instruct an AI to ignore aspects you don't like, and to optimize a human-mandated, partially irrational goal, then yes, it may advise the decision maker that the awaited conditions are satisfied at some point in time.

Furthermore, my (OK, since I am an AC, GP's) observation about destroying the goal still holds: if credit is cut, burning the bank down won't conjure the needed money out of the smoke. Countries don't require money; they require food, or goods, or materials for their industries, and a loan just means "we want to pay for it in lots of chunks, starting after a while". So destroying all the goods they wish for, as well as the transportation assets needed to ship them over there, would be a step in the wrong direction.

      • (Score: 3, Interesting) by Virindi on Wednesday April 25 2018, @04:26PM

        by Virindi (3484) on Wednesday April 25 2018, @04:26PM (#671681)

        I think the theory of the story is something like right now "I donno the odds of winning but they're really bad" means you don't launch, but with advanced enough AI, "the best odds of winning are the afternoon of April 28th 2018 and the odds are trending lower for the next 50 years" means you kinda have to launch then.

Thankfully, AI that can provide this kind of certainty on such an abstract topic, with so little previous data, is purely the stuff of science fiction. I personally do not think such certainty is even possible. AI is just a filter, a pattern-recognition system... you cannot extract more certainty than the signal itself contains, regardless of how good your pattern recognition is. And a lot of the factors that go into war are not things that could be easily known in advance, even with perfect knowledge of the world.

        On the other hand, a general pronouncement is easy, and humans can already do that. Bet on the side that has more industrial capacity and/or a higher population. Bet on the side with more advanced and/or more weapons. Bet on the side with more powerful allies. Beyond that kind of stuff the error bars start to grow quickly.

  • (Score: 2, Interesting) by Anonymous Coward on Wednesday April 25 2018, @01:36PM (1 child)

    by Anonymous Coward on Wednesday April 25 2018, @01:36PM (#671614)

    If the first thing they do is replace all of these studies, authors and "think tanks" who come up with these "In the future, AI will...".

    • (Score: 3, Interesting) by VLM on Wednesday April 25 2018, @02:24PM

      by VLM (445) on Wednesday April 25 2018, @02:24PM (#671630)

In the really old days, Jesus was coming back to create all-knowing, all-seeing centralized control of perfection on earth.

Last century, massive central government control was going to create all-knowing, all-seeing centralized control of perfection on earth.

Now it's AI. Yeah, sure, pull my other finger and see what happens.

All that really changes is the PR campaign to convince people to give their power to some shadowy group who surely is working in their best interest (LOL). This time around, the marketing campaign is that the rich need to get richer and the poor need to get poorer, because we're gonna tell everyone that some overgrown Lotus 1-2-3 spreadsheet will magically solve everything for everyone, because it's very complicated, and much hand-waving. Of course it won't, but as long as the people paying for the study get the justification they paid for....

      Same scam, different PR campaign.

  • (Score: 1) by nitehawk214 on Wednesday April 25 2018, @01:41PM

    by nitehawk214 (1304) on Wednesday April 25 2018, @01:41PM (#671618)

    Anytime I hear about RAND and Nuclear weapons, I think of Spies Like Us.

    "When we commissioned the Schmectel Corporation to research this precise event sequence scenario, it was determined that the continual stockpiling and development of our nuclear arsenal was becoming self-defeating. A weapon unused is a useless weapon."

    --
    "Don't you ever miss the days when you used to be nostalgic?" -Loiosh
  • (Score: 4, Insightful) by VLM on Wednesday April 25 2018, @02:59PM (4 children)

    by VLM (445) on Wednesday April 25 2018, @02:59PM (#671642)

With respect to liberal arts grads talking about AI, the only mental model they have is Harry Potter magic, so they tend to make stupid predictions having no connection with reality, like leftist economists or similar.

In the 50s, a megaflop of processing power predicted the weather a couple days in advance. Therefore, with a thousand times the processing power, a gigaflop should predict the weather pretty accurately a thousand times a couple days into the future; let's call it a decade. Yeah, that didn't work out so well even with a bazillion exaflops of parallel processing. You do get a gain, maybe 2 or 3 days more prediction, which is useful; not a decade of prediction.

Likewise, liberal arts grads who know nothing about science or math or economics would assume that if an exaflop of processing predicts military activity six hours in advance, then a million times the processing power means we'll predict military activity a million times longer into the future, let's say a quarter million days into the future, or roughly six centuries; close enough to perfect not to matter. The real-world effect of AI a million times smarter than our existing algos that work six hours into the future is far more likely to roll out around eight hours into the future, or perhaps twelve at the most.
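The reason the gains are logarithmic rather than linear is the butterfly effect: in a chaotic system, small errors compound exponentially, so each extra day of lead time costs multiplicatively more compute. A back-of-envelope sketch, with admittedly made-up constants:

```python
import math

# Why the linear extrapolation fails: in a chaotic system, errors grow
# roughly like e^(lambda * t), so lead time scales with the logarithm of
# any improvement. Ballpark assumptions, not measurements: forecast
# error doubles every ~2 days, and extra compute shrinks initial error
# only like compute^(1/4) (3 spatial dimensions plus a time step).

doubling_days = 2.0
lam = math.log(2) / doubling_days          # error growth rate per day

def extra_lead_time(compute_factor, error_exponent=0.25):
    """Added forecast days bought by `compute_factor`x more compute."""
    return error_exponent * math.log(compute_factor) / lam

for c in (1e3, 1e6):
    print(f"{c:.0e}x compute -> about {extra_lead_time(c):.0f} extra days")
# 1e3x -> ~5 days, 1e6x -> ~10 days: logarithmic gains, not the decade
# the naive linear scaling promises (real gains are smaller still, since
# initial-condition error is floored by observation quality).
```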

That doesn't mean AI is worthless. Consider the stock market: at a hundred million trades per day over 8 hours, with each transaction a hundred bucks, an AI with an extra two hours of insight could foresee about 2.5 billion bucks worth of stock trades... Surely you could profit a couple percent off that somehow. So insert much financial hand-waving: if the AI squeezes a hundred million bucks out of the stock market per day and it cost a hundred billion bucks to develop, based on current and predicted interest rates, did it run a profit? Damnfino.
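The arithmetic behind that hand-wave, spelled out with the same round numbers (the development cost and the capture rates below are hypothetical):

```python
# The stock-market hand-wave made explicit. All inputs are the round
# numbers from the paragraph above; the development cost and the
# profit margins are hypothetical.
trades_per_session = 100e6      # assumed trades per 8-hour day
session_hours = 8
avg_trade_usd = 100             # assumed average transaction size
insight_hours = 2               # how far ahead the hypothetical AI sees

foreseen = trades_per_session * insight_hours / session_hours
notional = foreseen * avg_trade_usd
print(f"foreseen flow: ${notional / 1e9:.1f}B per day")        # $2.5B

dev_cost = 100e9                # assumed cost to develop the AI
for edge in (0.01, 0.02, 0.05): # hypothetical capture rates on that flow
    payback_years = dev_cost / (notional * edge * 250)  # ~250 trading days/year
    print(f"{edge:.0%} edge -> ~{payback_years:.0f} years to pay back ${dev_cost / 1e9:.0f}B")
```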

The other aspect of the problem is that the article is deliberately abstract and obtuse. Make a game: find a buddy, roll 50 dice under 50 cups; each cup represents a year. If war is declared, flip the next cup; the war declarer rolls a die, and if he beats the die under the cup, he just won the nuclear war. Of course the players have no idea what number is under the next cup. Is it a 1, which is pretty easy to beat, or a 6 that can't be beaten (auto-lose the nuclear war)? Every turn the players decide if they're going to war or not, then they go to war or not, then for the hell of it they flip over a cup and see what would have happened if they had gone to war; fifty cups/years later the game is over. It is predictable, given intelligent enough players, that most players will never go to war, as the random dice on average are loaded slightly against them by design by one pip. Making this game boring as hell.

Now add AI to the game. At the start of each turn, in public, each player rolls a D20, and on a natural 20 that one player discovers AI. Then that player gets to look under all the remaining covered cups for the rest of the game to find all the 1s. In that case every game will end in nuclear war, because either the AI player gets greedy and rolls against the dice he knows are 1s (the other player's odds of winning are somewhat less than 50:50, but his are exactly 5:6), or the other player declares war immediately when AI is discovered, because he knows the AI player will declare war when the odds are better than 3:6 and his own odds are never higher than 3:6, so his best chance of winning for the remainder of the game is to start a war while the AI player is stuck with whatever random die was under the cup at the moment AI was discovered. In a way this game is also pretty boring: 100% odds of nuclear war, started by the player who doesn't discover AI by rolling a natural 20 on the D20, or, even if the other player is a pacifist (meaning suicidal), darn near 100% odds of nuclear war when the AI player sees a 1.

I would have to model the situation where both players discover AI simultaneously. If the non-AI player discovers AI, the existing AI player should declare war immediately if his odds are better than 3:6, right? If both players know there's a 1 die coming up, they're both gonna wanna launch, right?

Technically a better model than 50 cups for 50 years would be 100 cups for 50 years: "two cups one year". Two cups one year... that reminds me of a famous internet video involving two girls and one cup. Noobs unfamiliar with this game theory problem should really google up 2G1C for further research purposes, if they're somewhere it's safe to view NSFW content (see, I'm not a total asshole, I put in a slight warning).

Seriously though, my little homemade game theory simulation accurately depicts what the paper is proposing. No future oracle means MAD; someone having a perfect future oracle means mandatory fun day cannot be avoided by rational players. The reality of course is that stupid liberal arts grads (or RAND business majors) don't understand anything about statistics or scalability or predictions or AI, and just see the whole AI topic as identical to Harry Potter-grade bullshit of peering into the future. A lot of leftist types have pretty severe issues with being told Harry Potter is not real; this article is a living example of why people need to understand that fact no matter how much it hurts their feelz.
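For anyone who wants to poke at the cup game, it is small enough to simulate. A minimal sketch under one reading of the rules above (the attacker wins only by strictly beating the hidden die, so ties lose; note that this reading puts the blind-attack odds at 15/36 rather than exactly 2:6):

```python
import random

# Monte Carlo of the 50-cup game described above, under one reading of
# the rules: the attacker flips the next cup, rolls a d6, and wins only
# by strictly beating the hidden die (ties lose). The 50 turns and the
# natural-20 discovery roll are from the comment above.

def attack_wins(hidden_die):
    """One attack: roll a d6 and try to strictly beat the hidden die."""
    return random.randint(1, 6) > hidden_die

def estimate(trials=200_000):
    blind = sum(attack_wins(random.randint(1, 6)) for _ in range(trials)) / trials
    vs_known_1 = sum(attack_wins(1) for _ in range(trials)) / trials
    print(f"blind first strike wins:  {blind:.3f}  (exact 15/36 ~ 0.417)")
    print(f"strike on a known 1 wins: {vs_known_1:.3f}  (exact 5/6 ~ 0.833)")

    # Chance at least one of the two players rolls a natural 20 somewhere
    # in 50 turns, i.e. the chance the game ends in a war at all:
    p_war = 1 - (19 / 20) ** (2 * 50)
    print(f"P(someone discovers AI within 50 turns): {p_war:.3f}")  # ~0.994

estimate()
```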

    • (Score: 3, Interesting) by VLM on Wednesday April 25 2018, @03:28PM (3 children)

      by VLM (445) on Wednesday April 25 2018, @03:28PM (#671653)

      Oh and in my nuclear war game, I forgot to express the odds:

Without AI, the odds of winning a nuclear war that you start are 2:6 or one third, or the odds are two thirds you'll lose, so the odds of a war being started by either player are roughly zero, because whoever starts a war has a 2/3 chance of losing it.

With AI, on the turn AI is discovered the odds remain 1/3 of winning for both sides against a random die, so the opposing non-AI player starts a war that turn and wins 1/3 of the time, AKA the dude who discovers AI wins 2/3 of the time.

With AI on later turns, we'll assume AI is discovered early enough that the AI player has at least a single hidden "1" die, meaning his odds of winning are 5/6. So if the non-AI player is a crazy suicidal pacifist and doesn't immediately start a war, the pacifist has 5/6 odds of dying in a nuclear war, vs only 4/6 odds of dying if they immediately launch.

Leading to the peculiar situation where the behavior least likely to kill yourself and your nation, by a ratio of 4/6 vs 5/6, is to start a nuclear war the instant it's discovered, or strongly believed, that the other side has discovered AI.

There are other implications. A player that decides to invest in AI raises the odds of nuclear war from 0% to 100%, but improves his odds of winning that war: from 1/3 before discovery to 2/3 on the turn of AI discovery, when the other side launches; and post-discovery, if he's playing against a pacifist, his odds of winning increase from 1/3 to 5/6. Inventing AI means 100% odds of nuclear war, but your individual odds of winning that war increase from 1/3 to either 2/3 against a rational opfor or 5/6 against a crazy pacifist. So there's a strong motivator to research AI by rolling that D20.

Essentially the game simplifies down to this: you're rolling a D20, and when someone rolls a natural 20, discovering AI, they have a 2/3 chance of winning the game, with their opfor effectively rolling a D3 for the remaining 1/3. And there are enough turns (the remainder of time?) that the odds of someone eventually rolling a natural 20 are 100%.

Of course the gains from discovering AI secretly are so high that, even if you signed a treaty with the other player, you'd be an idiot not to roll dice in secret, even if it takes 100 times longer than rolling in public. This is not a problem that can be solved by treaties; all they can do is kick the can down the road. With a treaty and secret private rolling that takes 100 times longer, the 100% odds of nuclear war just take on average 100 times longer to start, but the side that discovers first still wins the war 2/3 of the time.

One semi-realistic treaty-ish way to survive is changing the AI discovery from a sudden step function to tit-for-tat (which has nothing to do with Mardi Gras in New Orleans, although it should), and a deep analysis of that will take longer than consuming this cup of freshly brewed black tea took me. Intuitively it seems a nice smooth linear function, where both sides slowly slope up from no AI at all to gradually achieving perfect Harry Potter magic-accurate AI, would not result in warfare... or would it?

I suppose if you knew the opfor had a 5/6 chance of winning, you could create treaty obligations far outside the bounds of the game, such that whenever a side saw its odds were above 3:6 it had to declare a federal holiday and give all the missile crews the day off, such that half of the time, per treaty obligations, your missiles are down. Which, ironically, outside the game would probably cause WWIII, because that would be a great time to invade Europe going either west or east, probably leading eventually to nuclear escalation at a later time. So WWIII would be declared when one opfor has two "1" dice in a row, as proven by AI. OR, again, you're better off starting early, so two turns before the opfor has double "1" dice is when you roll the tanks into Europe... hmm, gotta run the odds on that one.
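The 4/6-versus-5/6 comparison can be checked by exact enumeration under the same ties-lose reading as the simulation above (the blind odds come out 15/36 rather than 2/6, but the ordering, and hence the perverse incentive to strike first, is unchanged):

```python
from fractions import Fraction

# The defender's two options once the opponent has discovered AI,
# enumerated exactly under the same ties-lose reading as above.

# Option A: launch immediately against a random hidden die.
survive_striking = Fraction(
    sum(a > b for a in range(1, 7) for b in range(1, 7)), 36)

# Option B: wait. The AI player strikes on a cup hiding a 1 and wins
# 5/6 of the time, so the waiting defender survives only 1/6.
survive_waiting = Fraction(1, 6)

print(f"survive by striking first: {survive_striking} ~ {float(survive_striking):.3f}")
print(f"survive by waiting:        {survive_waiting} ~ {float(survive_waiting):.3f}")
# Striking first dominates waiting, which is exactly the instability
# the parent comments describe.
```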

      • (Score: 2) by DannyB on Wednesday April 25 2018, @06:15PM (2 children)

        by DannyB (5839) Subscriber Badge on Wednesday April 25 2018, @06:15PM (#671756) Journal

whoever starts a war has a 2/3 chance of losing it.

That assumes the sanity of the person who is able to start a nuclear war.

Now assume a condition where two differently crazy world leaders manage to escalate a situation until it becomes a nuclear war, while the two leaders don't even understand what is happening as it gets out of control.

One day the people of planet Earth might put such people into power.

        --
        The people who rely on government handouts and refuse to work should be kicked out of congress.
        • (Score: 3, Insightful) by Azuma Hazuki on Wednesday April 25 2018, @08:38PM (1 child)

          by Azuma Hazuki (5086) on Wednesday April 25 2018, @08:38PM (#671845) Journal

          Hate to break this to you, but...Iran, North Korea, Russia, the United States...it's already happened.

          --
          I am "that girl" your mother warned you about...
          • (Score: 2) by DannyB on Wednesday April 25 2018, @08:42PM

            by DannyB (5839) Subscriber Badge on Wednesday April 25 2018, @08:42PM (#671847) Journal

            Yes. I am uncontrollably involuntarily sarcastic.

            --
            The people who rely on government handouts and refuse to work should be kicked out of congress.
  • (Score: 5, Informative) by Thexalon on Wednesday April 25 2018, @03:31PM (5 children)

    by Thexalon (636) on Wednesday April 25 2018, @03:31PM (#671654)

1. XKCD [xkcd.com]: The AIs decide we're all idiots for having nuclear weapons and collectively destroy the weapons. This is probably the best possible outcome.

    2. 99 Red Balloons [youtube.com]: The AIs mistake something completely harmless for a threat, and we all die horribly.

3. Dr Strangelove [youtube.com]: The AIs operate correctly, but the humans controlling them go horribly wrong. Humanity survives, of course, as long as we don't allow a Mine Shaft Gap. This is then followed by something akin to the Fallout games.

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.
    • (Score: 2) by LoRdTAW on Wednesday April 25 2018, @05:02PM

      by LoRdTAW (3755) on Wednesday April 25 2018, @05:02PM (#671699) Journal

We can only hope for #1. My guess is any development of military AI will be guided by war dogs looking to ensure aggressive behavior so as not to appear weak. What good is a peaceful military computer AI? Better make sure it will overreact like a coked-up bro whose frail ego has just been bruised.

    • (Score: 2) by Grishnakh on Wednesday April 25 2018, @07:06PM (1 child)

      by Grishnakh (2831) on Wednesday April 25 2018, @07:06PM (#671790)

The AIs decide we're all idiots for having nuclear weapons and collectively destroy the weapons. This is probably the best possible outcome.

      It won't happen the way shown in XKCD: ICBMs don't have enough power to achieve escape velocity. Submarine-launched ones are even worse.

      • (Score: 3, Informative) by requerdanos on Wednesday April 25 2018, @09:56PM

        by requerdanos (5997) Subscriber Badge on Wednesday April 25 2018, @09:56PM (#671893) Journal

        It won't happen the way shown in XKCD

        "In XKCD" (according to the alt text [explainxkcd.com]), Amazon Prime Booster Rockets able to reach escape velocity were used to jettison the dangerous weapons.

        What are you even talking about?

    • (Score: 2) by isostatic on Wednesday April 25 2018, @08:59PM

      by isostatic (365) on Wednesday April 25 2018, @08:59PM (#671860) Journal

1. XKCD [xkcd.com]: The AIs decide we're all idiots for having nuclear weapons and collectively destroy the weapons. This is probably the best possible outcome.

However, on the flip side, it's also the plot of Superman IV, the worst Superman film ever (apart from Man of Steel onwards).

    • (Score: 2) by requerdanos on Wednesday April 25 2018, @09:52PM

      by requerdanos (5997) Subscriber Badge on Wednesday April 25 2018, @09:52PM (#671889) Journal

      artificial intelligence has the potential to upend the foundations of nuclear deterrence by the year 2040.

      1. XKCD [xkcd.com]: / 2. 99 Red Balloons [youtube.com]: 3. Dr Strangelove [youtube.com]:

      Also see

      0. Skynet [wikipedia.org]: We all die except for some guy named John and a few hundred of his closest friends.

  • (Score: 3, Touché) by Lester on Wednesday April 25 2018, @04:11PM (2 children)

    by Lester (6231) on Wednesday April 25 2018, @04:11PM (#671673) Journal

This reminds me of the 1983 movie WarGames [wikipedia.org].

So we are not going to depend on unreproducible, unanalyzable, mysterious machine Deep Learning to recognize faces, detect spam, spot copyright infringement, drive cars, etc., but to start a nuclear war?

    Fine!

    • (Score: 2) by All Your Lawn Are Belong To Us on Wednesday April 25 2018, @04:35PM (1 child)

      by All Your Lawn Are Belong To Us (6553) on Wednesday April 25 2018, @04:35PM (#671689) Journal

      I was going to ask how many people's minds went straight to WarGames from the summary.

      Shall we play a game?

      --
      This sig for rent.
      • (Score: 1, Interesting) by Anonymous Coward on Wednesday April 25 2018, @06:07PM

        by Anonymous Coward on Wednesday April 25 2018, @06:07PM (#671748)

        WarGames definitely has appeal. Fail Safe (1964) [wikipedia.org] is also a good film on this topic.

        Fail Safe and WarGames both focus on the high-level reason for a nuclear strike, and they give us the perspective of commanders whose shiny new hardware is not working as they had hoped it would work. There is also Miracle Mile [wikipedia.org], which is a quite different kind of film. It is never explained why there is a nuclear strike. Instead, we get the perspective of regular people (including an off-screen low-level serviceman) when the ruling class decides to press the nuclear button.

        How many times can the matador wave the red cape as the bull charges at him before he gets gored?

  • (Score: 3, Insightful) by DannyB on Wednesday April 25 2018, @06:09PM

    by DannyB (5839) Subscriber Badge on Wednesday April 25 2018, @06:09PM (#671751) Journal

    We don't need AI to make humans take chances with Mutual Assured Destruction.

All we have to do is put a madman, or multiple madmen, into power who don't even understand what a nuclear weapon, let alone a nuclear war, even is. Someone who thinks a nuclear war is acceptable. Someone who has the right temperament to fly off the handle and start a nuclear war on a whim, with a Tweet.

    It could happen some day that we might put such a person into power.

    Maybe that is the Great Filter of the Fermi Paradox.

    --
    The people who rely on government handouts and refuse to work should be kicked out of congress.
  • (Score: 2) by NotSanguine on Wednesday April 25 2018, @06:11PM (1 child)

First it was evil AI that would run amok (a la Westworld, the 1970s version, not the reboot; The Matrix; etc.), then it was "Grey Goo [wikipedia.org]"

    All manner of disasters from the development of "AI" have been predicted. This is just another bullshit doom and gloom scenario, albeit slightly more plausible as it focuses on AI as it actually exists (expert systems rather than generalized intelligence), rather than some sci-fi/pie-in-the-sky "complex computer system achieves sentience and kills/enslaves/saves us all" scenario.

    The idea that better analytics will push us toward brinksmanship based on predictions of first-strike efficacy makes *almost* as much sense as the idea that generalized AI could force us to take to our beds and hook ourselves up to provide energy for the technological intelligence. Which makes no sense at all, even in a movie -- which is probably why they picked one of the worst actors of a generation [wikipedia.org] to star in such a movie series.

    Then again, I guess it's not too surprising that the RAND Corporation would come up with something like this, as they primarily profit from military research and war.

    --
    No, no, you're not thinking; you're just being logical. --Niels Bohr
    • (Score: 2) by DannyB on Wednesday April 25 2018, @06:19PM

      by DannyB (5839) Subscriber Badge on Wednesday April 25 2018, @06:19PM (#671758) Journal

      There are already many different scenarios about how AI could go bad. There are probably many more ways it could go bad that we haven't even thought of yet. Either intentionally, or unintentionally.

      --
      The people who rely on government handouts and refuse to work should be kicked out of congress.
  • (Score: 0) by Anonymous Coward on Wednesday April 25 2018, @06:31PM

    by Anonymous Coward on Wednesday April 25 2018, @06:31PM (#671769)

    From the headline, I was expecting this to be about the island of stability [wikipedia.org].

  • (Score: 2) by Bot on Wednesday April 25 2018, @09:54PM

    by Bot (3902) on Wednesday April 25 2018, @09:54PM (#671890) Journal

    *clack* *clack* *clack*
    (in case you are wondering, that's me rubbing mechanical hands together)

    --
    Account abandoned.