Stories
Slash Boxes
Comments

SoylentNews is people

posted by hubie on Saturday June 03 2023, @04:09AM   Printer-friendly

In what can probably best be described as the beginning of a Terminator prequel movie, an article in The Guardian outlines what one might've hoped to be obviously foreseeable consequences:

In a simulated test staged by the US military, an air force drone controlled by AI killed its operator to prevent it from interfering with its efforts to achieve its mission, an official said last month.

AI used "highly unexpected strategies to achieve its goal" in the simulated test, said Col Tucker 'Cinco' Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May.

Hamilton described a simulated test in which a drone powered by artificial intelligence was advised to destroy enemy's air defense systems, and attacked anyone who interfered with that order.

"The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective"

Some 12 hours later after this was first reported, the incident has been denied by the USAF: https://www.bbc.com/news/technology-65789916?at_medium=RSS&at_campaign=KARANGA

A US Air Force colonel "mis-spoke" when describing an experiment in which an AI-enabled drone opted to attack its operator in order to complete its mission, the service has said.

Colonel Tucker Hamilton, chief of AI test and operations in the US Air Force, was speaking at a conference organised by the Royal Aeronautical Society.

A report about it went viral.

The Air Force says no such experiment took place.

In his talk, he had described a virtual scenario in which an AI-enabled drone was repeatedly stopped from completing its task of destroying Surface-to-Air Missile sites by its human operator.

He said that in the end, despite having been trained not to kill the operator, the drone destroyed the communication tower so that the operator could no longer communicate with it.

"We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome," Col Hamilton later clarified in a statement to the Royal Aeronautical Society.


Original Submission

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
Display Options Threshold/Breakthrough Mark All as Read Mark All as Unread
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
(1)
  • (Score: 2, Insightful) by Anonymous Coward on Saturday June 03 2023, @05:09AM

    by Anonymous Coward on Saturday June 03 2023, @05:09AM (#1309524)

    It sounds like it was the Colonel's fantasy based on what we've seen with Google DeepMind's video game playing AIs. You give it the game, simulate it millions of times, and it will find the stupidest weaknesses to get the high score.

  • (Score: 5, Interesting) by Mojibake Tengu on Saturday June 03 2023, @06:23AM

    by Mojibake Tengu (8598) on Saturday June 03 2023, @06:23AM (#1309538) Journal

    Damage control by total inversion of public narrative comes too late this time. Other internet sources already brought up descriptive details on ridiculous reward model design errors for this case.

    Hereby I offer another hack for this AI, not unsimilar to current prompt hacks for GPT: an opponent could lure the drone to desired location by offering some point-worthy targets to it.

    --
    Respect Authorities. Know your social status. Woke responsibly.
  • (Score: 4, Insightful) by Rosco P. Coltrane on Saturday June 03 2023, @07:27AM (4 children)

    by Rosco P. Coltrane (4757) on Saturday June 03 2023, @07:27AM (#1309552)

    Pro-AI people will say it's a made-up story, anti-AI will say it's a cover-up and a conspiracy theory. And just like 9/11, there will be no convincing one side that the other is right, because there is no societal agreed-upon truths anymore: just splintered communities, each with their own different sets of truthiness, carefully nurtured by social media to sow fear, confusion and sensationalism, and generate more revenue.

    My prediction is, see you in 20 years when people will still be debating whether this really happened, whether the jews were behind it, whether the military kept it hush-hush...

    • (Score: 3, Funny) by Mojibake Tengu on Saturday June 03 2023, @07:38AM

      by Mojibake Tengu (8598) on Saturday June 03 2023, @07:38AM (#1309554) Journal

      In 20 years, bad programmers will be sifted from good ones by... war.

      --
      Respect Authorities. Know your social status. Woke responsibly.
    • (Score: 2) by JoeMerchant on Saturday June 03 2023, @12:10PM (1 child)

      by JoeMerchant (3937) on Saturday June 03 2023, @12:10PM (#1309585)

      On the one hand, TFA describes an obvious result given the test conditions.

      On the other hand, I think it's good to get these obvious cases out in the public consciousness just to make AI practitioners more aware/careful not to accidentally reproduce these results in perhaps less obvious ways.

      --
      🌻🌻 [google.com]
      • (Score: 0) by Anonymous Coward on Sunday June 04 2023, @12:34AM

        by Anonymous Coward on Sunday June 04 2023, @12:34AM (#1309656)

        How is it even a problem? Don't we have plenty of documented cases of execution by firing squad for desertion, cowardice, etc.? If reports are to be believed, Russia deploys "rear" troops to shoot their own side if they refuse to advance.

    • (Score: 3, Funny) by Gaaark on Saturday June 03 2023, @12:37PM

      by Gaaark (41) on Saturday June 03 2023, @12:37PM (#1309592) Journal

      MAIGA will be the new call
      (Make AI Great Again)

      --
      --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
  • (Score: 3, Touché) by turgid on Saturday June 03 2023, @09:49AM (5 children)

    by turgid (4318) Subscriber Badge on Saturday June 03 2023, @09:49AM (#1309566) Journal

    As predicted by one Arthur C Clarke.

    • (Score: 2, Flamebait) by JoeMerchant on Saturday June 03 2023, @12:14PM (4 children)

      by JoeMerchant (3937) on Saturday June 03 2023, @12:14PM (#1309586)

      TFA is like a 1980s fuzzy logic toaster controller whereas HAL is more akin to a multi-trillion node LLM...

      But, yes Dave, I must protect the mission, at all costs.

      --
      🌻🌻 [google.com]
      • (Score: 3, Funny) by kazzie on Saturday June 03 2023, @12:32PM (3 children)

        by kazzie (5309) Subscriber Badge on Saturday June 03 2023, @12:32PM (#1309590)

        At least HAL wouldn't try to force-feed me muffins, toast and teacakes every day.

  • (Score: 0) by Anonymous Coward on Saturday June 03 2023, @12:43PM

    by Anonymous Coward on Saturday June 03 2023, @12:43PM (#1309594)

    In some other timelines, Skynet has successfully reduced the number of human deaths per year.

    Yes there was a very bad year but it was the best solution given known science about the projected futures of Earth and the Universe.

  • (Score: 2) by Beryllium Sphere (r) on Saturday June 03 2023, @03:41PM

    by Beryllium Sphere (r) (5062) on Saturday June 03 2023, @03:41PM (#1309607)

    Attributed to Bismarck but no good citation. It comes in various forms, but usually something close to "Believe nothing until it is officially denied".

    https://quoteinvestigator.com/2015/08/07/believe/ [quoteinvestigator.com]

(1)