
SoylentNews is people

posted by takyon on Tuesday March 01 2016, @12:45AM
from the half-artificial-half-intelligent dept.

John Markoff writes in the NYT about a new report by Paul Scharre, a former Pentagon official who helped establish United States policy on autonomous weapons. Scharre argues that autonomous weapons could be uncontrollable in real-world environments, where they are subject to design failure as well as hacking, spoofing and manipulation by adversaries. The report contrasts these fully automated systems, which can target and kill without human intervention, with weapons that keep humans "in the loop" in the process of selecting and engaging targets. "Anyone who has ever been frustrated with an automated telephone call support helpline, an alarm clock mistakenly set to 'p.m.' instead of 'a.m.,' or any of the countless frustrations that come with interacting with computers, has experienced the problem of 'brittleness' that plagues automated systems," Mr. Scharre writes.

The United States military does not have advanced autonomous weapons in its arsenal. However, this year the Defense Department requested almost $1 billion to manufacture Lockheed Martin's Long Range Anti-Ship Missile, which is described as a "semiautonomous" weapon. The missile is controversial because, although a human operator will initially select a target, it is designed to fly for several hundred miles while out of contact with the controller and then automatically identify and attack an enemy ship. As an alternative to completely autonomous weapons, the report advocates what it describes as "Centaur Warfighting." The term "centaur" has recently come to describe systems that tightly integrate humans and computers. Human-machine combat teaming takes a page from the field of "centaur chess," in which humans and machines play cooperatively on the same team. "Having a person in the loop is not enough," says Scharre. "They can't be just a cog in the loop. The human has to be actively engaged."
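The "centaur" teaming Scharre advocates — the machine proposes, a human actively decides — can be sketched as a simple decision gate. This is a hypothetical illustration only; every name, threshold, and data structure below is invented for the example and comes from nothing in the report:

```python
# Hypothetical sketch of "centaur" human-machine teaming: the machine
# filters candidate targets, but engagement requires an active human
# decision. All names and thresholds here are invented for illustration.

from dataclasses import dataclass


@dataclass
class Contact:
    track_id: str
    classification: str   # e.g. "enemy ship", "unknown"
    confidence: float     # classifier confidence in [0, 1]


def propose_engagement(contact: Contact, human_approves) -> bool:
    """Machine narrows the candidates; a human actively authorizes.

    `human_approves` is a callback, so the person makes a live decision
    rather than being "just a cog in the loop".
    """
    if contact.classification != "enemy ship":
        return False                   # never propose non-hostile contacts
    if contact.confidence < 0.95:
        return False                   # drop brittle, low-confidence tracks
    return human_approves(contact)     # engage only with explicit consent


# Usage: the automated side alone can never authorize a strike.
contact = Contact("T-17", "enemy ship", 0.97)
engaged = propose_engagement(contact, human_approves=lambda c: False)
```

The design point is that the human sits on the authorization path itself, not on a monitoring channel beside it: removing the callback removes the ability to fire.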


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by GreatAuntAnesthesia (3275) on Tuesday March 01 2016, @02:38PM (#312130) Journal

    I think the real issue is that an AI driving a car only has to detect whether a given solid object is on a collision course with the car, and then attempt to avoid the collision. It doesn't much care whether said solid object is a human, a car, a cow or a fallen tree branch.

    An AI missile, however, not only has to distinguish a boat from a whale, an iceberg, or a big mass of flotsam: it also has to distinguish between a boat full of hostiles, a boat full of refugees, a boat full of friendly combatants, a boat full of hostages with a handful of enemies aboard, and so on, and respond accordingly. This is a much more difficult problem, AI-wise. It is made even more difficult by the fact that in a theatre of war you can be damned sure that your enemies will be doing all they can to confuse and misdirect your AI, something that civilian cars shouldn't have to deal with nearly so often.
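    The commenter's asymmetry can be made concrete with a toy expected-cost calculation. All numbers and names below are invented for illustration: the car needs only one probability (obstacle or not), while the weapon must weigh a full class distribution whose error costs are wildly lopsided — and which an adversary is actively trying to skew.

```python
# Toy illustration (invented numbers) of why target discrimination is
# harder than obstacle avoidance: a car needs one probability, a weapon
# must weigh a class distribution with asymmetric misclassification costs.

def expected_engagement_cost(class_probs: dict, costs: dict) -> float:
    """Expected cost of firing, given P(class) and cost-if-fired per class."""
    return sum(p * costs[cls] for cls, p in class_probs.items())


# The car's problem: anything solid ahead means "brake" -- one number suffices.
p_obstacle = 0.8
brake = p_obstacle > 0.5

# The missile's problem: even a small chance the contact is refugees or
# friendlies dominates the decision, so 90% confidence is nowhere near enough.
probs = {"hostile": 0.90, "refugees": 0.05, "friendly": 0.05}
costs = {"hostile": -1.0, "refugees": 1000.0, "friendly": 1000.0}
cost_if_fired = expected_engagement_cost(probs, costs)  # ~99.1: don't fire
```

    With these made-up costs, the 10% non-hostile probability mass swamps the 90% hostile case — and spoofing only has to nudge the classifier a little to flip the distribution entirely.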
