John Markoff writes in the NYT on a new report by a former Pentagon official who helped establish United States policy on autonomous weapons. The author argues that autonomous weapons could be uncontrollable in real-world environments, where they are subject to design failure as well as hacking, spoofing, and manipulation by adversaries. The report contrasts these completely automated systems, which can target and kill without human intervention, with weapons that keep humans "in the loop" in the process of selecting and engaging targets. "Anyone who has ever been frustrated with an automated telephone call support helpline, an alarm clock mistakenly set to 'p.m.' instead of 'a.m.,' or any of the countless frustrations that come with interacting with computers, has experienced the problem of 'brittleness' that plagues automated systems," Mr. Scharre writes.
The United States military does not have advanced autonomous weapons in its arsenal. However, this year the Defense Department requested almost $1 billion to manufacture Lockheed Martin's Long Range Anti-Ship Missile, which is described as a "semiautonomous" weapon. The missile is controversial because, although a human operator will initially select a target, it is designed to fly for several hundred miles while out of contact with the controller and then automatically identify and attack an enemy ship. As an alternative to completely autonomous weapons, the report advocates what it describes as "Centaur Warfighting." The term "centaur" has recently come to describe systems that tightly integrate humans and computers. Human-machine combat teaming takes a page from the field of "centaur chess," in which humans and machines play cooperatively on the same team. "Having a person in the loop is not enough," says Scharre. "They can't be just a cog in the loop. The human has to be actively engaged."
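Purely to make the report's distinction concrete, here is a minimal sketch, in Python, of the difference between a fully autonomous engage decision and a "centaur" decision that requires an actively engaged human. Every class, function, and threshold here is a hypothetical invention for illustration; nothing below models any real weapon system or the report's actual proposals.

```python
# Hypothetical sketch: autonomous vs. human-in-the-loop engagement decisions.
# All names and thresholds are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    classification: str  # assumed output of a (brittle) automated classifier
    confidence: float    # classifier confidence, 0.0 to 1.0

def autonomous_engage(track: Track) -> bool:
    """Fully autonomous: the machine both selects and engages.
    A spoofed or misclassified track gets attacked with no human check."""
    return track.confidence > 0.9

def centaur_engage(track: Track, human_decision: str) -> bool:
    """'Centaur' teaming: the machine proposes, a human actively decides.
    A missing or default answer counts as 'no' -- the human is a decision
    maker, not just a cog rubber-stamping whatever the classifier emits."""
    machine_proposes = track.confidence > 0.9
    human_approves = human_decision.strip().lower() == "engage"
    return machine_proposes and human_approves

if __name__ == "__main__":
    spoofed = Track("T-042", "enemy ship", 0.97)  # adversary-manipulated track
    print(autonomous_engage(spoofed))   # True: brittle automation fires anyway
    print(centaur_engage(spoofed, ""))  # False: no active human decision, no shot
```

The point of the second function is Scharre's: merely having a human somewhere in the pipeline isn't enough if the default path engages without their active, informed choice.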
(Score: 2) by VanderDecken on Tuesday March 01 2016, @04:00AM
Pfft. Wrong feedback loop for combat operations. If for no other reason than that, at any given time, 7 of your 9 with "passkeys" may already be dead. So much for your weapon system.
One person is the operator. For significant targets or weapons of greater destructive power, you have at least one other person (sometimes an officer, depending on the circumstances) validating the target and the decision to engage, but it's a check protocol, not something technologically enforced. Otherwise the operator Is It.
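Since the check described above is procedural rather than enforced in software, here is a hypothetical sketch of what a technologically enforced version of that two-person rule might look like. The roles, names, and grid references are all assumptions for illustration, not a description of any fielded system.

```python
# Hypothetical two-person release gate. Illustrative only: the real-world
# check described above is a human protocol, not code like this.

from typing import Optional

class ReleaseDenied(Exception):
    pass

def authorize_release(target: str, yield_class: str,
                      operator_confirms: bool,
                      validator_confirms: Optional[bool]) -> None:
    """The operator alone suffices for routine engagements; anything
    'significant' additionally requires an independent validator."""
    if not operator_confirms:
        raise ReleaseDenied(f"operator did not confirm {target}")
    if yield_class == "significant" and validator_confirms is not True:
        raise ReleaseDenied(f"no independent validation for {target}")
    print(f"release authorized: {target} ({yield_class})")

# The operator Is It for routine targets...
authorize_release("grid ref A (hypothetical)", "routine",
                  operator_confirms=True, validator_confirms=None)

# ...but a significant target without a second sign-off is refused.
try:
    authorize_release("grid ref B (hypothetical)", "significant",
                      operator_confirms=True, validator_confirms=None)
except ReleaseDenied as err:
    print("denied:", err)
```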
The feedback loop is simple. If the killing is justified, that's warfare. If it's unjustified, it's murder, and we have military courts to deal with it. Take the human out of the loop, and you no longer have the feedback loop as a check.
For brevity, I'm not going into what constitutes justified vs. unjustified, but anyone who has studied military law should have a good feel for the difference.
I am a software architect and was once a soldier. I *want* humans in the loop. Fully autonomous weapon systems scare the shit out of me.
The two most common elements in the universe are hydrogen and stupidity.