John Markoff writes in the NYT about a new report by a former Pentagon official who helped establish United States policy on autonomous weapons. The author argues that autonomous weapons could be uncontrollable in real-world environments, where they are subject to design failure as well as hacking, spoofing, and manipulation by adversaries. The report contrasts these fully automated systems, which can target and kill without human intervention, with weapons that keep humans "in the loop" in the process of selecting and engaging targets. "Anyone who has ever been frustrated with an automated telephone call support helpline, an alarm clock mistakenly set to 'p.m.' instead of 'a.m.,' or any of the countless frustrations that come with interacting with computers, has experienced the problem of 'brittleness' that plagues automated systems," Mr. Scharre writes.
The United States military does not have advanced autonomous weapons in its arsenal. However, this year the Defense Department requested almost $1 billion to manufacture Lockheed Martin's Long Range Anti-Ship Missile, which is described as a "semiautonomous" weapon. The missile is controversial because, although a human operator will initially select a target, it is designed to fly for several hundred miles while out of contact with the controller and then automatically identify and attack an enemy ship. As an alternative to completely autonomous weapons, the report advocates what it describes as "Centaur Warfighting." The term "centaur" has recently come to describe systems that tightly integrate humans and computers. Human-machine combat teaming takes a page from the field of "centaur chess," in which humans and machines play cooperatively on the same team. "Having a person in the loop is not enough," says Scharre. "They can't be just a cog in the loop. The human has to be actively engaged."
(Score: 2) by c0lo on Tuesday March 01 2016, @02:49AM
FTFY... because the road to hell is paved with good intentions.
Ok, let's reduce the extent of the problem: no longer is humanity at stake, but only a couple of thousand lives per incident - e.g. in the context of TFS, the wrong ship being hit from several hundred miles away.
For your convenience, here's the list again:
1. humans cannot be trusted to drive a car, in which the worst that can happen is a dozen lives being lost, but...
2. ... when facing the risk of about 2000 lives being lost we must put our trust in humans.
Does it make more sense?
https://www.youtube.com/watch?v=aoFiw2jMy-0
(Score: 0) by Anonymous Coward on Tuesday March 01 2016, @08:03AM
> A car is not intended to be a weapon in any sort of reasonable sense.
How about a car with a bomb in it? Or with a gamma source in it?
The nice thing about a self-driving car bomb is that nobody has to die when the bomb goes off.
(Score: 2) by GreatAuntAnesthesia on Tuesday March 01 2016, @02:38PM
I think the real issue is that an AI driving a car only has to detect whether that solid object there is on a collision course with the car, and then attempt to avoid the collision. It doesn't much care whether said solid object is a human, a car, a cow or a fallen tree branch.
An AI missile, however, has to not only make the distinction between a boat and a whale, or a boat and an iceberg, or a boat and a big mass of flotsam: It also has to distinguish between a boat full of hostiles, a boat full of refugees, a boat full of friendly combatants, a boat full of hostages with a handful of enemies aboard and so on, and respond accordingly. This is a much more difficult problem, AI-wise. It is made even more difficult by the fact that in a theatre of war you can be damned sure that your enemies will be doing all they can to confuse and misdirect your AI, something that civilian cars shouldn't have to deal with nearly so often.
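The distinction the comment draws - and Scharre's "can't be just a cog in the loop" point from the summary - can be sketched as a confidence-gated decision rule. This is a purely illustrative toy, not anything from the report: the `Contact` type, the labels, and the 0.99 threshold are all assumptions made up for the example; real targeting systems are nothing this simple.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    label: str         # hypothetical classifier output, e.g. "hostile", "refugee", "whale"
    confidence: float  # classifier confidence in [0, 1]

# Assumed policy for the sketch: near-certainty required before any engagement.
ENGAGE_THRESHOLD = 0.99

def decide(contact: Contact) -> str:
    """Toy engagement gate: only a high-confidence 'hostile' classification
    proceeds automatically; every ambiguous or non-hostile contact is
    deferred to a human operator instead of being engaged."""
    if contact.label == "hostile" and contact.confidence >= ENGAGE_THRESHOLD:
        return "engage"
    return "defer-to-human"

print(decide(Contact("hostile", 0.999)))  # engage
print(decide(Contact("hostile", 0.80)))   # defer-to-human
print(decide(Contact("refugee", 0.999)))  # defer-to-human
```

Note how the adversarial point bites here: an enemy spoofing the classifier into a confident "hostile" label defeats the threshold entirely, which is why a gate like this can mitigate misclassification but not deliberate manipulation.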