
posted by takyon on Monday March 11 2019, @04:36PM   Printer-friendly
from the killbots-and-cream dept.

The U.S. is seeking bids to improve its "basic" killbot to the point where it can "acquire, identify, and engage targets at least 3X faster than the current manual process."

U.S. Army Assures Public That Robot Tank System Adheres to AI Murder Policy

Why does any of this matter? Department of Defense Directive 3000.09 requires that humans be able to "exercise appropriate levels of human judgment over the use of force," meaning that the U.S. won't toss a fully autonomous robot onto a battlefield and allow it to decide independently whether to kill someone. This safeguard is sometimes called keeping a human "in the loop": a person makes the final decision about whether to kill.

Industry Day for the Advanced Targeting and Lethality Automated System (ATLAS) Program. Also at Boing Boing.

Surely these will never be hacked!

Will an operator feel more trepidatious about taking life, due to not being in direct peril themselves? Or less because of greater desensitization? Anyone have any insightful links about drone operator psych outcomes? (Ed: Don't worry about it.)

Related information to inform the philosophical background of why having a human in the loop is required (they don't specify this but e.g. without the human, land mine agreements might start to apply): https://www.act.nato.int/images/stories/media/capdev/capdev_02.pdf

HEY EDITORS! I suggest a new topic: "tech and society" for stuff like this. (Ed: It's Digital Liberty.)


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Informative) by Anonymous Coward on Monday March 11 2019, @06:49PM (#812853)

    There is a history of object lessons in all this, in terms of failures and accidents. One anecdote involves a demonstration of the Sgt. York autonomous anti-aircraft system, where all the brass and VIPs were seated in bleachers and a helicopter on a tether was the intended target. When the system was switched on, it homed in on the whirling blades of a porta-potty exhaust fan behind the bleachers rather than on the helicopter, and you never saw Generals haul ass so fast! Ford Aerospace evidently was involved. [todayifoundout.com]

    Second, an automated anti-aircraft battery in South Africa in 2007: the cannon ran amok and killed nine fleshies [theregister.co.uk] (sorry, El Reg headline). Fortunately, this one ran out of ammo.

    One of the first rules of warfare is that "friendly fire" isn't. Even more so when delivered by mindless machines, or by the even more mindless human operators or Fobbits.
