posted by n1 on Thursday May 15 2014, @04:22AM
from the t800-confirmed-to-be-attending dept.

The U.N. has begun discussion on "lethal autonomous robots": weapons that take the next step beyond today's operator-controlled drones to machines that select and kill their targets entirely on their own.

"Killer robots would threaten the most fundamental of rights and principles in international law," warned Steve Goose, arms division director at Human Rights Watch.

Are we too far down the rabbit hole, or can we come to reasonable and humane limits on this new world of death-by-algorithm?

 
This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Thursday May 15 2014, @02:37PM (#43744)

    "redundant circuits constantly applying the 3 Laws."

    Assuming you're speaking about Asimov's three laws, nobody would program those into a killer robot, at least not unchanged, because they would basically turn the killer robot into a non-killer robot.

    Now if you change the order (and corresponding dependence) of the laws, then you might get something the military would possibly accept:

    1. A military robot has to follow all orders of the owner.
    2. A military robot may not harm humans, unless this would violate the first law.
    3. A military robot has to protect its existence, unless this would violate the first or second law.

    However, I suspect they would demand further reordering to:

    1. A military robot has to follow all orders of the owner.
    2. A military robot has to protect its existence, unless this would violate the first law.
    3. A military robot may not harm humans, unless this would violate the first or second law.

    OK, maybe a bit too dangerous, so make it four laws:

    1. A military robot has to follow all orders of the owner.
    2. A military robot may not harm members of its own military, unless this would violate the first law.
    3. A military robot has to protect its existence, unless this would violate the first or second law.
    4. A military robot may not harm humans, unless this would violate one of the first three laws.

    Add to that a sufficiently wide interpretation of "harm members of its own military", and I think those are rules the military might accept. The public would be told "we implement Asimov's rules, and we even extended them with another protection clause!" And few people would notice just how thoroughly those laws had been subverted.

    (Note in particular how the addition of the extra law subverts the rules, since the prohibition on harming humans, now in fourth place, is conditioned on the new law as well ...)
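
    To make the precedence point concrete, here is a minimal sketch in Python of the laws as an ordered list where the first rule with an opinion decides. Every name in it (Action, permitted, the two law tables) is invented for illustration and not taken from any real robotics system; it just shows how moving "obey the owner" above "do not harm humans" flips the verdict on an ordered strike, which is exactly the dependence described above.

    ```python
    # Illustrative sketch only: every name here is made up for the example.
    from dataclasses import dataclass

    @dataclass
    class Action:
        ordered_by_owner: bool    # the owner commanded this action
        harms_human: bool         # the action would injure a human
        harms_own_military: bool  # the injured human is on "our" side
        risks_self: bool          # the action endangers the robot itself

    # A law is (name, verdict): verdict returns "allow", "forbid", or None
    # (silent). The first non-None verdict wins, which is exactly the
    # "unless this would violate an earlier law" dependence described above.
    ASIMOV_ORDER = [
        ("no harm to humans", lambda a: "forbid" if a.harms_human else None),
        ("obey orders",       lambda a: "allow" if a.ordered_by_owner else None),
        ("self-preservation", lambda a: "forbid" if a.risks_self else None),
    ]

    FOUR_LAW_ORDER = [
        ("obey owner",          lambda a: "allow" if a.ordered_by_owner else None),
        ("no harm to own side", lambda a: "forbid" if a.harms_own_military else None),
        ("self-preservation",   lambda a: "forbid" if a.risks_self else None),
        ("no harm to humans",   lambda a: "forbid" if a.harms_human else None),
    ]

    def permitted(action, laws):
        """Walk the laws in priority order; the first law with an opinion decides."""
        for name, verdict in laws:
            v = verdict(action)
            if v is not None:
                return v == "allow", name
        return True, "no law applies"

    # An ordered lethal strike against an enemy (non-friendly) human target:
    strike = Action(ordered_by_owner=True, harms_human=True,
                    harms_own_military=False, risks_self=False)

    print(permitted(strike, ASIMOV_ORDER))    # -> (False, 'no harm to humans')
    print(permitted(strike, FOUR_LAW_ORDER))  # -> (True, 'obey owner')
    ```

    With the Asimov ordering the strike is vetoed by the very first law; with the reordered four laws the walk never reaches the human-protection rule at all, which is the subversion the parenthetical note points at.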

  • (Score: 2) by khallow (3766) on Thursday May 15 2014, @07:31PM (#43905)

    I think people would have to be pretty stupid, even by the extremely low standards of this thread, not to figure out that fourth place is a lot lower than first place - especially after a few demonstrations of these rules in practice.