posted by azrael on Thursday November 13 2014, @08:26AM
from the reverse-polarity dept.

A United Nations commission is meeting in Geneva, Switzerland, today to begin discussions on placing controls on the development of weapons systems that can target and kill without human intervention, the New York Times reports. The discussions come a year after a UN Human Rights Council report called for a ban (pdf) on “Lethal autonomous robotics” and as some scientists express concerns that artificially intelligent weapons could make the wrong decisions about whom to kill.

SpaceX and Tesla founder Elon Musk recently called artificial intelligence potentially more dangerous than nuclear weapons.

Peter Asaro, the cofounder of the International Committee for Robot Arms Control (ICRAC), told the Times, “Our concern is with how the targets are determined, and more importantly, who determines them—are these human-designated targets? Or are these systems automatically deciding what is a target?”

Intelligent weapons systems are intended to reduce the risk to both innocent bystanders and friendly troops, focusing their lethality on carefully—albeit artificially—chosen targets. The technology in development now could allow unmanned aircraft and missile systems to avoid and evade detection, identify a specific target from among a clutter of others, and destroy it without communicating with the humans who launched them.

 
  • (Score: 2) by Thexalon (636) on Thursday November 13 2014, @02:40PM (#115558)

    The problem in a nutshell is that a lot of militaries, including the supposed good guys, would love to have one of these:

    DeSadeski: "The doomsday machine. A device which will destroy all human and animal life on earth. It is not a thing a sane man would do. The doomsday machine is designed to trigger itself automatically. It is designed to explode if any attempt is ever made to untrigger it."
    Muffley: "Automatically? ... But, how is it possible for this thing to be triggered automatically, and at the same time impossible to untrigger?"
    Strangelove: "Mr. President, it is not only possible, it is essential. That is the whole idea of this machine, you know. Deterrence is the art of producing in the mind of the enemy... the fear to attack. And so, because of the automated and irrevocable decision making process which rules out human meddling, the doomsday machine is terrifying. It's simple to understand. And completely credible, and convincing."

    I can also easily imagine military types wanting something like ED-209.

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.