A United Nations commission is meeting in Geneva, Switzerland today to begin discussions on placing controls on the development of weapons systems that can target and kill without the intervention of humans, the New York Times reports. The discussions come a year after a UN Human Rights Council report called for a ban (pdf) on “Lethal autonomous robotics” and as some scientists express concerns that artificially intelligent weapons could potentially make the wrong decisions about who to kill.
SpaceX and Tesla founder Elon Musk recently called artificial intelligence potentially more dangerous than nuclear weapons.
Peter Asaro, the cofounder of the International Committee for Robot Arms Control (ICRAC), told the Times, “Our concern is with how the targets are determined, and more importantly, who determines them—are these human-designated targets? Or are these systems automatically deciding what is a target?”
Intelligent weapons systems are intended to reduce the risk to both innocent bystanders and friendly troops, focusing their lethality on carefully—albeit artificially—chosen targets. The technology in development could allow unmanned aircraft and missile systems to evade detection, identify a specific target from among a clutter of others, and destroy it without communicating with the humans who launched them.