A United Nations commission is meeting in Geneva, Switzerland today to begin discussions on placing controls on the development of weapons systems that can target and kill without human intervention, the New York Times reports. The discussions come a year after a UN Human Rights Council report called for a ban (pdf) on "lethal autonomous robotics," and as some scientists express concern that artificially intelligent weapons could make the wrong decisions about whom to kill.
SpaceX and Tesla founder Elon Musk recently called artificial intelligence potentially more dangerous than nuclear weapons.
Peter Asaro, the cofounder of the International Committee for Robot Arms Control (ICRAC), told the Times, “Our concern is with how the targets are determined, and more importantly, who determines them—are these human-designated targets? Or are these systems automatically deciding what is a target?”
Intelligent weapons systems are intended to reduce the risk to both innocent bystanders and friendly troops, focusing their lethality on carefully—albeit artificially—chosen targets. The technology in development now could allow unmanned aircraft and missile systems to avoid and evade detection, identify a specific target from among a clutter of others, and destroy it without communicating with the humans who launched them.
We don't yet have the ability to make completely autonomous weapons, but we're close. We're nearly to the point where anyone could mount weaponry on a self-driving car, or on a cute robot doggie. These things will get smarter. Or perhaps some kid's experiment could unleash Grey Goo.
Discussing problems like this is one of the best uses the UN could make of its time. If no one has considered the problems, we could be in for some ugly surprises. Imagine a major power thinking to gain a decisive military advantage by employing weaponized robots against powers that have not developed such capabilities. At least mines just lie in the dirt. Having hordes of autonomous, replicating robots still wandering around and killing after the war is over would force big changes on everyone.
One relevant Star Trek episode: The Doomsday Machine.
Imagine a major power thinking to gain a decisive military advantage by employing weaponized robots against powers that have not developed such capabilities.
OK, I imagined that. I'm liking it an awful lot. What, you expected differently? This would be of tremendous benefit to my country. We wouldn't have so many soldiers dying. We could better deal with the horrible uncivilized places.