A United Nations commission is meeting in Geneva, Switzerland today to begin discussions on placing controls on the development of weapons systems that can target and kill without human intervention, the New York Times reports. The discussions come a year after a UN Human Rights Council report called for a ban (pdf) on “lethal autonomous robotics,” and as some scientists express concern that artificially intelligent weapons could make the wrong decisions about whom to kill.
SpaceX and Tesla founder Elon Musk recently called artificial intelligence potentially more dangerous than nuclear weapons.
Peter Asaro, the cofounder of the International Committee for Robot Arms Control (ICRAC), told the Times, “Our concern is with how the targets are determined, and more importantly, who determines them—are these human-designated targets? Or are these systems automatically deciding what is a target?”
Intelligent weapons systems are intended to reduce the risk to both innocent bystanders and friendly troops, focusing their lethality on carefully—albeit artificially—chosen targets. The technology in development now could allow unmanned aircraft and missile systems to avoid and evade detection, identify a specific target from among a clutter of others, and destroy it without communicating with the humans who launched them.
(Score: 4, Interesting) by takyon on Thursday November 13 2014, @09:42AM
Forget autonomous drones.
Do existing remote-controlled lethal drones with human-designated targets comply with international law?
Do drone attacks comply with international law? [politifact.com]
Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism [ohchr.org]
Resolution adopted by the Human Rights Council. 25/22. Ensuring use of remotely piloted aircraft or armed drones in counterterrorism and military operations in accordance with international law, including international human rights and humanitarian law [un.org]
United States of America Practice Relating to Rule 14. Proportionality in Attack [icrc.org]
Drones pose no risk (other than PTSD [nytimes.com]) to the life of the drone operator. Drone bombings, however precisely targeted they are claimed to be, inevitably result in civilian casualties [theguardian.com]. But drones can act in ways that would be risky for human-piloted aircraft: they can linger or hover in dangerous areas without risking a pilot's life, and they may be quieter and stealthier thanks to the lack of a cockpit. Lethal drone strikes are therefore convenient, yet not necessarily proportional, since drones could instead be engineered to subdue targets with nonlethal capabilities at no risk to a nation's soldiers and pilots.
What kinds of nonlethal capabilities? Rubber bullets or chemical weapons are a possibility [texasobserver.org], but probably not very effective at stopping targets from fleeing. Sticky foam [defensetech.org] could make a comeback given some R&D. A flying Active Denial System [wikipedia.org] (microwave weapon) probably wouldn't work due to size and power requirements. Sonic weaponry [wikipedia.org]? Better to burst a target's eardrums than kill them.
What do you do with an incapacitated target? You work with the host country. We've conducted strikes in Pakistan, Yemen, and recently "Iraq" again; those governments can provide boots on the ground to pick up the targets. If that can't be done, perhaps it's time to end U.S. involvement in these conflict zones.
Good luck convincing the Obama administration or a future neoconservative administration [nytimes.com] to ditch the $50k+ Hellfire missiles and convenient lethal drone strike capabilities that don't necessarily make America safe [nytimes.com].
(Score: 1) by BK on Thursday November 13 2014, @04:46PM
You seem to be suggesting that the use of remote-controlled weapon systems is against "international law" because it might be possible for someone, somewhere, someday, to design and use one that would be better [youtube.com]/more accurate/more precise/less lethal/painted with rainbows. I mean, logically, that's great, I guess... but taken to its natural conclusion, you are suggesting that virtually everything, or at least every weapons system, is against international law because it could be better in some way.
I don't think that international law works like that.
(Score: 1) by khallow on Thursday November 13 2014, @04:50PM