from the reverse-polarity dept.
A United Nations commission is meeting in Geneva, Switzerland today to begin discussions on placing controls on the development of weapons systems that can target and kill without the intervention of humans, the New York Times reports. The discussions come a year after a UN Human Rights Council report called for a ban (pdf) on “Lethal autonomous robotics” and as some scientists express concerns that artificially intelligent weapons could potentially make the wrong decisions about who to kill.
SpaceX and Tesla founder Elon Musk recently called artificial intelligence potentially more dangerous than nuclear weapons.
Peter Asaro, the cofounder of the International Committee for Robot Arms Control (ICRAC), told the Times, “Our concern is with how the targets are determined, and more importantly, who determines them—are these human-designated targets? Or are these systems automatically deciding what is a target?”
Intelligent weapons systems are intended to reduce the risk to both innocent bystanders and friendly troops, focusing their lethality on carefully—albeit artificially—chosen targets. The technology in development now could allow unmanned aircraft and missile systems to avoid and evade detection, identify a specific target from among a clutter of others, and destroy it without communicating with the humans who launched them.
Elon Musk was recently interviewed at an MIT symposium. An audience member asked for his views on artificial intelligence (AI). Musk turned very serious and urged extreme caution and national or international regulation to avoid, as he put it, "doing something stupid."
"With artificial intelligence we are summoning the demon", said Musk. "In all those stories where there's the guy with the pentagram and the holy water, it's like, 'Yeah, he's sure he can control the demon.' Doesn't work out."
Read the story and see the full interview here.
Opposition to the creation of autonomous robot weapons has been the subject of discussion here recently. The New York Times has added another voice to the chorus with this article:
The specter of autonomous weapons may evoke images of killer robots, but most applications are likely to be decidedly more pedestrian. Indeed, while there are certainly risks involved, the potential benefits of artificial intelligence on the battlefield — to soldiers, civilians and global stability — are also significant.
The authors of the letter liken A.I.-based weapons to chemical and biological munitions, space-based nuclear missiles and blinding lasers. But this comparison doesn't stand up under scrutiny. However high-tech those systems are in design, in their application they are "dumb" — and, particularly in the case of chemical and biological weapons, impossible to control once deployed.
A.I.-based weapons, in contrast, offer the possibility of selectively sparing the lives of noncombatants, limiting their use to precise geographical boundaries or times, or ceasing operation upon command (or the lack of a command to continue).
Personally, I dislike the idea of using AI in weapons to make targeting decisions. I would hate to have to argue with a smart bomb to try to convince it that it should not carry out what it thinks is its mission because of an error.
(Score: 2) by RobotMonster on Thursday November 13 2014, @08:34AM
It's almost like they don't want my army of self-replicating robot spiders to take over the world :-(
(Score: 4, Interesting) by takyon on Thursday November 13 2014, @09:42AM
Forget autonomous drones.
Do existing remote-controlled lethal drones with human-designated targets comply with international law?
Do drone attacks comply with international law? [politifact.com]
Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism [ohchr.org]
Resolution adopted by the Human Rights Council. 25/22. Ensuring use of remotely piloted aircraft or armed drones in counterterrorism and military operations in accordance with international law, including international human rights and humanitarian law [un.org]
United States of America Practice Relating to Rule 14. Proportionality in Attack [icrc.org]
Drones pose no risk (other than PTSD [nytimes.com]) to the life of the drone operator. It is known that drone bombings, however precisely targeted they are claimed to be, inevitably result in civilian casualties [theguardian.com]. But drones can act in ways that would be risky for human-piloted aircraft. They can linger or hover in dangerous areas without risking the life of a pilot, and they may be less noisy and more stealthy due to their lack of a cockpit. Therefore, drones that kill are convenient, yet not necessarily proportional, given that they may be engineered to subdue targets using nonlethal capabilities at no risk to a nation's soldiers and pilots.
What kinds of nonlethal capabilities? Rubber bullets or chemical weapons are a possibility [texasobserver.org], but probably not very effective at stopping targets from fleeing. Sticky foam [defensetech.org] could make a comeback given some R&D. A flying Active Denial System [wikipedia.org] (microwave weapon) probably wouldn't work due to size and power requirements. Sonic weaponry [wikipedia.org]? Better to burst a target's eardrums than kill them.
What do you do with an incapacitated target? You work with the host country. We've conducted strikes in Pakistan, Yemen, and recently "Iraq" again. They can provide boots on the ground to pick up the targets. If that can't be done, perhaps it's time to end U.S. involvement in these conflict zones.
Good luck convincing the Obama administration or a future neoconservative administration [nytimes.com] to ditch the $50k+ Hellfire missiles and convenient lethal drone strike capabilities that don't necessarily make America safe [nytimes.com].
(Score: 1) by BK on Thursday November 13 2014, @04:46PM
You seem to be suggesting that the use of remote-controlled weapon systems is against "international law" because it might be possible for someone, somewhere, someday, to design and use one that would be better [youtube.com]/more accurate/more precise/less lethal/painted with rainbows. I mean logically, that's great I guess... but taken to its natural conclusion, you are suggesting that virtually everything, or at least every weapons system, is against international law because it could be better in some way.
I don't think that international law works like that.
...but you HAVE heard of me.
(Score: 1) by khallow on Thursday November 13 2014, @04:50PM
(Score: 0) by Anonymous Coward on Thursday November 13 2014, @10:43AM
The fact that they are even debating this at all is scary.
There is NO debate.
"Intelligent weapons systems are intended to reduce the risk to both innocent bystanders..."
Gotta love the word "intended".
The road to hell is paved with good intentions.
(Score: 0) by Anonymous Coward on Thursday November 13 2014, @11:59AM
Good intentions indeed. You try to host a corporate news site for your jolly club of illiterate idiots, and a bunch of free thinking antichrists post such disagreeable things to your comments section! You try to ban them, but you know they'll be bach.
The trolls are out there! They can't be bargained with. They can't be reasoned with. They don't feel pity, or remorse, or fear. And they absolutely will not stop!
(Score: 1) by albert on Thursday November 13 2014, @06:30PM
Oddly, I get the feeling you don't agree with the conclusion that such systems are obviously very important to deploy in great numbers ASAP.
The world is populated by countries with good weapons, because countries without them cease to exist, not counting those that have a slave-like relationship to a powerful ally and those that have nothing of value.
It's like evolution, but with countries. You will live in a country protected by these weapons. The only questions: Will your government be a direct descendant of your current government? Will these weapons be under direct control of your government, or will they be controlled by a government that supports the existence of your country in exchange for something? (you aren't likely to be in a zero-value country if you can post to soylentnews)
(Score: 0) by Anonymous Coward on Friday November 14 2014, @12:17AM
Didn't even think of this.
It is even scarier now that you mention it.
Who will be future curators of said AI weapons?
(Score: 1) by anubi on Friday November 14 2014, @02:32AM
I get the idea the whole world's governments will be controlled by the banking elite, who need unlimited force to back their claims to owning the world. The whole world is in debt to them, obliged to repay interest on the dollars which only the bankers have the charter to print from thin air.
Now, how one can rightfully demand usury on that which they never had in the first place escapes me, but the inevitable outcome of allowing selected entities to print currency also means they will end up owning everything.
This has happened before. From all I can tell, all of our wars and social uprisings are the result of the wealth balance shifting too much to too few - not as reward for work, but as rights transferred through financial manipulation.
The elite are getting so few yet so wealthy that they have few people they can trust, so they need machines - with the soul of a machine - to back up their pens.
That way, they can be 'nice', knowing that anyone who disagrees with them can simply be cancelled.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 2) by Magic Oddball on Friday November 14 2014, @10:09AM
I don't think those are (or should be) the only questions by a longshot.
I see a much more crucial question as a civilian: will the weapons my government acquires be used elsewhere in the world in actual battles, elsewhere to subjugate inconvenient civilians, or will they be used domestically to subjugate our own civilians? Will the weapons be only available to the military, or will the National Guard be allowed to (ab)use them, or will it be like so much other military tech and end up in the hands of the same militarized police forces we already have good reason to fear?
(Score: 1) by albert on Saturday November 15 2014, @08:21AM
Of course these weapons will be used.
Either your current government (or protector government) does this, or it is replaced by one that does.
I suppose it is possible to use these weapons only between nations instead of within a nation, but that seems unlikely.
(Score: 1, Flamebait) by WizardFusion on Thursday November 13 2014, @12:51PM
Will it do a better job than the trigger-happy Yanks? They kill more friendly troops than anyone.
(Score: 3, Insightful) by bzipitidoo on Thursday November 13 2014, @01:56PM
We don't yet have the ability to make completely autonomous weapons, but we're close. We're nearly to the point where anyone could mount weaponry on a self-driving car, or on a cute robot doggie. These things will get smarter. Or perhaps some kid's experiment could unleash Grey Goo.
Discussing problems like this is one of the best uses the UN could make of its time. If no one has considered the problems, we could be in for some ugly surprises. Imagine a major power thinking to gain a decisive military advantage by employing weaponized robots against powers that have not developed such capabilities. At least mines just lie in the dirt. Having hordes of autonomous, replicating robots still wandering around and killing after the war is over would force big changes on everyone.
One relevant Star Trek episode: The Doomsday Machine.
(Score: 0, Troll) by albert on Thursday November 13 2014, @06:18PM
Imagine a major power thinking to gain a decisive military advantage by employing weaponized robots against powers that have not developed such capabilities.
OK, I imagined that. I'm liking it an awful lot. What, you expected differently? This would be of tremendous benefit to my country. We wouldn't have so many soldiers dying. We could better deal with the horrible uncivilized places.
(Score: 2) by mcgrew on Thursday November 13 2014, @02:12PM
Robots say "I'll be back."
This shit never goes away.
Older than dirt? Kid, I was a BETA TESTER for dirt! We never did get all the bugs out.
(Score: 2) by Thexalon on Thursday November 13 2014, @02:40PM
The problem in a nutshell is that a lot of militaries, including the supposed good guys, would love to have one of these:
DeSadeski: "The doomsday machine. A device which will destroy all human and animal life on earth. It is not a thing a sane man would do. The doomsday machine is designed to trigger itself automatically. It is designed to explode if any attempt is ever made to untrigger it."
Muffley: "Automatically? ... But, how is it possible for this thing to be triggered automatically, and at the same time impossible to untrigger?"
Strangelove: "Mr. President, it is not only possible, it is essential. That is the whole idea of this machine, you know. Deterrence is the art of producing in the mind of the enemy... the fear to attack. And so, because of the automated and irrevocable decision making process which rules out human meddling, the doomsday machine is terrifying. It's simple to understand. And completely credible, and convincing."
I could definitely also imagine military guys wanting something like ED-209.
The only thing that stops a bad guy with a compiler is a good guy with a compiler.