On a bright fall day last year off the coast of Southern California, an Air Force B-1 bomber launched an experimental missile that may herald the future of warfare.
Halfway to its destination, the missile severed communication with its operators and, alone and without human oversight, decided which of three ships to attack, dropping to just above the sea surface and striking a 260-foot unmanned freighter.
Warfare is increasingly guided by software. Today, armed drones can be operated by remote pilots peering into video screens thousands of miles from the battlefield. But now, some scientists say, arms makers have crossed into troubling territory: They are developing weapons that rely on artificial intelligence, not human instruction, to decide what to target and whom to kill -- prompting the U.N. to hear testimony from representatives of its member states about whether military robots should be able to designate targets without direct human intervention.
"Killer robots would threaten the most fundamental of rights and principles in international law," Steve Goose, arms division director at Human Rights Watch, an international non-governmental organization, told the AFP news agency. "We don't see how these inanimate machines could understand or respect the value of life, yet they would have the power to determine when to take it away."
(Score: 3, Funny) by wonkey_monkey on Saturday November 15 2014, @12:52PM
foreach ($target_list as $target) {
    switch ($target) {
        case 'eeny' :
        case 'meeny' :
        case 'miny' : continue;
        case 'mo' : destroy();
    }
}
systemd is Roko's Basilisk
(Score: 0) by Anonymous Coward on Saturday November 15 2014, @04:15PM
foreach ($target_list as $target) {
    if ($wonkey_monkey) { $target = 'mo'; }
    switch ($target) {
        case 'eeny' :
        case 'meeny' :
        case 'miny' : continue;
        case 'mo' : destroy();
    }
}
Wow, I've never hacked before. That was fun.
(Score: 2) by davester666 on Saturday November 15 2014, @11:23PM
This line:
case 'miny' : continue;
will be fixed to:
case 'miny' :
(Score: 4, Interesting) by Justin Case on Saturday November 15 2014, @01:15PM
This is where all the science fiction writers got it wrong. When autonomous robots finally arrive, they will not be pseudo-slaves ushering in an era of leisure and luxury for all. They will be programmed to serve and protect their owners and to hell with the rest of us.
Said owners are likely to be the usual crop of arrogant self-confident "git er done" types who never worry about what might go wrong, because by the time it does they'll have moved on. Except that when the robot army attains sentience and turns on its masters, there won't be anyplace to move on to.
Eventually, then, the asshats of the human race will be purged, which is likely a good thing. Too bad the rest of us won't be around to see it.
(Score: 0) by Anonymous Coward on Monday November 17 2014, @06:45AM
The machines won't turn on their masters. They are machines.
The masters will turn on each other, which will have the same effect.
PKD's "Second Variety" seems oddly optimistic.
(Score: 1) by gznork26 on Saturday November 15 2014, @03:25PM
In a way, we're replaying a scene from history. Nations banded together to declare how wars were to be fought. Doing so made those nations seem 'civilized', and any player that flouted those rules was 'barbaric', subhuman, and not worth treating with respect. Only this time, the distinction is whether people choose the target, rather than whether those targets are military in nature or whether the weapons are nerve agents.
Mark Twain's most dangerous work comes to mind: "The War Prayer"
(Score: 2) by Jeremiah Cornelius on Saturday November 15 2014, @04:11PM
https://www.youtube.com/watch?v=bX7V6FAoTLc [youtube.com]
You're betting on the pantomime horse...
(Score: 3, Interesting) by wantkitteh on Saturday November 15 2014, @07:16PM
Currently, only 4% of drone strike victims are even vaguely identified - saw a great presentation at ORGCON today on the subject, the CIA barely give even the tiniest bit of a shit who they kill. Hell, they blew some dude up to get some other guy to go to his funeral so they could blow that up! 80+ collateral murders and they don't even know if they got the guy they wanted. Even Microsoft couldn't write a missile targeting system that fucked up. Bring it on!
http://www.drones.pitchinteractive.com/ [pitchinteractive.com]
(Score: 0) by Anonymous Coward on Sunday November 16 2014, @03:54AM
Inanimate machines, of course, do not understand or respect the value of life. That is the job of those who build and use such machines. It is the people who use these machines that decide when to take life away. The real value of these intelligent munitions is that they can be given some "split second" power, on the fly, to make decisions on what to strike or not to strike. This could be particularly valuable in situations where the feedback loop between munition and human operator is too long to make that determination. In the best case scenario this could actually help to avoid "collateral damage".
I can think of at least one obvious possibility where this could come in handy. Suppose, as the cruise missile comes in for a strike, its AI agent recognizes that there are children in the strike zone. If it could then, at the last possible moment before impact, disable the munition or otherwise abort the strike, avoiding a massacre of children, that would IMHO be a very good thing.
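A toy sketch of that last-moment abort logic (everything here is hypothetical: the detect_noncombatants classifier, the threshold, the return values are all made up for illustration, not drawn from any real system):

```python
# Hypothetical sketch: an onboard last-moment strike/abort decision,
# for when the feedback loop back to a human operator is too slow.

def detect_noncombatants(frame):
    """Stand-in for an onboard vision model; returns the estimated
    probability that noncombatants (e.g. children) are in the strike zone."""
    return frame.get("noncombatant_score", 0.0)

def final_approach_check(frame, abort_threshold=0.5):
    """Return 'abort' (disable the warhead / divert) if noncombatants
    are likely present, otherwise 'strike'."""
    if detect_noncombatants(frame) >= abort_threshold:
        return "abort"
    return "strike"

print(final_approach_check({"noncombatant_score": 0.9}))  # abort
print(final_approach_check({"noncombatant_score": 0.1}))  # strike
```

The hard part, of course, is not the threshold check but the classifier behind it, and whether anyone would actually deploy such a check at all.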
(Score: 2, Insightful) by pkrasimirov on Sunday November 16 2014, @08:45AM
The problem is not in the good-case scenario, as you might have guessed. The problem is when the op sees children and switches off the connection to avoid responsibility. Somehow I doubt an AI would give a flying fuck about children, pets, or literature. It knows about targets, decoys, and obstacles. Read about the funeral one post above.
(Score: 1) by NotSanguine on Sunday November 16 2014, @09:37AM
Yeah. Sorry, this whole SkyNet/AI-takes-over-the-world crap is just that. Presumably these autonomous weapons will use a variety of sensors to distinguish friend from foe (perhaps coded signals to identify friendly targets, or spectrographic profiles), for which countermeasures will be developed. How effective the autonomous weapons or any countermeasures would be is an open question. But I do know that while life does sometimes imitate art, can't we be at least slightly realistic when imagining it?
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 3, Interesting) by wantkitteh on Sunday November 16 2014, @03:13PM
The people who use these machines have no respect for the value of life either, and not in a Milgram-experiment transference-of-responsibility way. They have no interest in avoiding collateral damage: there are documented cases of drone operations deliberately killing innocents, then following up with another strike on the same place half an hour later on the assumption that anyone who attends the scene to help is therefore a militant.
The most tenuous information can be used to launch a deadly strike - former CIA/NSA director Michael Hayden said it flat out: "We kill people based on metadata". [rt.com] To anyone worried about a drone controlled by a computer that picks and prosecutes targets: the humans already doing this are devoid of any respect for human life and take it away with reckless abandon, without due process or consideration of law or culpability, without humanity. They aren't soldiers or intelligence operatives any more, they're state-sponsored serial killers.
(Score: 0) by Anonymous Coward on Sunday November 16 2014, @05:31PM
They aren't soldiers or intelligence operatives any more, they're state-sponsored serial killers.
Isn't that the very definition of soldier?