The U.N. has begun discussing "lethal autonomous robots": killing machines that take the next step from today's operator-controlled drones to completely autonomous weapons.
"Killer robots would threaten the most fundamental of rights and principles in international law," warned Steve Goose, arms division director at Human Rights Watch.
Are we too far down the rabbit hole, or can we come to reasonable and humane limits on this new world of death-by-algorithm?
Related Stories
Turkey aims to produce unmanned tanks: Erdoğan
Turkey is targeting the production of unmanned tanks for its armed forces, President Recep Tayyip Erdoğan has stated. "We will carry it a step further [after domestically produced unmanned aerial vehicles] ... We should reach the ability to produce unmanned tanks as well. We will do it," Erdoğan said at a meeting held at the presidential complex in Ankara on Feb. 21.
Five Turkish soldiers were recently killed in a tank near the Sheikh Haruz area of Syria's Afrin district, where Turkey has been carrying out a military operation against the People's Protection Units (YPG) since Jan. 20.
[...] The Turkish president has repeatedly criticized certain foreign countries for allegedly being reluctant to sell unmanned aerial vehicles, armed or unarmed, stressing that unmanned systems could decrease casualties.
Also at ABC.
Related: U.N. Starts Discussion on Lethal Autonomous Robots
UK Opposes "Killer Robot" Ban
South Korean university boycotted over 'killer robots'
Leading AI experts have boycotted a South Korean university over a partnership with weapons manufacturer Hanwha Systems. More than 50 AI researchers from 30 countries signed a letter expressing concern about its plans to develop artificial intelligence for weapons. In response, the university said it would not be developing "autonomous lethal weapons". The boycott comes ahead of a UN meeting to discuss killer robots.
Shin Sung-chul, president of the Korea Advanced Institute of Science and Technology (Kaist), said: "I reaffirm once again that Kaist will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control. Kaist is significantly aware of ethical concerns in the application of all technologies including artificial intelligence." He went on to explain that the university's project was centred on developing algorithms for "efficient logistical systems, unmanned navigation and aviation training systems".
Also at The Guardian and CNN.
Related: U.N. Starts Discussion on Lethal Autonomous Robots
UK Opposes "Killer Robot" Ban
Is Ethical A.I. Even Possible?
When a news article revealed that Clarifai was working with the Pentagon and some employees questioned the ethics of building artificial intelligence that analyzed video captured by drones, the company said the project would save the lives of civilians and soldiers.
"Clarifai's mission is to accelerate the progress of humanity with continually improving A.I.," read a blog post from Matt Zeiler, the company's founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.
As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.
But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation.
"We don't want to see a commercial race to the bottom," Brad Smith, Microsoft's president and chief legal officer, said at the New Work Summit in Half Moon Bay, Calif., hosted last week by The New York Times. "Law is needed."
Possible != Probable. And the "needed law" could come in the form of a ban and/or surveillance of coding and hardware-building activities.
Related:
U.N. Starts Discussion on Lethal Autonomous Robots
UK Opposes "Killer Robot" Ban
Robot Weapons: What's the Harm?
The UK Government Urged to Establish an Artificial Intelligence Ethics Board
Google Employees on Pentagon AI Algorithms: "Google Should Not be in the Business of War"
South Korea's KAIST University Boycotted Over Alleged "Killer Robot" Partnership
About a Dozen Google Employees Have Resigned Over Project Maven
Google Drafting Ethics Policy for its Involvement in Military Projects
Google Will Not Continue Project Maven After Contract Expires in 2019
Uproar at Google after News of Censored China Search App Breaks
"Senior Google Scientist" Resigns over Chinese Search Engine Censorship Project
Google Suppresses Internal Memo About China Censorship; Eric Schmidt Predicts Internet Split
Leaked Transcript Contradicts Google's Denials About Censored Chinese Search Engine
Senators Demand Answers About Google+ Breach; Project Dragonfly Undermines Google's Neutrality
Google's Secret China Project "Effectively Ended" After Internal Confrontation
Microsoft Misrepresented HoloLens 2 Field of View, Faces Backlash for Military Contract
(Score: 2) by EvilJim on Thursday May 15 2014, @04:42AM
Is this exactly what the story on autonomous cars murdering you was talking about? I'm sure there will be some crossover of technologies if both go ahead.
(Score: 2) by davester666 on Thursday May 15 2014, @06:06AM
The car will only murder you if it thinks it's on a path to destruction [crazy, suicidal maybe?] and decides that you are the cheapest person it will definitely have to kill.
(Score: 1) by RaffArundel on Thursday May 15 2014, @02:15PM
Yes, the second example in the linked article was an autonomous weapon.
(Score: 2) by aristarchus on Thursday May 15 2014, @05:05AM
These are victim-triggered weapons, much the same as a shotgun aimed at a door with a string on the trigger. Only here the string is a very expensive piece of software.
(Score: 2) by frojack on Thursday May 15 2014, @07:20AM
Not what is being talked about here. That area is covered by minefields, and the UN hasn't been all that successful at getting those outlawed yet.
We are talking about unmanned vehicles, aircraft usually, that patrol an area and attack anything that they are programmed to attack without a human anywhere in the control loop. What could possibly go wrong with that?
These make great area-denial weapons, if you don't mind gunning down the stray homeless guy wandering around.
It used to be thought that this could only be accomplished by advanced countries, like the US or Israel with big budgets, but advances in cheap drones and heat-seeking technology put it pretty much within the capabilities of any technically competent nerd.
I suspect the UN will outlaw them, and countries and terrorist organizations will build them anyway, as unarmed intelligence-gathering vehicles. The weapons payload will come along later. The terrorists won't bother with the fiction; they will go directly to plan B.
No, you are mistaken. I've always had this sig.
(Score: 2) by aristarchus on Thursday May 15 2014, @07:54AM
Yes it is! The Ottawa Treaty went into effect in 1999. Some notable rogue nations, like North Korea, have refused to sign it, but it is still international law.
And what is the difference between a land mine and an autonomous targeting machine, anyway? Sure, the mine only decides to explode if someone steps on it, and the killer robot only decides to terminate if the detected object meets some pre-determined profile derived from meta-data, like stepping on the killer robot.
As for asymmetry, yes, third-world nations will never have the capability to produce such amazing instruments of death! Only advanced nations, or the machines produced by advanced nations... oh, shit! You have heard of "Little Big Horn"?
(Score: 0) by Anonymous Coward on Thursday May 15 2014, @02:15PM
A landmine doesn't move. As long as nobody was there who could have planted one, you can assume that a place which was safe before is still safe. An autonomous targeting machine may target you at a place which was safe before.
A land mine explodes only once, then it's dead. So if some unfortunate person steps on it, well, bad for him, but afterwards it's gone; other people can now safely walk to that place (e.g. to help if the victim is not killed but severely injured). An autonomous targeting machine will continue to target, so anyone helping the victim is also in danger of being injured or killed.
A landmine only covers the place where it sits. Unless you step directly on it, it won't kill you. Autonomous robots cover an entire area. If you are anywhere in that area, you're in danger. It doesn't even help to stay completely still (unless the drone has been programmed to shoot only at moving targets).
The very local triggering of landmines also means that if you know where one is, experts can easily get close to it in order to disarm it. Disarming an autonomous killing robot won't be easy, even if you know exactly where it is.
(Score: 1) by hoochiecoochieman on Thursday May 15 2014, @02:49PM
That's wrong. Of course it does. [bbc.co.uk]
(Score: 2) by frojack on Thursday May 15 2014, @05:58PM
Oddball exception simply proves the rule.
Land-minds don't hunt you down and follow you home.
I can't believe this discussion is actually going on, and that anyone intelligent enough to post on SN can't understand the difference between a land mine and an autonomous attack drone [rt.com].
You don't have to imagine Terminator style walking robots. The current fleet of remotely operated unmanned drones in US, French, and British arsenals are one software upload away from this capability TODAY.
(A US Air Force general says 2047 [engadget.com], but that's just public PR. They are probably flying software packages today that do everything but the actual shooting.)
Today, the US always has a human (or two) [truenorthperspective.com] pulling the trigger at Creech [af.mil]. What they don't tell you is how many drones each pilot can manage. It's not one-to-one.
In the future, these very same drones (or better ones) could be instructed to loiter over Kandahar province and track and attack anything moving in a certain direction in a specific valley. They can follow an individual to a house in the middle of a city and fly a missile into the house without injuring the neighbors. Today there is always an operator making the decision (we think). That can't be guaranteed in the future.
It won't be as simplistic and obvious as "see tank, shoot tank."
It will be more like: fly to the area, spot any SUV or pickup, follow it out of town, and fire a missile without regard to who might be in it; or see a group of men, follow them to a house, fire a missile, and fly home. No human in any part of that decision, other than the guy who pushes the drone out of the hangar doors.
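To make that concrete, here is a deliberately abstract toy sketch in Python (every name in it is hypothetical and invented for illustration; it resembles no real system). The point is that, in software terms, the difference between today's remotely piloted setup and the scenario above can be as small as one human-approval step between detection and engagement:

    # Toy illustration only; all names are hypothetical.
    # The point: "autonomous" can be one configuration flag away
    # from "human in the loop".

    def operator_approves(target: str) -> bool:
        # Hypothetical stand-in for a remote pilot's decision
        # at a ground control station.
        return input(f"Engage {target}? [y/N] ").strip().lower() == "y"

    def patrol(detected_targets: list[str], human_in_loop: bool = True) -> None:
        for target in detected_targets:
            if human_in_loop and not operator_approves(target):
                continue  # a human said no; the only safeguard in this sketch
            # With human_in_loop=False, nothing stands between
            # detection and engagement.
            print(f"engaging {target}")

    patrol(["vehicle-1", "vehicle-2"], human_in_loop=False)  # nobody is asked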
No, you are mistaken. I've always had this sig.
(Score: 1) by hoochiecoochieman on Thursday May 15 2014, @06:20PM
It's not oddball; according to news I've read, it's quite common (and deadly).
I usually don't fear land-minds, because earthworms have very little brains and are very slow to follow me, anyway.
Please enlighten me: what discussion are you talking about, and where in my text did I claim that there's no "difference between a land mine and an autonomous attack drone"?
(Score: 2) by tibman on Thursday May 15 2014, @06:09PM
There is a big difference between landmines and robots/drones. Landmines are placed where they cannot be seen and continue to operate for decades or longer (long after they are needed). Landmines are dangerous to remove and usually are just blown up instead. A drone could just fly or drive home.
You are right though; almost everyone has agreed that landmines are terrible and illegal. The US did not sign the treaty because it finds landmines to be extremely effective. However, it typically only fields landmines with short expiration dates: placed mines are temporary and self-destruct within days. Anti-vehicle mines can be permanent, but I have never heard of their use lately. It's more than possible that the Korean DMZ is mined, but that is probably a perfect use of mines (until it is no longer needed).
SN won't survive on lurkers alone. Write comments.
(Score: 0) by Anonymous Coward on Thursday May 15 2014, @05:16AM
Gonna fill em with so much jizz before I die....
(Score: 1, Insightful) by Anonymous Coward on Thursday May 15 2014, @05:19AM
Fools! You lost your right to have rights when you voted for 16 years of Bushbama. You voted for tyranny, now enjoy it!
(Score: 2, Funny) by Anonymous Coward on Thursday May 15 2014, @05:29AM
This story is essentially a dupe from last week: https://soylentnews.org/article.pl?sid=14/05/09/1327222 [soylentnews.org]
Or we could do it this way:
Terminator: UN Planning to Debate Killer Robots Next Week
Terminator Judgment Day: UN Starts to Debate Killer Robots
Terminator Rise of the Machines: UN Debating Killer Robots
Terminator Salvation: UN Finishes Debating Killer Robots
Terminator Genesis: Last Week the UN Debated Killer Robots
And then the two-season series:
Terminator The Sarah Connor Chronicles: UN Commissions 2 Year Study of Killer Robots
(Score: 0) by Anonymous Coward on Thursday May 15 2014, @09:33AM
Kudos, Sir, for finding the dupe and being fucking funny about it! It's shit like this that makes me come back to SN.
(Score: 2, Interesting) by tftp on Thursday May 15 2014, @05:43AM
Are we too far down the rabbit hole?
Yes, certainly. The citizenry has no say in this matter, and generals are very interested in having fighting robots. Robots are infinitely cheap compared to soldiers. Flying armed robots are already deployed; what would make a land-walking robot fundamentally different?
As a matter of fact, robots are the only way the US Army can continue to develop. Human soldiers have severe physical limitations. They are given better equipment, but this is asymptotically approaching a certain limit. Robots can run 100 miles, see with 100 infrared HD cameras, shoot ten guns in all directions, and if they are destroyed it's just hardware. Mass-produced robots can be very, very cheap (comparable to a car). Robots do not need to be individually programmed in boot camps; they are always ready; and, most importantly, they don't ask questions, and they cannot be accused of war crimes.
Robots are the natural way to go, as long as the country is fixated on developing its military might. Should it focus on the military that much? From one POV, no, it shouldn't. From another POV, the military is one of the few things that maintains the status quo. It's hard to be the top dog: you have to be in every fight for dominance, even though your opponents rotate out to lick their wounds.
(Score: 4, Insightful) by edIII on Thursday May 15 2014, @07:19AM
I find the whole thing ludicrous, and combined with what we're doing to the environment, I have no pity or sorrow regarding our extinction. We just don't deserve another couple hundred years at this rate.
It makes sense that government wants this, because government isn't really government, at least not by the definition I learned in civics classes. Government is merely a corrupt framework for the Orwellian pigs to control us so they get to live in the nice farmhouse, and we get to suffer and be sent to the glue factory.
Finally removing the human element from "war" is what is desirable here. That has nothing to do with costs, logistics, or saving soldiers' lives. A robot has no heart, no feelings, no morals, no concerns about the afterlife, just nothing. There is no programming that would result in "no" as an output when instructed to eviscerate a little girl in front of her father to strike fear into him for his remaining children.
Considering the thousands of women and children we have killed with the Chair Force, and the notable psychological problems that have arisen in the operators, it makes cold and logical sense that the ruling elites want something more reliable, something that only technology can provide.
It's a huge effing game-changing mistake to create any kind of robot capable of autonomous action without extremely strong protective measures. These can be passive, physically precluding such action with fail-safes and graceful failure, or active, with co-processors and redundant circuits constantly applying the Three Laws.
When you create such a magnificent weapon of death and mass manufacture it, don't complain later when you lose control over it.
It's not even up for discussion in a sane and rational society. Of COURSE you don't build fucking robots capable of walking around, targeting people, and killing them. How ridiculously fucking terrifying do we have to be to build a machine with the sole purpose of identifying and killing people without direct human control? That's beyond insane. It's The Silence of the Lambs meets Apocalypse Now, set in the world of 8MM.
Humanity is just getting sick right now. If there were really a God, he would have pulled the plug already. Sick. Sick. SICK.
Technically, lunchtime is at any moment. It's just a wave function.
(Score: 1, Insightful) by tftp on Thursday May 15 2014, @07:42AM
It makes sense that government wants this, because government isn't really government, at least not by the definition I learned in civics classes. Government is merely a corrupt framework for the Orwellian pigs to control us so they get to live in the nice farmhouse, and we get to suffer and be sent to the glue factory.
The cake ^W civics classes were a lie, or, say, a nice theory that has nothing to do with reality. All governments in known history operated and continue to operate as privileged gangs that use peons as expendable cannon fodder in war and in peace. Democracy only allows the voter to pick Twiddledee vs. Twiddledoom. Domination over others is deep in human genes; and woe betide those who don't want to exploit others - they will be exploited themselves.
(Score: 0) by Anonymous Coward on Thursday May 15 2014, @02:26PM
Wow, modded to +4 insightful. I'd mod you to +5 if I had points. I was going to say the same thing a few weeks ago when everyone was like omg Crimea.
Part of me hopes that humanity wipes itself out in a full nuclear exchange. No, really: when I think about the technologies humanity is beginning to develop, especially in biotech, I'm frankly frightened to be on the same rock as them. A full nuclear exchange may be the best thing to hope for.
*communications interrupted*
Earth abides. The fallout would eventually decay. Life may evolve again. Apologies to Sagan.
There are no benevolent Overlords to save this species, no Karellen to ride in and save us from ourselves.
I often think that if intelligence were to evolve again on Earth, it would find evidence of a previous intelligence that nearly extinguished all life on Earth, and perhaps it would meditate on that and somehow learn to be better.
Or it could be the case that evolving intelligent life is impossible without the evolutionary baggage that leads to such abominations. It would explain why there seems to be no evidence for intelligent life elsewhere.
Of course they want to build Terminators. It was inevitable. Sick, sick, sick bastards.
Humanity is at a point where it could use its technology to build a utopia. Instead, it uses its technology to find better ways to kill and oppress.
Anyone want to know where the Borg or the Necromongers come from? You're looking at the species that will make it happen.
(Score: 0) by Anonymous Coward on Thursday May 15 2014, @02:37PM
Assuming you're speaking of Asimov's Three Laws: nobody would program those into a killer robot, at least not unchanged, because they would basically turn the killer robot into a non-killer robot.
Now if you change the order (and the corresponding dependencies) of the laws, you might get something the military would possibly accept:
1. A military robot has to follow all orders of the owner.
2. A military robot may not harm humans, unless this would violate the first law.
3. A military robot has to protect its existence, unless this would violate the first or second law.
However, I suspect they would demand further reordering to:
1. A military robot has to follow all orders of the owner.
2. A military robot has to protect its existence, unless this would violate the first law.
3. A military robot may not harm humans, unless this would violate the first or second law.
OK, maybe a bit too dangerous, so make it four laws:
1. A military robot has to follow all orders of the owner.
2. A military robot may not harm members of its own military, unless this would violate the first law.
3. A military robot has to protect its existence, unless this would violate the first or second law.
4. A military robot may not harm humans, unless this would violate one of the first three laws.
Add to that a sufficiently wide interpretation of "harm members of its own military", and I think those would be rules the military might accept. And the public would be told "we implement Asimov's rules, and we even extended them with another protection clause!" And few people would notice just how much those laws would have been subverted.
(Note especially how the addition of the extra law subverts the rules, since the now-fourth law is conditioned on it...)
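As a toy sketch of how much the ordering matters (all names hypothetical; this is not any real control scheme), consider a priority-ordered rule evaluator in which the first applicable rule wins. With the four-law ordering above, the "may not harm humans" rule is only consulted when none of the three rules above it has already decided:

    # Toy priority-ordered rule evaluator; all names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Action:
        ordered_by_owner: bool
        harms_own_military: bool
        endangers_robot: bool
        harms_human: bool

    def permitted(a: Action) -> bool:
        # First applicable rule wins, per the reordered four-law scheme.
        if a.ordered_by_owner:       # Law 1: follow all orders of the owner
            return True
        if a.harms_own_military:     # Law 2: don't harm own military
            return False
        if a.endangers_robot:        # Law 3: protect own existence
            return False
        return not a.harms_human     # Law 4: don't harm humans (last resort)

    # An ordered strike: Law 1 decides before Law 4 is ever consulted.
    strike = Action(ordered_by_owner=True, harms_own_military=False,
                    endangers_robot=False, harms_human=True)
    print(permitted(strike))  # True: the "harm no humans" law never fired

In this sketch, an ordered strike never even reaches the fourth law, which is exactly the subversion described.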
(Score: 2) by khallow on Thursday May 15 2014, @07:31PM
I think people would have to be pretty stupid even by the extremely low standards of this thread to not figure out that fourth place is a lot lower than first place - especially after a few demonstrations of these rules in practice.
(Score: 0) by Anonymous Coward on Thursday May 15 2014, @08:22AM
That's the pro-war way to look at it. Let's turn it around:
"Killing machines which take the next step from our current land mines, which stay where they are, to killing machines moving around on their own".
We banned land mines that stay where they are. Now, what do we do with these unpredictable land mines?
(Score: 2) by zeigerpuppy on Thursday May 15 2014, @08:27AM
RoboCop (2014) has some surprisingly cogent satirical commentary on this.
The powers that claim to be democratic are rapidly moving toward a fusion of militarism and oligarchy. Autonomous citizen-controlling machines are the natural evolution of this idea.
Strange how people can be so vitriolic about amendment rights and then forget what they were for!
(Score: 0) by Anonymous Coward on Thursday May 15 2014, @08:52AM
In Soviet USA, Gun Has Arms!
(Score: 0) by Anonymous Coward on Thursday May 15 2014, @07:22PM
It appears everyone but the robots has been invited to these discussions. They're not going to like that....
(Score: 1) by O3K on Thursday May 15 2014, @08:16PM
...you know the rest.
(Score: 1) by AndyCanfield on Friday May 16 2014, @03:02AM
I live in Southeast Asia. Land mines left over from previous wars are a continuing problem, killing many people every year. What's the difference? Robots move around; landmines stay in one place. But they are the same in that they are lethal decades after the army has left the area. The UN should include landmines in its discussion.