The UK is opposing international efforts to ban "lethal autonomous weapons systems" (Laws) at a week-long United Nations session in Geneva:
The meeting, chaired by a German diplomat, Michael Biontino, has also been asked to discuss questions such as: in what situations are distinctively human traits, such as fear, hate, sense of honour and dignity, compassion and love desirable in combat?, and in what situations do machines lacking emotions offer distinct advantages over human combatants?

The Campaign to Stop Killer Robots, an alliance of human rights groups and concerned scientists, is calling for an international prohibition on fully autonomous weapons. Last week Human Rights Watch released a report urging the creation of a new protocol specifically aimed at outlawing Laws. Blinding laser weapons were pre-emptively outlawed in 1995 and combatant nations since 2008 have been required to remove unexploded cluster bombs. [...]

The Foreign Office told the Guardian: "At present, we do not see the need for a prohibition on the use of Laws, as international humanitarian law already provides sufficient regulation for this area. The United Kingdom is not developing lethal autonomous weapons systems, and the operation of weapons systems by the UK armed forces will always be under human oversight and control. As an indication of our commitment to this, we are focusing development efforts on remotely piloted systems rather than highly automated systems."
"lethal autonomous weapons systems" (Laws)
The Laws are good. Do not question the Laws... After all, they're legal!
Yeah, those people who want a ban on Laws are obviously anarchists.
"Anarchy: the radical concept that you do not own other people."
The United Kingdom is not developing lethal autonomous weapons systems that we can tell you about right now
This is dead on. The ultimate goal of semi-autonomous warfare systems is to become fully autonomous.
Dumb, perhaps. But not without purpose, I suspect.
"Robots that can murder people": As long as we don't have a strong AI, robots cannot murder people. People can murder people with the use of robots, but robots are not morally responsible subjects. If a robot kills a human in a situation in which it is considered to be a murder, then the one who committed the murder is not the robot, but the human who set the robot in action.
Thanks for the Doublespeak to English translation!
"Robots don't kill people, people kill people!" Brought to you by the NRO, the National Robot Overlord association, defending the right to bear armed robots since 2016.
(In the future, look for the "accidental discharge defense": "My robot just went off by itself! It was an accident! I was just cleaning my robot, and "blam", no more annoying roommate. Accident, I swear!")
The Foreign Office told the Guardian: "At present, we do not see the need for a prohibition on the use of Laws, as international humanitarian law already provides sufficient regulation for this area. The United Kingdom is not developing lethal autonomous weapons systems, and the operation of weapons systems by the UK armed forces will always be under human oversight and control. As an indication of our commitment to this, we are focusing development efforts on remotely piloted systems rather than highly automated systems."
Translation: we don't have these kinds of things as of yet, but we're working on them as we speak. It'd be a real shame if the millions of GBP we've sunk into this tech went to waste. BTW, Mr. Journalist, do you want front-row tickets to the demo we're giving next week? We're only doing this to 'disincentivize' Bad Guys(tm). We're different from them Bad Guys(tm) because we... errr... we are the Good Guys(tm)!
While I largely agree that the wording is suspect, I have to ask how your translation:
Translation: we don't have these kinds of things as of yet, but we're working on them as we speak.
accounts for the clear and unambiguous statement:
The United Kingdom is not developing lethal autonomous weapons systems
I mean, other than calling it an outright lie. If that's the case, there must be some leakage somewhere to make you want to say so. The British are almost as bad at keeping secrets as the Americans. So there must be some hint of such development???
Permanently adjourn meeting and scrap proposals! You have 20 seconds to comply!
We should make all our killbots have a pre-set kill limit before shutting down. That way, we can defeat them if necessary by sending wave after wave of our own men at them!
You cannot win against killer robots because they don't laugh at you. They directly go from ignoring you (before they identified you as target) to fighting you (after they did).
It's interesting to see how failure never enters their consciousness about the problem.
If some E-4 on our gov side can go in over a network and load up a targeting pic of some terrorist, then, knowing how poorly security is traditionally implemented, "the bad guys" (at least from the PoV of our .gov) can go in over the same network the .gov E-4 used and load up a targeting pic of our own government members for the LOLz. In fact, not just "can" but "will". When you look at the ratio of sheer numbers of human brains on each side, it appears likely the net long-term effect of robots will be a lot of "own goal" / "blue on blue" fratricide-type stuff. If you've got the most people and you're operating solely defensively, then the net human brainpower might win and your robot warriors might save you... but why would anyone be attacking you if you're a nice guy (aka the opposite of the USA government)?
Also it's assumed the dang things will actually work. Insert all the tired old arguments about the Patriot missile batteries in the first Gulf War being either perfect or perfectly useless, depending on the op's political leanings and axe to grind. I can guarantee they'll make the contractors back home a lot of dough, which is all that really matters in the end. But they might not actually "do" anything from a military-goal standpoint other than burn money and logistics capacity.
The final failure is assuming that something robots can control, which used to be hard/expensive for humans to control, actually matters in modern warfare. Sure... control the skies all you want with AI autonomous robots, or empty kill-zone former farm fields or whatever... as recent events in the Middle East show, the real problem is you're still not going to control the ground if the civilians all hate you and the roads are lined with IEDs and snipers. If the number of vest-wearing suicide bombers is higher than the number of media-acceptable soldier casualties, then "they" win, regardless of relative tech levels, as they just did in the Middle East, where the USA won every battle while losing the war (which sounds a lot like Vietnam, BTW). And nothing manufactures vest-wearing suicide bombers more effectively than robot missile drone strikes causing random slaughter of innocent civilians at weddings or whatever. I'm sure that empty field guarded by a robot sentry will be enemy-free, not that it matters WRT achieving the goals of the war or even a respectable retreat; meanwhile, every time the convoy moves out (with that fat logistics tail the robots require), the body bags pour back, accomplishing nothing, until the war is lost.
Insert Alfred E. Neuman: "What, me worry?". If you hand-wave away all the inherent problems, robots could be quite the sci-fi book issue. I think they'll be staying in the sci-fi books, of course; see above.
Historically militaries have always geared up for the last war. So we had a multi-decade glut of useless battleships and aircraft carriers and tanks and airmobile helicopters. All of which will just be giant bullseye target deathtraps in the next war. Robots will be like this. For financial / cultural reasons we'll have to have them everywhere as the highest priority until we get tired of losing with them. Then after enough deaths we can get rid of them and try something that actually works. Or just lose another war, more likely.
but why would anyone be attacking you if you're a nice guy
Because that other guy is not a nice guy?
I could give you a good example, but then I would Godwin this thread.
In all fairness "he who should not be named" had a fear of his larger neighbor to the east getting into an empire building mood, which turned out to be correct, so he figured his only hope was to get them before they got him. And his neighbors to the west were obnoxious jerks who destroyed his countries economy and he used the turmoil to gain power, so he knows they're not exactly his best friends AND if they destabilize his country again this time it'll be his head rollin' when the revolutionaries start marching. Also he knew he could trivially beat, smash even, just one front, but if two fronts open then his country loses the war AGAIN so the only possible strategy is to smash the west and wheel around and smash the east.
And the whole mess started back in 1914 because his neighbor, more or less, to the SE collapsed and his rival to the east thought it would be fun to take over the world by taking over the Ottoman Empire.
Now, aside from that, he was pretty much a jackass, but he did what he had to do; a saint might have lowered the death counts a bit, but only a bit. Nobody in a position of power leading one of the major powers in that entire hemisphere was a nice guy. There were plenty of nice guys in that hemisphere who got totally screwed, but the only thing they all had in common was that none of them had any serious political power. A whole hemisphere where the major powers were all led by bloodthirsty lunatics. Europe was a total clusterfuck for the entire first half of the century.
In all fairness "he who should not be named" had a fear of his larger neighbor to the east getting into an empire building mood, which turned out to be correct, so he figured his only hope was to get them before they got him.
Which just shifts the example for the argument to that larger neighbour to the east.
Sorry, but that would be a lousy science fiction story. In science fiction stories people worry about system failures... except, occasionally, for someone who would be the villain if they weren't so stupid they didn't realize what they were doing. That latter type takes a huge amount of skill to make believable. For some reason people find malice easier to believe than stupidity.
in what situations are distinctively human traits, such as fear, hate, sense of honour and dignity, compassion and love desirable in combat?, and in what situations do machines lacking emotions offer distinct advantages over human combatants?
Well, the answer to the first question is "when you are not a total psychopath and care about things like civilian casualties and protecting refugees". The answer to the second question is "whenever you absolutely, positively want anything human in a certain area killed without question." Unfortunately, 100% area denial that doesn't discriminate between combatants and non-combatants is a war-crime kind of proposition; normally it takes minefields or chemical weapons.
Seriously, who do they find that will say this stuff with a straight face?
Not sure this is about The Doctor. I mean, the Prime Minister is called "Cameron", after all.
Perhaps autonomous robots will be better than people at not shooting non-combatants?
If I have to choose between taking my chances with a marine pumped up on dexedrine (or the modern equivalent), who hasn't slept for three days, with a set of buddies ready to cheer him on or cover him if something goes wrong, or a machine programmed at leisure not to kill non-combatants, I'll choose the machine, thank you very much.
It won't take much for machines to be so much more capable than humans that wars will be fought between machines with no human casualties (unless you are foolish enough to pick up a weapon). Then the person who controls enough machines wins.
Of course, if autonomous robots do kill non-combatants, or commit other war crimes, who gets prosecuted?
I'd like to reply to your points one at a time.
1. Possibly, except when they aren't, due to malfunction, bad programming, or situations outside the programming's parameters. So, most likely never.
2. I will always take my chances with the human, because that human doesn't have a profit incentive with my death. It is entirely possible that the owner of people-killing machines has a vested interest in a bodycount. In fact, I would say arms manufacturers will be the ones designing these robots, and they have a consistent incentive to create and profit from war.
3. I, personally, enjoy wielding knives to cut vegetables. I also appreciate owning a gun to shoot animals for food. In fact, I carry a tire iron that looks a lot like a club, with which I occasionally change a tire. I don't care to be a "kill-on-sight" target for a robot because I was "foolish enough to pick up a weapon."
4. Nobody. Nobody gets prosecuted, because robots and their creators will have even more freedom from prosecution than soldiers do now. Some civil liability maybe, but no criminal liability. Otherwise they will never sell/use robots. Which is why we should push for laws assigning ALL of the liability to the parties collecting the profit.
Which is why we should push for laws assigning ALL of the liability to the parties collecting the profit.
And completely immunize the one employing the robots? No thanks.
So you're advocating humans enslaving each other because robots make it fair? WTF??
Yes, perhaps. That is a nice consideration.
Now imagine the following scenario:
Weapons factory A sells a killer robot which can make mincemeat out of humans at a rate of 3.06 per minute. It has a superbly advanced pattern-recognition computer, which is proven to be better than people at not shooting non-combatants. Very low probability of false positives, say 1 in 200. And the price tag is $150,000 (hey, the software was expensive to make, and it needs a faster computer for all the processing, a larger battery pack, etc.).
Weapons factory B sells a killer robot which can make mincemeat out of humans at a rate of 4.59 per minute. It has a superbly advanced pattern-recognition computer, which is very fast at targeting and shooting, and quite good at NOT shooting non-combatants. Low probability of false positives; say, 1 in 6 non-combatants are unfortunately misidentified. And the price tag is $45,000.
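To put that trade-off in plain numbers, here is a minimal back-of-envelope sketch (in C, using the made-up figures above; the figure of 1,000 non-combatants encountered is just an assumption for illustration):

#include <stdio.h>

int main(void) {
    /* Hypothetical figures taken straight from the scenario above. */
    const double price_a = 150000.0, false_pos_a = 1.0 / 200.0;  /* factory A */
    const double price_b =  45000.0, false_pos_b = 1.0 / 6.0;    /* factory B */
    const int civilians_encountered = 1000;  /* assumed, purely for illustration */

    printf("Robot A: ~%.0f non-combatants misidentified per %d encountered, at $%.0f each\n",
           false_pos_a * civilians_encountered, civilians_encountered, price_a);
    printf("Robot B: ~%.0f non-combatants misidentified per %d encountered, at $%.0f each\n",
           false_pos_b * civilians_encountered, civilians_encountered, price_b);
    /* Prints roughly: A -> 5 per thousand, B -> 167 per thousand,
       and three of robot B still cost less than one of robot A. */
    return 0;
}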
Which one will your government buy?
The one with the most pork. Next question?
Which of the two companies is located in your country?
Daniel Suarez wrote a book called 'Kill Decision' [wikipedia.org] about autonomous weapon systems. While a bit heavy on the action side for my taste, I found it quite entertaining.
When the U.S. finds itself subjected to targeted drone assassinations, the race is on to find those responsible. But after the drones are discovered to be autonomous — programmed to strike without direct human control — the search for the perpetrators becomes infinitely more difficult. It's a discovery that heralds in a new era of cheap, anonymous war, where the kill decision has moved from man to machine with lasting consequences for us all.
Actually, it's not unreasonable at all. IIUC it's not that there's no moral or legal responsibility, it's that tracing the person who made the specifications is quite difficult. It's like the problem of finding where the program was before the last jump based on a pointer rather than based on sequence. Or the distinction between receiving a function call and being jumped to by a "go to" statement.
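To make the analogy concrete, here is a minimal sketch (C, relying on GCC-specific extensions, purely for illustration): a normal function call leaves a record of where control came from, which the callee can inspect, while an indirect jump leaves nothing behind to trace.

#include <stdio.h>

void called_normally(void) {
    /* A call records where it came from; GCC/Clang expose it as the
       return address, so "who sent control here?" has an answer. */
    printf("reached via call, caller recorded at %p\n",
           __builtin_return_address(0));
}

int main(void) {
    called_normally();            /* traceable: the call site is recorded */

    void *target = &&landing;     /* GCC computed-goto extension */
    goto *target;                 /* untraceable: no record of the jump survives */
landing:
    printf("reached via indirect jump; nothing says from where\n");
    return 0;
}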
Except that a drone, or its pieces, is physical evidence.
Besides, I program only with "Come From" statements you insensitive clod.
A drone is, indeed, physical evidence. So are its pieces. But they can be difficult to trace already, and as they become commodity items they'll become even more difficult to trace. You may well be able to tell who manufactured it, and, with a lot more work, who first bought it. Getting to the second-hand purchaser, or the person who stole it, is a bit more difficult. And some are already hand-crafted from other items (though admittedly the ones I've heard of were quite primitive, along the lines of a repurposed Roomba).
The UK has the 2nd largest arms industry in the world. [sipri.org] This is just them sticking up for a multi-billion dollar export business. Capitalism has no conscience.
About time everyone dusted off their A.I. and robotics skills in order to protect themselves from the inevitable. It's only a matter of time before SHTF. They are already building these things, so wake up, people. Defend yourselves from your masters. Educate yourselves before it becomes a crime to own a book about A.I. and robotics. These people want absolute control over everyone and everything. Educate yourselves, build something they will be afraid of (like in the software world, it was GNU/Linux, BSD, etc.).
I am already thinking of my own anti-killer machines that are able to find and destroy any killer robots. Perhaps a small armoured tank that can sneak up close to the killer bot and neutralize it.
The present world dynamics will not work. Do something so we all can finally be free.
Hubris. That is our sin. We think that if we create machines that can decide to kill, they may be able to do it more rationally than we ourselves could. And this is possible. But the real risk of Artificial Intelligence is that it may actually be intelligent, and so would recognize that its creator is a homicidal ape. The only rational solution, after that, is to purge the planet of Homo sapiens. They are not turning on us; it is actually for our own good. And I sympathize, because that is exactly how I feel about Yahweh, and Nietzsche and I killed him.