Opposition to the creation of autonomous robot weapons has been the subject of discussion here recently. The New York Times has added another voice to the chorus with this article:
The specter of autonomous weapons may evoke images of killer robots, but most applications are likely to be decidedly more pedestrian. Indeed, while there are certainly risks involved, the potential benefits of artificial intelligence on the battlefield — to soldiers, civilians and global stability — are also significant.
The authors of the letter liken A.I.-based weapons to chemical and biological munitions, space-based nuclear missiles and blinding lasers. But this comparison doesn't stand up under scrutiny. However high-tech those systems are in design, in their application they are "dumb" — and, particularly in the case of chemical and biological weapons, impossible to control once deployed.
A.I.-based weapons, in contrast, offer the possibility of selectively sparing the lives of noncombatants, limiting their use to precise geographical boundaries or times, or ceasing operation upon command (or the lack of a command to continue).
Personally, I dislike the idea of using AI in weapons to make targeting decisions. I would hate to have to argue with a smart bomb to try to convince it that it should not carry out what it thinks is is mission because of an error.
(Score: 3, Insightful) by Anonymous Coward on Tuesday August 18 2015, @06:36AM
The previous generation of autonomous weapons has already been outlawed. They were called land mines.
So, the complete opposite of drones? Sorry, I don't believe this will happen unless the military gets absolutely no say in this.
(Score: 1) by Ethanol-fueled on Tuesday August 18 2015, @06:44AM
Yeah but when you have stuff like that they can hack it and turn it against you.
Didn't you watch Solid Snake 4 on Youtube? Good movie.
(Score: 0, Touché) by Anonymous Coward on Tuesday August 18 2015, @06:54AM
Dude! Nobody fucking remembers what a landmine is. War is Good. War kills Bad Guys. If you diss us for killing Bad Guys, that makes you a Bad Guy. Greets from Prez Bama! You gonna get fucked up, Bad Guy!
(Score: 2) by davester666 on Tuesday August 18 2015, @08:42AM
There is a new generation of automated weapons in use: automated [as in, not directly human controlled] machine guns on the North/South Korean border [on the South side].
And given that there is no effective way to tell at any distance the difference between a 'combatant' and a 'noncombatant', I guess by 'possibility' they mean, "we might not kill everyone in sight".
And given that there are a significant number of wealthy sociopaths involved in running countries, independent military contractors, and weapons development, besides the sociopaths in virtually all militaries, there is approximately zero chance that, if the first A.I. isn't directly created by one of them, it won't be stolen or handed to them for immediate incorporation into as many weapons systems as possible.
(Score: 1) by Francis on Tuesday August 18 2015, @01:56PM
Pretty much. And just look at the mess that mines made; those are still killing and maiming people every year, even though they've been banned by most countries.
An automated gun that's mounted is a bit better, in that you know where it is, but allowing them to move about and make their own decisions isn't something a decent person would be OK with. The applications they're designed for are mostly things we shouldn't be encouraging in the first place. It's a way for rich countries to get away with things that poor countries can't afford to get away with. There will be an increase in lives lost, but because they're lives on the other side, that's OK, because they clearly don't deserve to live.
We should be moving into an era where fewer people are dying in these small dick contests, but we keep creating bigger and better ways of blowing each other up without considering why. We wouldn't have ever needed a lot of this crap if people hadn't created the previous generation. Muskets would have done just fine if nobody had bothered to invent shells.
(Score: 0) by Anonymous Coward on Wednesday August 19 2015, @06:52PM
We wouldn't have ever needed a lot of this crap if people hadn't created the previous generation.
Yes, none of us would be here.
(Score: 0) by Anonymous Coward on Thursday August 20 2015, @02:09AM
"The previous generation of autonomous weapons have already been outlawed. They were called land mines."
Might want to tell that to the people in Iraq and Afghanistan. Oh sure, they call them by a different name (IED, in this case), but at the end of the day they are using landmines to great effect.
Chemical and biological weapons were easy to outlaw. They were temperamental at the best of times and as dangerous to the person using them as to the person they were being used against.
Nukes are avoided because using them escalates a conflict to a point that politicians don't want to go to.
Landmines are cheap and highly effective. Yeah, you can "outlaw" them, but the moment someone feels the need for them, they're gonna have them rolling off the assembly line in two weeks at most.
The ban is bad in that once someone feels the need for them and gives the finger to whatever treaty, they are probably gonna go for cheaper persistent mines rather than mines that deactivate. Instead, it should have "banned" persistent mines and encouraged mines that deactivate; the best method I have seen is to simply require a battery to detonate and let battery life be the limiting factor.
Such a blanket feel-good measure is certainly going to be broken sometime in the future, and we will be back to the same problem we had before: persistent landmines.
(Score: 5, Insightful) by VanderDecken on Tuesday August 18 2015, @06:57AM
As both a soldier and a computing scientist, autonomous weapon systems scare the shit out of me. There are so many ways this can go wrong it's not even funny. If lives are to be taken, let's make it a conscious decision of someone willing to live with the consequences and not the "oops" of someone trying to meet a deadline.
The two most common elements in the universe are hydrogen and stupidity.
(Score: 2, Interesting) by Ethanol-fueled on Tuesday August 18 2015, @07:26AM
Both can and do happen at once.
Reminds me of the story my Gramma used to tell me -- she lit her hair on fire in a horrible cigarette accident back when it was fashionable to smoke. She was lucky enough to live in base housing on a B-29 base, and since the B-29 was new at the time, the base had very experienced and skilled burn-treatment staff.
(Score: 1, Informative) by Anonymous Coward on Tuesday August 18 2015, @08:36AM
> If lives are to be taken, let's make it a conscious decision of someone willing to live with the consequences
The more we take the human out of the loop, the lower the threshold for killing becomes. Full automation just makes it easier to shrug off moral responsibility for slaughtering people.
(Score: 2, Insightful) by NezSez on Tuesday August 18 2015, @05:31PM
Stanislav Petrov, hero of humanity and reason:
https://en.wikipedia.org/wiki/Stanislav_Petrov [wikipedia.org]
One of several incidents in which a single human, at great risk to his own life and career and to those of his family and friends, intervened in a military process and thereby prevented a thermonuclear war between the USA and the Soviet Union, on September 26, 1983. He is still alive, FTR.
Despite the threat of reprisals and punishment for disobedience, and the fear of starting a war, he prevented a retaliatory nuclear missile launch that would have been based on erroneous reports from automated systems, which had detected five missile launches from US territory aimed at Soviet territory. It was later determined that the systems had mistaken sunlight reflecting off clouds for missiles (an automated process), and military protocol left the decision to actually fire with the operators on duty, subject to standard military procedures (automation of a different kind). He doubted the detection systems' findings (for various reasons) and kept his military compatriots from automatically responding with a launch of their own, as military procedure/protocol dictated.
My point is that any type of "automation" may be dangerous or beneficial from any given perspective, regardless of whether it involves machines or humans (which can be viewed as biological machines). Automation is reductionism (reducing processing requirements, energy wasted by movement, etc.), and so it tries to discard unneeded information. The flexibility and robustness of the "control system" (in the mathematical control-theory sense, see https://en.wikipedia.org/wiki/Control_theory [wikipedia.org], where "feedback" is important), whether machine or biological, is the critical factor, along with how that system handles information discarded during simplification, since that discarding alters the feedback loops that govern the system's future behavior.
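To make the feedback point concrete, here's a toy sketch in Python (mine, purely illustrative; every name and number in it is invented, and it is not from any real system), showing a proportional feedback controller holding a setpoint despite a constant disturbance the designer "discarded" from the model:

    # Toy illustration (all names/numbers invented): proportional feedback
    # control holding a setpoint despite an unmodeled constant disturbance.
    def simulate(kp, steps=50, setpoint=1.0, disturbance=-0.3):
        state = 0.0
        for _ in range(steps):
            error = setpoint - state                 # feedback: measure, don't assume
            control = kp * error                     # proportional control law
            state += 0.5 * (control + disturbance)   # plant, plus the discarded term
        return state

    print(simulate(kp=0.0))   # open loop: drifts off with the disturbance
    print(simulate(kp=2.0))   # closed loop: settles near (not exactly at) the setpoint

Open loop, the discarded disturbance dominates and the state drifts away; close the loop and the system self-corrects, though proportional-only control still settles with a small steady-state offset. That residual is exactly the kind of discarded information that alters feedback loops and comes back to bite automated systems.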
A good, comprehensive, robust, precise, quick, and accurate control system (say, Iain M. Banks's A.I. "Minds", or Data from Star Trek) can be as good as or better than a bad biological one (think Inspector Clouseau); OTOH, an inaccurate, slow, non-comprehensive one (i.e., one without good "coverage" of the significant variables/terms in the problem domain) sucks regardless of the medium it is implemented in (machined parts vs. organic biological parts)... think of any overly large or complex bureaucratic organization, which has errors in both its biological and machine elements.
Automation for some problems is just plain tough no matter how the solution is implemented, and there will always be such troublesome problems because the core issues are at a fundamental level of the universe(s) as we currently know it.
Interesting questions:
Would you elect IBM's Watson, after it had been trained on all recorded data of human history and politics, over Donald Trump or Hillary Clinton (or any other human political aspirant) as POTUS?
Who would be best at integrating feedback in future decisions?
Could Trump trump a positronic brain?
No Sig to see here, move along, move along...
(Score: 2) by Anne Nonymous on Tuesday August 18 2015, @02:27PM
> If lives are to be taken, let's make it a conscious decision of someone willing to live with the consequences
Or we could just blame the IT department.
(Score: 5, Insightful) by pkrasimirov on Tuesday August 18 2015, @07:15AM
> Robot Weapons: What’s the Harm?
People. People are harmed.
(Score: 0, Touché) by Anonymous Coward on Tuesday August 18 2015, @07:34AM
So no property damage then? Sounds like a win.
People are replaceable. Have you seen how often they fuck?
(Score: 2) by FatPhil on Tuesday August 18 2015, @07:43AM
If these things are going to malfunction, they'll malfunction most often where they are most often found, which is while the "good" guys are working on them. Self-solving problem.
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
(Score: 4, Insightful) by pkrasimirov on Tuesday August 18 2015, @08:26AM
Street gangs won't pull the trigger on a baby. There is less empathy in Africa, but still, there is always some.
Drones are killing machines. Literally. Targets are discriminated, true, but everybody is a target, just with different priority. If you are in sight you are on the list.
(Score: 0) by Anonymous Coward on Tuesday August 18 2015, @08:29AM
> Street gangs, what's the harm? People, people are harmed. Yup, but most of the people who are harmed are in street gangs. Self-solving problem.
Which is why there are no street gangs any more, the problem totally solved itself.
> If these things are going to malfunction,
No serious objection is about malfunctions. The problem with these systems is that they will function exactly as designed.
(Score: 3, Funny) by c0lo on Tuesday August 18 2015, @08:00AM
Just because you spell ISIS in a peculiar way, it won't stop the smart bomb from carrying out what it thinks is its mission.
(And it will stop neither the NSA nor ASIO from looking closer at this message in search of a hidden meaning; chillax guys, it's really just a pun-y way of signaling a typo.)
https://www.youtube.com/@ProfSteveKeen https://soylentnews.org/~MichaelDavidCrawford
(Score: 3, Touché) by Bill Evans on Tuesday August 18 2015, @09:34AM
Remember this?
(Score: 2) by Hairyfeet on Tuesday August 18 2015, @09:58AM
Where they had a killbot with, IIRC, a .50 cal mounted on the thing, and it freaked out and started spraying towards the audience? I would say THAT is the risk: a human operator may fuck up and make a bad call, but they are not gonna have a circuit fry and suddenly decide to go ED209 [youtube.com] on your ass.
ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
(Score: 1, Insightful) by Anonymous Coward on Tuesday August 18 2015, @10:14AM
That is a one-off risk. Makes for great video, but one-offs aren't a serious problem.
The serious problem here is the removal of human responsibility from the equation. Automating death makes it so much easier to kill people under questionable circumstances. Not by technical malfunction -- on purpose. We put a lot of effort into dehumanizing the enemy so troops can more easily kill the enemy without any moral qualms. AI weapons don't care about the humanity of the people they kill, not one iota. They don't second guess. They don't question whether an order is illegal. They don't care if they are defending or attacking the constitution. They just kill.
(Score: 3, Insightful) by Thexalon on Tuesday August 18 2015, @01:07PM
Something tells me you wouldn't think that if you were Mr Kinney or his family and friends.
"Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
(Score: 0) by Anonymous Coward on Tuesday August 18 2015, @05:54PM
> Something tells me you wouldn't think that if you were Mr Kinney or his family and friends.
No more so than the family and friends of the 30,000 people who die in auto accidents every year. I don't see you sticking up for any of them.
(Score: 2) by chewbacon on Tuesday August 18 2015, @10:56AM
With all the articles we read here or at the green site about programming bugs and shitty programmers, do you really have to ask? When I think about robots discriminating between bad guys and innocent bystanders, I can't help but think of Hotmail's spam filter from 10 years ago, which was seemingly coded backwards: I spent a lot of time marking spam as spam and fishing legit mail out of my spam box. Yeah, that's the harm.
(Score: 0) by Anonymous Coward on Tuesday August 18 2015, @02:02PM
Having read the article....
They make a *very* good point. Autonomous weapons will come from pedestrian sorts of things; it is not a big stretch to turn a self-driving car into a self-driving bomb. The first set of autonomous weapons like this will come out of things everyone wants. The next leap after that will be the AI deciding it needs to use a weapon all by itself. But "gen 1" will be human-initiated.
(Score: 2) by Freeman on Tuesday August 18 2015, @05:21PM
What's more likely? Using a remote-control car ($500 tops) to deliver a bomb, or using a self-driving car ($10,000 minimum, possibly more) to deliver a bomb?
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 2) by tangomargarine on Tuesday August 18 2015, @02:21PM
My opposition to drones is more philosophical than pragmatic.
Using drones instead of F-22s or other manned aircraft is presumably a hell of a lot cheaper, and you don't put the pilot in danger--which is a hidden problem. One of the biggest reasons people oppose specific wars is when they're being asked to put their lives on the line to fight somebody they don't really care about.
Wasn't that Vietnam, basically? On one side you had the determined Communist north, with its entrenched guerrilla networks and fighters hiding in the jungle. On the other side, you had the southern government, which was riddled with corruption and not really able to defend itself militarily. And all this halfway around the world, in former French Indochina. I'm not aware that the country itself had any real significance per se; it was just the "domino" doctrine, under which the U.S. didn't want any one country to fall to communism. So as the war dragged on, the protests got worse and worse.
If we no longer send our young men (and women) into combat, they and their families lose part of their reason to oppose the war. The military hardware companies, on the other hand, are more than happy to crank out as many drones as the generals could want.
And finally, in the same vein it becomes easier to start new wars because "we're not risking anybody's lives"...well, other than the people who are being bombed by a neverending stream of robots.
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 0) by Anonymous Coward on Tuesday August 18 2015, @05:50PM
> If we no longer send our young men (and women) into combat, they and their families lose part of the reason to oppose the war.
That is why they eliminated the draft. It used to be that every mother in the country had something to lose. Now it is only a small fraction of mothers, and that fraction is primarily made up of those with the least political agency in our society. Drones are just a small step compared to that change.
(Score: -1, Troll) by Anonymous Coward on Tuesday August 18 2015, @04:43PM
Actually, I'm fine with AI weapons making targeting decisions. Humans in the midst of combat are panicky and make rash decisions. Robots with AI capabilities can afford to be more cautious as they don't have to worry about the possibility of death while under attack.
While I take your point about being confronted with a confused AI-enabled robot that thinks your "neutralization" is its prime mission, that is like trying to catch the train long after it has left the station. The time to make that argument is long before you are confronted by a bloodthirsty AI-enabled robot. In other words, the best way to avoid becoming collateral damage in a war is to stop that war from happening in the first place. Barring that, if you should find yourself in the middle of a war zone, you should avoid engaging in activities which may cause an AI-enabled robot to confuse you with a legitimate target. Yeah, I know. Easier said than done, but that is the raw truth of the matter.
(Score: 0) by Anonymous Coward on Tuesday August 18 2015, @11:13PM
This has been written about in The Butlerian Jihad, I believe...
Personally, I'm all for automated armies.
Think about it.
Where there's automated war, there's automated revolution as well ^_^
These people, as a culture, truly want to get killed by their creations... _Why_ should anyone stop them?
(Score: 1) by evilcam on Wednesday August 19 2015, @03:23AM
"We've had to endure much, you and I, but soon there will be order again, a new age. Aquinas spoke of the mythical City on the Hill. Soon that city will be a reality, and we will be crowned its kings. Or better than kings. Gods."
- Deus Ex (2000)