Opposition to the creation of autonomous robot weapons has been the subject of discussion here recently. The New York Times has added another voice to the chorus with this article:
The specter of autonomous weapons may evoke images of killer robots, but most applications are likely to be decidedly more pedestrian. Indeed, while there are certainly risks involved, the potential benefits of artificial intelligence on the battlefield — to soldiers, civilians and global stability — are also significant.
The authors of the letter liken A.I.-based weapons to chemical and biological munitions, space-based nuclear missiles and blinding lasers. But this comparison doesn't stand up under scrutiny. However high-tech those systems are in design, in their application they are "dumb" — and, particularly in the case of chemical and biological weapons, impossible to control once deployed.
A.I.-based weapons, in contrast, offer the possibility of selectively sparing the lives of noncombatants, limiting their use to precise geographical boundaries or times, or ceasing operation upon command (or the lack of a command to continue).
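The "lack of a command to continue" idea is essentially a dead-man switch. Here is a minimal sketch of the kind of constraint logic the article envisions — a geofence, a time window, and an authorization timeout — with all names and parameters purely hypothetical:

```python
# Hypothetical sketch of the engagement constraints described above:
# a geographic bounding box, an authorized time window, and a dead-man
# switch that ceases operation unless "continue" commands keep arriving.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class EngagementRules:
    lat_min: float               # geographic bounding box (illustrative only)
    lat_max: float
    lon_min: float
    lon_max: float
    window_start: datetime       # authorized time window
    window_end: datetime
    continue_timeout: timedelta  # max silence before auto-ceasing


class WeaponController:
    def __init__(self, rules: EngagementRules):
        self.rules = rules
        self.last_continue = datetime.min  # no authorization received yet

    def receive_continue_command(self, now: datetime) -> None:
        """Operator periodically renews authorization."""
        self.last_continue = now

    def may_engage(self, lat: float, lon: float, now: datetime) -> bool:
        r = self.rules
        in_bounds = (r.lat_min <= lat <= r.lat_max
                     and r.lon_min <= lon <= r.lon_max)
        in_window = r.window_start <= now <= r.window_end
        authorized = (now - self.last_continue) <= r.continue_timeout
        # Every constraint must hold; any failure (including lost comms)
        # defaults to "do not engage".
        return in_bounds and in_window and authorized
```

The key design point is that the default state is inaction: lost communications or an expired window fail closed rather than open.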
Personally, I dislike the idea of using AI in weapons to make targeting decisions. I would hate to have to argue with a smart bomb, trying to convince it not to carry out what it thinks is its mission because of an error.
(Score: 2) by Hairyfeet on Tuesday August 18 2015, @09:58AM
Remember the video where they had a killbot with, IIRC, a .50 cal mounted on it, and it freaked out and started spraying toward the audience? I would say THAT is the risk: a human operator may fuck up and make a bad call, but they are not gonna have a circuit fry and suddenly decide to go ED209 [youtube.com] on your ass.
ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
(Score: 1, Insightful) by Anonymous Coward on Tuesday August 18 2015, @10:14AM
That is a one-off risk. Makes for great video, but one-offs aren't a serious problem.
The serious problem here is the removal of human responsibility from the equation. Automating death makes it so much easier to kill people under questionable circumstances. Not by technical malfunction -- on purpose. We put a lot of effort into dehumanizing the enemy so troops can kill without moral qualms. AI weapons don't care about the humanity of the people they kill, not one iota. They don't second-guess. They don't question whether an order is illegal. They don't care if they are defending or attacking the Constitution. They just kill.
(Score: 3, Insightful) by Thexalon on Tuesday August 18 2015, @01:07PM
Something tells me you wouldn't think that if you were Mr. Kinney or his family and friends.
"Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
(Score: 0) by Anonymous Coward on Tuesday August 18 2015, @05:54PM
> Something tells me you wouldn't think that if you were Mr Kinney or his family and friends.
No more so than the family and friends of the 30,000 people who die in auto accidents every year. I don't see you sticking up for any of them.