Opposition to the creation of autonomous robot weapons has been the subject of discussion [soylentnews.org] here [soylentnews.org] recently [soylentnews.org]. The New York Times has added another voice to the chorus with this article [nytimes.com]:
The specter of autonomous weapons may evoke images of killer robots, but most applications are likely to be decidedly more pedestrian. Indeed, while there are certainly risks involved, the potential benefits of artificial intelligence on the battlefield — to soldiers, civilians and global stability — are also significant.
The authors of the letter liken A.I.-based weapons to chemical and biological munitions, space-based nuclear missiles and blinding lasers. But this comparison doesn’t stand up under scrutiny. However high-tech those systems are in design, in their application they are “dumb” — and, particularly in the case of chemical and biological weapons, impossible to control once deployed.
A.I.-based weapons, in contrast, offer the possibility of selectively sparing the lives of noncombatants, limiting their use to precise geographical boundaries or times, or ceasing operation upon command (or the lack of a command to continue).
Personally, I dislike the idea of using AI in weapons to make targeting decisions. I would hate to have to argue with a smart bomb [wikipedia.org] to try to convince it that it should not carry out what it thinks is its mission [wikia.com] because of an error.