
posted by cmn32480 on Tuesday August 18 2015, @06:23AM
from the skynet-is-beginning dept.

Opposition to the creation of autonomous robot weapons has been a subject of discussion here recently. The New York Times has added another voice to the chorus with this article:

The specter of autonomous weapons may evoke images of killer robots, but most applications are likely to be decidedly more pedestrian. Indeed, while there are certainly risks involved, the potential benefits of artificial intelligence on the battlefield — to soldiers, civilians and global stability — are also significant.

The authors of the letter liken A.I.-based weapons to chemical and biological munitions, space-based nuclear missiles and blinding lasers. But this comparison doesn't stand up under scrutiny. However high-tech those systems are in design, in their application they are "dumb" — and, particularly in the case of chemical and biological weapons, impossible to control once deployed.

A.I.-based weapons, in contrast, offer the possibility of selectively sparing the lives of noncombatants, limiting their use to precise geographical boundaries or times, or ceasing operation upon command (or the lack of a command to continue).

Personally, I dislike the idea of using AI in weapons to make targeting decisions. I would hate to have to argue with a smart bomb to convince it that it should not carry out what it thinks is its mission because of an error.


Original Submission

 
  • (Score: -1, Troll) by Anonymous Coward on Tuesday August 18 2015, @04:43PM (#224476)

    Personally, I dislike the idea of using AI in weapons to make targeting decisions.

    Actually, I'm fine with AI weapons making targeting decisions. Humans in the midst of combat are panicky and make rash decisions. Robots with AI capabilities can afford to be more cautious as they don't have to worry about the possibility of death while under attack.

    I would hate to have to argue with a smart bomb to convince it that it should not carry out what it thinks is its mission because of an error.

    While I take your point about being confronted with a confused AI-enabled robot that thinks your "neutralization" is its prime mission, that is like trying to catch the train long after it has left the station. The time to make that argument is long before you are confronted by a bloodthirsty AI-enabled robot. In other words, the best way to avoid becoming collateral damage in a war is to stop that war from happening in the first place. Barring that, if you should find yourself in the middle of a war zone, you should avoid engaging in activities that may cause an AI-enabled robot to confuse you with a legitimate target. Yeah, I know. Easier said than done, but that is the raw truth of the matter.
