US has 'moral imperative' to develop AI weapons, says panel:
The US should not agree to ban the use or development of autonomous weapons powered by artificial intelligence (AI) software, a government-appointed panel has said in a draft report for Congress.
The panel, led by former Google chief executive Eric Schmidt, on Tuesday concluded two days of public discussion about how the world’s biggest military power should consider AI for national security and technological advancement.
Its vice-chairman, Robert Work, a former deputy secretary of defense, said autonomous weapons are expected to make fewer mistakes than humans do in battle, leading to fewer casualties and fewer skirmishes caused by target misidentification.
“It is a moral imperative to at least pursue this hypothesis,” he said.
[...] Mary Wareham, coordinator of the eight-year-old Campaign to Stop Killer Robots, said the commission’s “focus on the need to compete with similar investments made by China and Russia … only serves to encourage arms races.”
More Info:
(Score: 2) by Grishnakh on Wednesday January 27 2021, @07:43PM
1. There's a sci-fi short film on YouTube called "Slaughterbots" [youtube.com] that directly addresses this threat. I highly recommend everyone here watch it; it's actually pretty disturbing.
2. What happens when the AI either malfunctions, or becomes intelligent enough that it decides it doesn't need us anymore and turns against humans? There's a Star Trek TNG episode about this called "The Arsenal of Freedom" (which addresses the malfunction scenario), and of course there are the "Terminator" movies (which address the AI-achieving-sentience scenario).