from the the-skynet-is-falling dept.
Over 1,000 high-profile artificial intelligence experts and leading researchers have signed an open letter warning of a "military artificial intelligence arms race" and calling for a ban on "offensive autonomous weapons".
The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by Tesla's Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and professor Stephen Hawking along with 1,000 AI and robotics researchers.
The letter states: "AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms."
So, spell it out for me, Einstein, are we looking at a Terminator future or a Matrix future?
While the latest open letter is concerned specifically with allowing lethal machines to kill without human intervention, several big names in the tech world have offered words of caution on the subject of machine intelligence in recent times. Earlier this year Microsoft's Bill Gates said he was "concerned about super intelligence," while last May physicist Stephen Hawking voiced questions over whether artificial intelligence could be controlled in the long term. Several weeks ago a video surfaced of a drone that appeared to have been equipped to carry and fire a handgun.
takyon: Counterpoint - Musk, Hawking, Woz: Ban KILLER ROBOTS before WE ALL DIE
The specter of autonomous weapons may evoke images of killer robots, but most applications are likely to be decidedly more pedestrian. Indeed, while there are certainly risks involved, the potential benefits of artificial intelligence on the battlefield — to soldiers, civilians and global stability — are also significant.
The authors of the letter liken A.I.-based weapons to chemical and biological munitions, space-based nuclear missiles and blinding lasers. But this comparison doesn't stand up under scrutiny. However high-tech those systems are in design, in their application they are "dumb" — and, particularly in the case of chemical and biological weapons, impossible to control once deployed.
A.I.-based weapons, in contrast, offer the possibility of selectively sparing the lives of noncombatants, limiting their use to precise geographical boundaries or times, or ceasing operation upon command (or the lack of a command to continue).
Personally, I dislike the idea of using AI in weapons to make targeting decisions. I would hate to have to argue with a smart bomb to try to convince it that it should not carry out what it thinks is its mission because of an error.
The White House will hold four public discussions to evaluate the potential benefits and risks of artificial intelligence:
The Obama administration says it wants everyone to take a closer look at artificial intelligence with a series of public discussions.
The workshops will examine whether AI will suck jobs out of the economy or add to it, how such systems can be controlled legally and technically, and whether smarter computers can be used for social good. Deputy Chief Technology Officer Ed Felten announced on Tuesday that the White House will be creating an artificial intelligence and machine learning subcommittee at the National Science and Technology Council (NSTC) and setting up a series of four events designed to consider both artificial intelligence and machine learning.
[...] The special events will be held between May 24 and July 7 and will take place in Seattle, Pittsburgh, Washington DC, and New York.