Google promises ethical principles to guide development of military AI
Google is drawing up a set of guidelines that will steer its involvement in developing AI tools for the military, according to a report from The New York Times. What exactly these guidelines will stipulate isn't clear, but Google says they will include a ban on the use of artificial intelligence in weaponry. The principles are expected to be announced in full in the coming weeks. They are a response to the controversy over the company's decision to develop AI tools for the Pentagon that analyze drone surveillance footage.
[...] But the question facing these employees (and Google itself) is: where do you draw the line? Does using machine learning to analyze surveillance footage for the military count as "weaponized AI"? Probably not. But what if that analysis informs future decisions about drone strikes? Does it matter then? How would Google even know if this had happened?
Also at VentureBeat and Engadget.
Previously: Google vs Maven
Google Employees on Pentagon AI Algorithms: "Google Should Not be in the Business of War"
About a Dozen Google Employees Have Resigned Over Project Maven
(Score: 1, Interesting) by Anonymous Coward on Thursday May 31 2018, @02:17PM
You may feel all fuzzy about this kind of stuff, but I think it is disgusting, and I wouldn't want to work on it. And demand for my skills is high, so I won't starve because of my choice. Many Googlers are in the same position.