Google is drawing up a set of guidelines that will steer its involvement in developing AI tools for the military, according to a report from The New York Times [nytimes.com]. What exactly these guidelines will stipulate isn't clear, but Google says they will include a ban on the use of artificial intelligence in weaponry. The principles are expected to be announced in full in the coming weeks. They are a response to the controversy over the company's decision to develop AI tools for the Pentagon that analyze drone surveillance footage.
[...] But the question facing these employees (and Google itself) is: Where do you draw the line? Does using machine learning to analyze surveillance footage for the military count as "weaponized AI"? Probably not. But what if that analysis informs future decisions about drone strikes? Does it matter then? How would Google even know if this had happened?
Previously: Google vs Maven [soylentnews.org]
Google Employees on Pentagon AI Algorithms: "Google Should Not be in the Business of War" [soylentnews.org]
About a Dozen Google Employees Have Resigned Over Project Maven [soylentnews.org]