Google promises ethical principles to guide development of military AI
Google is drawing up a set of guidelines that will steer its involvement in developing AI tools for the military, according to a report from The New York Times. What exactly these guidelines will stipulate isn't clear, but Google says they will include a ban on the use of artificial intelligence in weaponry. The principles are expected to be announced in full in the coming weeks. They are a response to the controversy over the company's decision to develop AI tools for the Pentagon that analyze drone surveillance footage.
[...] But the question facing these employees (and Google itself) is: where do you draw the line? Does using machine learning to analyze surveillance footage for the military count as "weaponized AI"? Probably not. But what if that analysis informs future decisions about drone strikes? Does it matter then? How would Google even know if this had happened?
Also at VentureBeat and Engadget.
Previously: Google vs Maven
Google Employees on Pentagon AI Algorithms: "Google Should Not be in the Business of War"
About a Dozen Google Employees Have Resigned Over Project Maven
(Score: 3, Touché) by c0lo on Thursday May 31 2018, @02:49PM
Fair enough, you rotten to the core decadent capitalist.
Remember, capitalism is on the brink of the precipice. Communism is a step forward.
Uh, oh. Because if these principles aren't adopted, then... what?
You reckon Google won't dare to create war-AI tech without the public display of your moral support (and anonymized financial support via taxes)?
(large grin)