Google promises ethical principles to guide development of military AI
Google is drawing up a set of guidelines that will steer its involvement in developing AI tools for the military, according to a report from The New York Times. What exactly these guidelines will stipulate isn't clear, but Google says they will include a ban on the use of artificial intelligence in weaponry. The principles are expected to be announced in full in the coming weeks. They are a response to the controversy over the company's decision to develop AI tools for the Pentagon that analyze drone surveillance footage.
[...] But the question facing these employees (and Google itself) is: where do you draw the line? Does using machine learning to analyze surveillance footage for the military count as "weaponized AI"? Probably not. But what if that analysis informs future decisions about drone strikes? Does it matter then? How would Google even know if this had happened?
Also at VentureBeat and Engadget.
Previously: Google vs Maven
Google Employees on Pentagon AI Algorithms: "Google Should Not be in the Business of War"
About a Dozen Google Employees Have Resigned Over Project Maven
(Score: 2) by takyon on Thursday May 31 2018, @08:38AM (1 child)
Drones and mechs, delivering swift autonomous death to our poorer enemies! Get with the program already!
(Score: 4, Insightful) by c0lo on Thursday May 31 2018, @08:51AM
That's fun and dandy, I suppose.
Until the Army starts selling its surplus to law enforcement (all legal, of course [nytimes.com]), at which point US citizens will find themselves looking at the wrong end of the tech.
But yeah, the MIC must never stop showing profits, ethics be damned!