Google promises ethical principles to guide development of military AI
Google is drawing up a set of guidelines that will steer its involvement in developing AI tools for the military, according to a report from The New York Times. What exactly these guidelines will stipulate isn't clear, but Google says they will include a ban on the use of artificial intelligence in weaponry. The principles are expected to be announced in full in the coming weeks. They are a response to the controversy over the company's decision to develop AI tools for the Pentagon that analyze drone surveillance footage.
[...] But the question facing these employees (and Google itself) is: where do you draw the line? Does using machine learning to analyze surveillance footage for the military count as "weaponized AI"? Probably not. But what if that analysis informs future decisions about drone strikes? Does it matter then? How would Google even know if this had happened?
Also at VentureBeat and Engadget.
Previously: Google vs Maven
Google Employees on Pentagon AI Algorithms: "Google Should Not be in the Business of War"
About a Dozen Google Employees Have Resigned Over Project Maven
(Score: 2) by aristarchus on Thursday May 31 2018, @07:27PM (2 children)
Fine, except there is no legally declared state of war in the world presently, and thus there are no "actual" combatants. There are only victims of extra-judicial killings. Being involved in facilitating that is against the CoC of humans.
(Score: 1, Interesting) by Anonymous Coward on Thursday May 31 2018, @09:47PM (1 child)
He's a carrion-eating bird, for fuck's sake. He WANTS more human deaths; he THRIVES on horror.
(Score: 2) by The Mighty Buzzard on Thursday May 31 2018, @11:18PM
Nah, I prefer catfish.
My rights don't end where your fear begins.