Google pledges $25 million toward AI solutions for social issues
[Google] is ramping up an AI Impact Challenge that asks academia, non-profits and other organizations (whether they're AI-savvy or not) to submit proposals using AI to solve "social, humanitarian and environmental" problems. Any proposal that makes the cut will receive funding from a $25 million pool, join an accelerator program, and get consulting and custom support from the data science non-profit DataKind. Google will pick the winners in spring 2019 with assistance from a panel of experts.
Google.org announcement. Also at The Verge and Reuters.
Related: Google Will Not Continue Project Maven After Contract Expires in 2019
Related Stories
We recently covered the fact that some Google employees had resigned over the company's involvement in an AI-related weapons project called Maven. Many thought that the resignations, while a noble gesture, would amount to nothing - but we were wrong...
Leaked Emails Show Google Expected Lucrative Military Drone AI Work To Grow Exponentially
Google has sought to quash the internal dissent in conversations with employees. Diane Greene, the chief executive of Google’s cloud business unit, speaking at a company town hall meeting following the revelations, claimed that the contract was “only” for $9 million, according to the New York Times, a relatively minor project for such a large company.
Internal company emails obtained by The Intercept tell a different story. The September emails show that Google’s business development arm expected the military drone artificial intelligence revenue to ramp up from an initial $15 million to an eventual $250 million per year.
In fact, one month after news of the contract broke, the Pentagon allocated an additional $100 million to Project Maven.
The internal Google email chain also notes that several big tech players competed to win the Project Maven contract. Other tech firms such as Amazon were in the running, one Google executive involved in negotiations wrote. (Amazon did not respond to a request for comment.) Rather than serving solely as a minor experiment for the military, Google executives on the thread stated that Project Maven was “directly related” to a major cloud computing contract worth billions of dollars that other Silicon Valley firms are competing to win.
However, Google has had a major rethink.
(Score: 5, Funny) by c0lo on Monday October 29 2018, @11:12PM (3 children)
My proposal: solve the issue of "personal data privacy and the cessation of online tracking" as one of the social problems.
You reckon it will be accepted by Google?
https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
(Score: 2) by legont on Monday October 29 2018, @11:58PM (1 child)
I have a very specific proposal in this space. Imagine an AI that would spend time online instead of me - clicking likes, reading news and ads - just pretending to be me, but with different configurable properties: say, hard-working professional, or liberal NY-style socialite. I even have a rather clear plan for how to implement one.
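The gist of such a "decoy persona" could be sketched like this. Everything here is hypothetical (the class names, the action types, the persona configs are illustrative only); a real implementation would drive an actual browser rather than just emit event names:

```python
import random
from dataclasses import dataclass, field

# Hypothetical sketch: a configurable persona that generates a fake
# stream of browsing activity. A real version would drive a headless
# browser; here we only produce (action, topic) events.
@dataclass
class Persona:
    name: str
    interests: list = field(default_factory=list)
    like_rate: float = 0.3  # probability the persona "likes" what it reads
    rng: random.Random = field(default_factory=lambda: random.Random(42))

    def browse(self, n_actions: int = 10) -> list:
        """Produce a plausible-looking stream of (action, topic) events."""
        events = []
        for _ in range(n_actions):
            topic = self.rng.choice(self.interests)
            action = "like" if self.rng.random() < self.like_rate else "read"
            events.append((action, topic))
        return events

# Two of the configurable personalities mentioned above
professional = Persona("hard-working professional",
                       interests=["finance", "tech news", "productivity"])
socialite = Persona("liberal NY-style socialite",
                    interests=["galleries", "restaurants", "politics"],
                    like_rate=0.7)

for event in professional.browse(5):
    print(event)
```

The point is that each persona is just a parameter bundle, so the same engine can pollute the tracking profile with whichever fake identity you configure.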
"Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
(Score: 0) by Anonymous Coward on Tuesday October 30 2018, @10:15PM
Done and done. [reddit.com]
(Score: 1, Touché) by Anonymous Coward on Tuesday October 30 2018, @12:41AM
Hey, Google is pledging $25 million, not $700 billion that they are worth today.
(Score: 2) by Bot on Monday October 29 2018, @11:26PM (1 child)
Because the end justifying the means has never caused any problems. Especially when you talk about the good of the people, of the nation, of the international community.
Account abandoned.
(Score: 1) by Ethanol-fueled on Tuesday October 30 2018, @12:56AM
Google has pretty much said, "Friendship with America ended, now China is my new best friend and fuck human rights and all that," and this little benevolent streak from them is a big "fuck you" to the few employees and others who don't yet hate them. It's like a rich fat dude making a starving homeless person dance for his amusement before throwing him a table scrap and telling him to fuck off.
It makes me want to drive up there just to go to the bars Google employees frequent and ask them (before laughing at them) how they're coping with their employer openly treating them as suckers. The double-whammy of Hillary and now their employer openly admitting an evil streak really must be tough for those little snowflakes to reconcile.
Those men need purpose -- and a good place to start would be to pick up arms and assist the military and Border Patrol in stopping the invading horde. The battle against the Orcs will bring strength and glory to those green conscripts.
(Score: 2) by krishnoid on Monday October 29 2018, @11:35PM (1 child)
"All that is necessary for the triumph of evil AI is that good AI does nothing (or doesn't exist)."
"The only thing that stops a guy with a bad AI is a guy with a good AI."
I'll go with it. I for one welcome our new benevolent artificial intelligences.
(Score: 4, Interesting) by takyon on Monday October 29 2018, @11:41PM
* Artilects [wikipedia.org]. Much shorter.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 0) by Anonymous Coward on Tuesday October 30 2018, @01:00AM (3 children)
What if the AI is basically Ronald Reagan or Margaret Thatcher in making policies? What if it was Chairman Mao? Are you going to disagree with the AI that you helped create? If so, what is the point of all that effort just to disagree with policies made by AI when you can simply disagree with any current crop of politicians just fine?
If it was Friend Computer, it'll be treason to disagree.
What I also worry about is the nebulous term "Good", it's like a mirage that appears differently to everyone looking at it, but just what is it? Whose "Good" is the best in the long run? Who gets to measure the "Good" objectively?
(Score: 2) by c0lo on Tuesday October 30 2018, @01:57AM
As in "Don't be evil", right? Right?
What a question!
Google, of course, as a consequence of the Golden Rule.
https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
(Score: 0) by Anonymous Coward on Tuesday October 30 2018, @06:14AM (1 child)
Yeah, that's what I'm thinking. If their AI starts acting like a Conservative, will they pull the plug or start fucking with the algorithm? (Obviously, actual self-reflection if that happens is out of the question.)
(Score: 0) by Anonymous Coward on Tuesday October 30 2018, @07:09AM
You reckon that's the typical reaction of Google to Conservatives? Or is it only to Conservative-acting AI?
(Score: 2) by captain_nifty on Tuesday October 30 2018, @04:35PM (1 child)
I cannot think of any social problem that is not quantitatively improved by reducing the number of humans.
Fewer humans = fewer problems, therefore -> kill all humans.
Seriously, for supposedly smart people they are being incredibly dumb. This is how you build Skynet!
(Score: 0) by Anonymous Coward on Thursday November 01 2018, @10:14AM
We need skynet to protect our borders! Besides computers don't make mistakes.
(Score: 0) by Anonymous Coward on Tuesday October 30 2018, @09:57PM
We can fight human trafficking, rape, low wages, drug abuse, housing shortages, identity theft, and gun runners.
What we'll do is use AI to recognize people crossing our borders, then have it alert ICE. We'll also use it to detect people deeper in the USA who may have overstayed a visa or may be about to pop out an anchor baby, and again we alert ICE.
Google will love this.