Google pledges $25 million toward AI solutions for social issues
[Google] is ramping up an AI Impact Challenge that asks academia, non-profits and other organizations (whether they're AI-savvy or not) to submit proposals using AI to solve "social, humanitarian and environmental" problems. Any proposal that makes the cut will receive funding from a $25 million pool, join an accelerator program and receive consulting as well as custom support with the help of the data science non-profit DataKind. Google will pick the winners in spring 2019 with assistance from a panel of experts.
Google.org announcement. Also at The Verge and Reuters.
Related: Google Will Not Continue Project Maven After Contract Expires in 2019
(Score: 5, Funny) by c0lo on Monday October 29 2018, @11:12PM (3 children)
My proposal: solve the issue of "personal data privacy and the cessation of online tracking" as one of the social problems.
You reckon it will be accepted by Google?
https://www.youtube.com/watch?v=aoFiw2jMy-0
(Score: 2) by legont on Monday October 29 2018, @11:58PM (1 child)
I have a very specific proposal in this space. Imagine an AI that would spend time online instead of me - clicking likes, reading news and ads - just pretending to be me, but with different configurable properties: say, a hard-working professional, or a liberal NY-style socialite. I even have a rather clear plan for how to implement one.
"Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
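The decoy-persona idea above can be sketched in a few lines. This is a minimal illustration, not the commenter's actual plan: the persona names, action types, and weights are all invented for the example, and a real implementation would drive an actual browser rather than return strings.

```python
import random

# Hypothetical persona profiles. The action weights are illustrative
# assumptions, not anything specified in the original comment.
PERSONAS = {
    "hardworking_professional": {"read_news": 0.6, "like_post": 0.1, "view_ad": 0.3},
    "ny_socialite": {"read_news": 0.2, "like_post": 0.6, "view_ad": 0.2},
}

def next_action(persona: str, rng: random.Random) -> str:
    """Pick the decoy's next online action according to the persona's weights."""
    weights = PERSONAS[persona]
    actions = list(weights)
    return rng.choices(actions, weights=[weights[a] for a in actions], k=1)[0]

def browse_session(persona: str, steps: int, seed: int = 0) -> list[str]:
    """Simulate one session of decoy activity for a configurable persona."""
    rng = random.Random(seed)
    return [next_action(persona, rng) for _ in range(steps)]

if __name__ == "__main__":
    # A socialite persona session skews toward liking posts.
    print(browse_session("ny_socialite", 5))
```

Seeding the generator makes a session reproducible, which would matter if you wanted each fake persona to look consistent to trackers over time.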
(Score: 0) by Anonymous Coward on Tuesday October 30 2018, @10:15PM
Done and done. [reddit.com]
(Score: 1, Touché) by Anonymous Coward on Tuesday October 30 2018, @12:41AM
Hey, Google is pledging $25 million, not the $700 billion they are worth today.
(Score: 2) by Bot on Monday October 29 2018, @11:26PM (1 child)
Because the end justifying the means has never caused any problems. Especially when you talk about the good of the people, of the nation, of the international community.
Account abandoned.
(Score: 1) by Ethanol-fueled on Tuesday October 30 2018, @12:56AM
Google has pretty much said, "Friendship with America ended, now China is my new best friend and fuck human rights and all that," and this little benevolent streak from them is a big "fuck you" to the few employees and others who don't yet hate them. It's like a rich fat dude making a starving homeless person dance for his amusement before throwing him a table scrap and telling him to fuck off.
It makes me want to drive up there just to go to the bars Google employees frequent and ask them (before laughing at them) how they're coping with their employer openly treating them as suckers. The double-whammy of Hillary and now their employer openly admitting an evil streak really must be tough for those little snowflakes to reconcile.
Those men need purpose -- and a good place to start would be to pick up arms and assist the military and Border Patrol in stopping the invading horde. The battle against the Orcs will bring strength and glory to those green conscripts.
(Score: 2) by krishnoid on Monday October 29 2018, @11:35PM (1 child)
"All that is necessary for the triumph of evil AI is that good AI does nothing (or doesn't exist)."
"The only thing that stops a guy with a bad AI is a guy with a good AI."
I'll go with it. I for one welcome our new benevolent artificial intelligences.
(Score: 4, Interesting) by takyon on Monday October 29 2018, @11:41PM
* Artilects [wikipedia.org]. Much shorter.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 0) by Anonymous Coward on Tuesday October 30 2018, @01:00AM (3 children)
What if the AI is basically Ronald Reagan or Margaret Thatcher in making policies? What if it was Chairman Mao? Are you going to disagree with the AI that you helped create? If so, what is the point of all that effort just to disagree with policies made by an AI, when you can disagree with the current crop of politicians just fine?
If it was Friend Computer, it'll be treason to disagree.
What I also worry about is the nebulous term "Good", it's like a mirage that appears differently to everyone looking at it, but just what is it? Whose "Good" is the best in the long run? Who gets to measure the "Good" objectively?
(Score: 2) by c0lo on Tuesday October 30 2018, @01:57AM
As in "Don't be evil", right? Right?
What a question!
Google, of course, as a consequence of the Golden Rule.
https://www.youtube.com/watch?v=aoFiw2jMy-0
(Score: 0) by Anonymous Coward on Tuesday October 30 2018, @06:14AM (1 child)
Yeah, that's what I'm thinking. If their AI starts acting like a Conservative, will they pull the plug or start fucking with the algorithm? (Obviously, actual self-reflection if that happens is out of the question.)
(Score: 0) by Anonymous Coward on Tuesday October 30 2018, @07:09AM
You reckon that's the typical reaction of Google to Conservatives? Or is it only to AI that acts like a Conservative?
(Score: 2) by captain_nifty on Tuesday October 30 2018, @04:35PM (1 child)
I cannot think of any social problem that is not quantitatively improved by reducing the number of humans.
Fewer humans = fewer problems, therefore -> kill all humans.
Seriously, for supposedly smart people, they are being incredibly dumb. This is how you build Skynet!
(Score: 0) by Anonymous Coward on Thursday November 01 2018, @10:14AM
We need Skynet to protect our borders! Besides, computers don't make mistakes.
(Score: 0) by Anonymous Coward on Tuesday October 30 2018, @09:57PM
We can fight human trafficking, rape, low wages, drug abuse, housing shortages, identity theft, and gun runners.
What we'll do is use AI to recognize people crossing our borders, then have it alert ICE. We'll also use it to detect people deeper in the USA who may have overstayed a visa or may be about to pop out an anchor baby, and again we alert ICE.
Google will love this.