Google is experimenting to see whether its game-playing AIs will learn to cooperate with each other:
When our robot overlords arrive, will they decide to kill us or cooperate with us? New research from DeepMind, Alphabet Inc.'s London-based artificial intelligence unit, could ultimately shed light on this fundamental question.
They have been investigating the conditions in which reward-optimizing beings, whether human or robot, would choose to cooperate, rather than compete. The answer could have implications for how computer intelligence may eventually be deployed to manage complex systems such as an economy, city traffic flows, or environmental policy.
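The tension the researchers study is the one made precise by matrix games such as the Prisoner's Dilemma, which the DeepMind paper generalizes to sequential settings. A minimal sketch (not DeepMind's code, with purely illustrative payoff numbers) of why a reward-optimizing agent defects even when mutual cooperation pays more:

```python
# Classic Prisoner's Dilemma: each agent picks Cooperate ("C") or
# Defect ("D") and receives a reward that depends on both choices.
# payoff[(my_move, their_move)] -> my reward (hypothetical numbers)
payoff = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect ("sucker's payoff")
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # mutual defection
}

def best_response(their_move):
    """Return the reward-maximizing move against a fixed opponent move."""
    return max(("C", "D"), key=lambda my: payoff[(my, their_move)])

# Defection is the best response to either opponent move, even though
# mutual cooperation (3, 3) beats mutual defection (1, 1).
for their in ("C", "D"):
    print(their, "->", best_response(their))  # prints "C -> D" then "D -> D"
```

In the one-shot matrix game, defection dominates; the paper's point is that in richer, sequential environments (e.g. gathering scarce apples), whether learned policies end up cooperative or aggressive depends on environmental parameters like resource scarcity.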
Joel Leibo, the lead author of a paper DeepMind published online Thursday, said in an e-mail that his team's research indicates that whether agents learn to cooperate or compete depends strongly on the environment in which they operate.
While the research has no immediate real-world application, it could help DeepMind design artificial intelligence agents that can work together in environments with imperfect information. In the future, such work could help these agents navigate a world full of intelligent entities -- both human and machine -- whether in transport networks or stock markets.
DeepMind blog post. Also at The Verge.
(Score: 3, Interesting) by DannyB on Friday February 10 2017, @05:16PM
1 Timothy 6:10 -- "For the love of money is the root of all evil: which while some coveted after, they have erred from the faith, and pierced themselves through with many sorrows." (KJV)
So yes, greed == malice. Built in, as you say. Just as humans would destroy everything, including the planet they live on, to maximize their own profit, AI could do the same, but more efficiently. So it should be done immediately!
Looking at the last part of that quote, it answers my question about why Trump frowns so much. Never a real, genuine smile. Sometimes an evil grin, or a feigned smile. Greed and wealth -- even surrounding yourself with the trappings of wealth while indebted up to your fake hair -- do not bring happiness. Power does not bring happiness or peace.
The lower I set my standards the more accomplishments I have.