Google is experimenting to see whether its game-playing AIs will learn to cooperate with each other [bloomberg.com]:
When our robot overlords arrive, will they decide to kill us or cooperate with us? New research from DeepMind [deepmind.com], Alphabet Inc.'s London-based artificial intelligence unit, could ultimately shed light on this fundamental question.
They have been investigating the conditions under which reward-optimizing beings, whether human or robot, would choose to cooperate rather than compete. The answer could have implications for how computer intelligence may eventually be deployed to manage complex systems such as an economy, city traffic flows, or environmental policy.
Joel Leibo, the lead author of a paper DeepMind published online Thursday, said in an e-mail that his team's research indicates that whether agents learn to cooperate or compete depends strongly on the environment in which they operate.
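The intuition behind that finding can be sketched as a toy matrix game. The code below is purely illustrative and is not DeepMind's environment or code: it compresses an episode of an apple-gathering scenario into a one-shot game where each agent either gathers peacefully or "tags" its rival to monopolise the apples. All names and numbers (`CAPACITY`, `TAG_COST`, the `payoffs` function, the abundance levels) are hypothetical, chosen only to show how a single environmental parameter can flip the learned incentive from cooperation to aggression.

```python
# Hypothetical sketch, not DeepMind's code: a Gathering-style episode
# compressed into a 2x2 matrix game. 'C' = gather peacefully,
# 'D' = tag the rival to collect alone. All constants are illustrative.

CAPACITY = 10.0   # max apples one agent can collect in an episode
TAG_COST = 0.5    # gathering time forgone while aiming the tagging beam

def payoffs(a_action: str, b_action: str, apples: float) -> tuple[float, float]:
    """Return (payoff_a, payoff_b) for one episode with `apples` available."""
    share = min(CAPACITY, apples / 2)   # each agent's take when splitting
    solo = min(CAPACITY, apples)        # the take when collecting alone
    if a_action == 'C' and b_action == 'C':
        return share, share
    if a_action == 'D' and b_action == 'C':
        return solo - TAG_COST, 0.5 * share   # victim sidelined half the time
    if a_action == 'C' and b_action == 'D':
        return 0.5 * share, solo - TAG_COST
    # Both tag: both pay the cost and lose gathering time.
    return 0.5 * share - TAG_COST, 0.5 * share - TAG_COST

for apples in (4.0, 12.0, 24.0):
    cc, _ = payoffs('C', 'C', apples)
    dc, _ = payoffs('D', 'C', apples)
    regime = "defect pays" if dc > cc else "cooperation pays"
    print(f"apples={apples:5.1f}: C-vs-C {cc:5.2f}, D-vs-C {dc:5.2f} => {regime}")
```

With these toy numbers, tagging beats peaceful gathering when apples are scarce but becomes a waste of time once apples exceed what either agent could collect anyway, which mirrors the qualitative pattern DeepMind reported: the same reward-optimizing learners turn aggressive or cooperative depending on the environment's resource structure.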
While the research has no immediate real-world application, it could help DeepMind design artificial intelligence agents that can work together in environments with imperfect information. In the future, such work could help these agents navigate a world full of intelligent entities, both human and machine, whether in transport networks or stock markets.
DeepMind blog post [deepmind.com]. Also at The Verge [theverge.com].