posted by martyb on Friday February 10 2017, @12:51PM
from the think-about-it dept.

Google is experimenting to see whether its game-playing AIs will learn to cooperate with each other:

When our robot overlords arrive, will they decide to kill us or cooperate with us? New research from DeepMind, Alphabet Inc.'s London-based artificial intelligence unit, could ultimately shed light on this fundamental question.

They have been investigating the conditions in which reward-optimizing beings, whether human or robot, would choose to cooperate, rather than compete. The answer could have implications for how computer intelligence may eventually be deployed to manage complex systems such as an economy, city traffic flows, or environmental policy.

Joel Leibo, the lead author of a paper DeepMind published online Thursday, said in an e-mail that his team's research indicates that whether agents learn to cooperate or compete depends strongly on the environment in which they operate.

While the research has no immediate real-world application, it could help DeepMind design artificial intelligence agents that can work together in environments with imperfect information. In the future, such work could help these agents navigate a world full of intelligent entities -- both human and machine -- whether in transport networks or stock markets.

DeepMind blog post. Also at The Verge.
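
The core finding -- that the same reward-maximizing learners end up cooperating or competing depending on the payoffs the environment hands them -- is easy to reproduce in miniature. Below is a minimal sketch, not the paper's code and far simpler than its gridworld games: two independent epsilon-greedy Q-learners repeatedly play a two-action matrix game, and swapping the payoff table flips the learned behavior. All names and parameter values here are illustrative assumptions.

    import random

    COOPERATE, DEFECT = 0, 1

    def train(payoffs, episodes=5000, alpha=0.1, epsilon=0.1):
        """Two independent Q-learners repeatedly play a matrix game."""
        q1, q2 = [0.0, 0.0], [0.0, 0.0]  # per-action value estimates
        for _ in range(episodes):
            # Epsilon-greedy action selection for each agent.
            a1 = (random.randrange(2) if random.random() < epsilon
                  else max((COOPERATE, DEFECT), key=lambda a: q1[a]))
            a2 = (random.randrange(2) if random.random() < epsilon
                  else max((COOPERATE, DEFECT), key=lambda a: q2[a]))
            r1, r2 = payoffs[a1, a2]
            # Bandit-style running-average update toward the observed reward.
            q1[a1] += alpha * (r1 - q1[a1])
            q2[a2] += alpha * (r2 - q2[a2])
        return q1, q2

    # Environment A: prisoner's dilemma -- defecting dominates for each agent.
    pd = {(0, 0): (3, 3), (0, 1): (0, 5), (1, 0): (5, 0), (1, 1): (1, 1)}
    # Environment B: coordination game -- mutual cooperation pays best.
    coord = {(0, 0): (5, 5), (0, 1): (0, 0), (1, 0): (0, 0), (1, 1): (1, 1)}

    for name, game in [("prisoner's dilemma", pd), ("coordination game", coord)]:
        q1, _ = train(game)
        learned = "cooperate" if q1[COOPERATE] > q1[DEFECT] else "defect"
        print(f"{name}: agent 1 learns to {learned}")

Running it, the same learning rule settles on mutual defection in the prisoner's dilemma and mutual cooperation in the coordination game; nothing about the agents changes, only the environment's rewards.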


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 1, Touché) by Anonymous Coward on Friday February 10 2017, @02:25PM (#465474)

    This particular AI can't tell whether the other agents are human or AI. At most, it will decide whether to cooperate based on the other agents' actions -- which, under what we humans claim are our moral guidelines, would be perfectly fair.
