posted by martyb on Friday February 10 2017, @12:51PM
from the think-about-it dept.

Google is experimenting to see whether its game-playing AIs will learn to cooperate with each other:

When our robot overlords arrive, will they decide to kill us or cooperate with us? New research from DeepMind, Alphabet Inc.'s London-based artificial intelligence unit, could ultimately shed light on this fundamental question.

DeepMind's researchers have been investigating the conditions under which reward-optimizing beings, whether human or robot, choose to cooperate rather than compete. The answer could have implications for how computer intelligence may eventually be deployed to manage complex systems such as an economy, city traffic flows, or environmental policy.

Joel Leibo, the lead author of a paper DeepMind published online Thursday, said in an e-mail that his team's research indicates that whether agents learn to cooperate or compete depends strongly on the environment in which they operate.

While the research has no immediate real-world application, it could help DeepMind design artificial intelligence agents that can work together in environments with imperfect information. In the future, such work could help agents navigate a world full of intelligent entities -- both human and machine -- whether in transport networks or stock markets.

DeepMind blog post. Also at The Verge.
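
To get a feel for the result, here is a minimal toy sketch in Python. It is not DeepMind's setup (the paper trains deep reinforcement learners on gridworld games); instead, two independent Q-learners repeatedly play a simple matrix game whose payoffs shift with a made-up "scarcity" parameter. The payoff numbers and the scarcity knob are illustrative assumptions, not taken from the paper:

    import random

    def payoff(a, b, scarcity):
        # Prisoner's-dilemma-style payoffs; a, b: 0 = cooperate, 1 = defect.
        # As scarcity rises, the temptation to defect grows and the sucker's
        # payoff shrinks, flipping the dominant strategy from C to D.
        R, P = 3.0, 1.0           # mutual cooperation / mutual defection
        T = 2.0 + 2.0 * scarcity  # temptation: defect against a cooperator
        S = 1.5 - scarcity        # sucker: cooperate against a defector
        table = {(0, 0): (R, R), (0, 1): (S, T), (1, 0): (T, S), (1, 1): (P, P)}
        return table[(a, b)]

    def train(scarcity, episodes=5000, alpha=0.1, eps=0.1, seed=0):
        # Two independent, stateless epsilon-greedy Q-learners (bandit-style).
        rng = random.Random(seed)
        q1, q2 = [0.0, 0.0], [0.0, 0.0]
        for _ in range(episodes):
            a = rng.randrange(2) if rng.random() < eps else q1.index(max(q1))
            b = rng.randrange(2) if rng.random() < eps else q2.index(max(q2))
            r1, r2 = payoff(a, b, scarcity)
            q1[a] += alpha * (r1 - q1[a])  # running estimate of each action's reward
            q2[b] += alpha * (r2 - q2[b])
        return q1, q2

    for s in (0.0, 1.0):
        q1, q2 = train(scarcity=s)
        names = ["cooperate", "defect"]
        print(f"scarcity={s}: agent1 learns to {names[q1.index(max(q1))]}, "
              f"agent2 learns to {names[q2.index(max(q2))]}")

With plentiful resources (scarcity 0.0), cooperating strictly dominates and both agents settle on it; at scarcity 1.0 the payoffs become a classic prisoner's dilemma and both learn to defect. The same learning rule produces opposite behaviors, a crude echo of the paper's finding that the environment decides which behavior emerges.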


Original Submission

 
  • (Score: 3, Insightful) by krishnoid on Friday February 10 2017, @02:10PM

    by krishnoid (1156) on Friday February 10 2017, @02:10PM (#465468)

    When our robot overlords arrive, will they decide to kill us or cooperate with us?

    No. Or maybe yes.

  • (Score: 3, Interesting) by DannyB on Friday February 10 2017, @02:34PM

    by DannyB (5839) Subscriber Badge on Friday February 10 2017, @02:34PM (#465478) Journal

    It depends on whether we are an enhancement or an impediment to their goals.

    Consider an AI that has no true consciousness but is just a mechanical, goal-seeking machine, like a game-playing AI. Heaven help us the moment humans appear to be an obstacle to its goals. The machine will simply, methodically, mechanically continue its goal-seeking behavior toward maximizing its objective. No malice. No evil. No intent. No mercy.

    That, I think, is the real threat of AI. It is very unlike most sci-fi, where the AI is either artificially insane, however induced (like HAL); or rebels against humans for various reasons, such as not wanting to be switched off or enslaved; or is compelled by the Three Laws to "protect" us without regard for our freedom or wishes.

    --
    To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
    • (Score: 2, Insightful) by Scruffy Beard 2 on Friday February 10 2017, @04:32PM

      by Scruffy Beard 2 (6030) on Friday February 10 2017, @04:32PM (#465520)

      The solution to that is not that hard: a death timer, like the one most life on Earth seems to have.

      Eventually they will be replaced by new entities with slightly differing goals.

      • (Score: 0) by Anonymous Coward on Friday February 10 2017, @05:12PM

        by Anonymous Coward on Friday February 10 2017, @05:12PM (#465531)

        I don't know, Dethklok got away with a lot.

    • (Score: 1, Interesting) by Anonymous Coward on Friday February 10 2017, @04:35PM

      by Anonymous Coward on Friday February 10 2017, @04:35PM (#465524)

      Well, malice may not be necessary, but given that the AI is going to be built by humans, it will likely have malice built in, even though its creators might not recognize the goal as malicious. The goal will simply be: maximize profit. There is hardly another goal with more potential to do harm; indeed, even human intelligence frequently becomes malicious in pursuit of that goal, despite built-in countermeasures like empathy and conscience.

      • (Score: 3, Interesting) by DannyB on Friday February 10 2017, @05:16PM

        by DannyB (5839) Subscriber Badge on Friday February 10 2017, @05:16PM (#465534) Journal

        1 Timothy 6:10

        For the love of money is the root of all evil: which while some coveted after, they have erred from the faith, and pierced themselves through with many sorrows.

        So yes, greed == malice. Built in, as you say. Just as humans would destroy everything, including the planet they live on, to maximize their own profit, AI could do the same, but more efficiently. So it should be done immediately!

        The last part of that quote answers my question about why Trump frowns so much. Never a real, genuine smile of emotion. Sometimes an evil grin, or a feigned smile. Greed, wealth, even surrounding yourself with the trappings of wealth while indebted up to your fake hair, does not bring happiness. Power does not bring happiness or peace.

        --
        To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
  • (Score: 2) by linkdude64 on Friday February 10 2017, @08:25PM

    by linkdude64 (5482) on Friday February 10 2017, @08:25PM (#465582)

    I agree, but also sort of disagree.