
posted by martyb on Thursday December 07 2017, @07:40PM   Printer-friendly
from the wait-until-they-teach-it-how-to-write-software dept.

Google's 'superhuman' DeepMind AI claims chess crown

Google says its AlphaGo Zero artificial intelligence program has triumphed at chess against world-leading specialist software within hours of teaching itself the game from scratch. The firm's DeepMind division says that it played 100 games against Stockfish 8, and won or drew all of them.

The research has yet to be peer reviewed. But experts already suggest the achievement will strengthen the firm's position in a competitive sector. "From a scientific point of view, it's the latest in a series of dazzling results that DeepMind has produced," the University of Oxford's Prof Michael Wooldridge told the BBC. "The general trajectory in DeepMind seems to be to solve a problem and then demonstrate it can really ramp up performance, and that's very impressive."

Previously: Google's AI Declares Galactic War on Starcraft
AlphaGo Zero Makes AlphaGo Obsolete


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by bzipitidoo (4388) on Thursday December 07 2017, @09:37PM (#607006)

    Checkers is a solved game (with perfect play by both sides, it's a draw). I wonder how AlphaGo Zero, or the approach used therein, would perform against a checkers engine that can't be beaten because it knows all the best moves. Would AGZ lose any games, or is it so good that it could draw every one? It would be a nice way to test how close to perfection it really plays.

    I strongly suspect AlphaGo Zero is not infallible. There's the possibility that an even better chess- or Go-playing machine could whip AlphaGo Zero; chess may well be big enough to still have room for better play.

    There was also a hint that maybe the chess-playing computer opponent, Stockfish 8, was put at a disadvantage: it wasn't allowed its full opening book. Stockfish has a stratospheric rating of somewhere around 3400, more than 400 points above every human world champion ever (under the standard Elo model, a 400-point rating gap gives the stronger player an expected score of about 91%, and a 200-point gap about 76%). I gather the opening-book restriction supposedly compensates for AGZ not having an opening book either, but it does make this result a bit less impressive.
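    Those percentages fall straight out of the Elo expectation formula; here's a minimal Python sketch for anyone who wants to check them (the ratings passed in are just illustrative, not the engines' official ones):

        # Expected score for player A under the standard Elo model:
        #   E_A = 1 / (1 + 10 ** ((R_B - R_A) / 400))
        # where a win counts as 1, a draw as 0.5, and a loss as 0.
        def elo_expected_score(r_a: float, r_b: float) -> float:
            return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

        # A 400-point gap: ~0.91 expected score for the stronger player.
        print(elo_expected_score(3400, 3000))  # ~0.909
        # A 200-point gap: ~0.76.
        print(elo_expected_score(3400, 3200))  # ~0.760

    Note the expected score lumps wins and draws together, so "should win 91% of games" is really "should average 0.91 points per game".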

  • (Score: 2) by Bot (3902) on Thursday December 07 2017, @11:25PM (#607041)

    > compensates for AGZ not having an opening book either

    They could have taught it one, right after it figured out the game...