
posted by martyb on Wednesday May 24 2017, @12:11PM   Printer-friendly
from the would-rather-play-go-fish dept.

A year after AlphaGo beat the top Go player Lee Sedol, it is facing the world's current top player Ke Jie in a set of three matches (AlphaGo played five matches against Lee Sedol and won 4-1). AlphaGo has won the first match, so Ke Jie must win the next two matches in order to defeat AlphaGo. Although AlphaGo beat Ke Jie by only half a point in this match, edging out an opponent by a small margin is a legitimate strategy:

Ke Jie tried a strategy he had seen AlphaGo use online before, but it didn't work out for him in the end. Ke should probably have known that AlphaGo had already played such moves against itself during training, which means it should also know how to "defeat itself" in such scenarios.

A more successful strategy against AlphaGo may be one that AlphaGo hasn't seen before. However, considering Google has shown it millions of matches from top players, coming up with such "unseen moves" may be difficult, especially for a human player who can't watch millions of hours of video to train.

However, according to Hassabis, the AlphaGo AI also seems to have "liberated" Go players when thinking about Go strategies, by making them think that no move is impossible. This could lead to Go players trying out more innovative moves in the future, but it remains to be seen if Ke Jie will try that strategy in future matches against AlphaGo.

Although Google hasn't mentioned anything about this yet, it's likely that both AlphaGo's neural networks and the hardware doing all the computations have received significant upgrades since last year. Google recently introduced the Cloud TPU, its second-generation "Tensor Processing Unit," which should not only offer much faster inference performance but also strong training performance. As Google previously used TPUs to power AlphaGo, it may also have used the next-generation versions to power AlphaGo in the match against Ke Jie.

Along with the Ke Jie vs. AlphaGo matches, there will also be a match between five human players and one AlphaGo instance, as well as a "Pair Go" match in which two human players will face each other while assisted by two AlphaGo instances. This is intended to demonstrate how Go could continue to exist even after Go-playing AI can routinely beat human players.

Also at NPR.

Previously:
Google DeepMind's AlphaGo Beats "Go" Champion Using Neural Networks
AlphaGo Cements Dominance Over Humanity, Wins Best-Out-of-5 Against Go Champion
AlphaGo Wins Game 5, Wins Challenge Match 4-1 vs. Lee Sedol
AlphaGo Continues to Crush Human Go Players


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by Immerman (3985) on Wednesday May 24 2017, @10:49PM (#515168)

    If such an attitude existed, it only betrays a poor understanding of AI. What makes Go interesting for AI research is that it can't be "brute forced" the way Chess can. It's not that it's immune to automation - just that it's been immune to the kinds of automation used for Chess. It's interesting precisely because it's an exceedingly simple set of rules that creates a gamespace so complicated that an AI has to "think" to be able to play a decent game, and so it makes an interesting "nursery project" for developing AI techniques.

    Chess used to be considered interesting, a long time ago, because it was a game that required intelligence to play well, and it was presumed that an AI capable of playing a really good game of Chess would be a big step toward a "thinking" computer. But Chess has a very limited number of moves at any given moment - few enough that modern computers can simply simulate all possible future move sequences, or at least all the branches that aren't pruned early by relatively simplistic "this would be a really bad situation - avoid this and don't waste time thinking further ahead" rules. When you're playing against someone who can easily think 20-50 moves ahead, they don't have to be good to beat you - just not atrociously bad. In fact, an average-difficulty chess AI probably looks only a few moves ahead, precisely to avoid being unbeatable by an average human opponent.
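    The "simulate move sequences ahead, score the endpoints with a simple heuristic" approach described above is essentially depth-limited minimax. A minimal sketch, using a tiny hand-built game tree and made-up leaf values for illustration (a real chess engine would generate moves from a board position instead):

```python
def minimax(node, depth, maximizing, children, value):
    """Exhaustively search `depth` plies ahead, as a chess engine does.

    `children` maps a node to its legal successor nodes; `value` is the
    simplistic position-scoring heuristic mentioned above, applied when
    we stop looking further ahead (leaf reached or depth budget spent).
    """
    kids = children.get(node, [])
    if depth == 0 or not kids:
        return value[node]
    scores = (minimax(k, depth - 1, not maximizing, children, value) for k in kids)
    return max(scores) if maximizing else min(scores)

# Toy 2-ply game: root -> {a, b}, then each of those has two replies.
children = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
value = {"a1": 3, "a2": 5, "b1": -1, "b2": 9}

best = minimax("root", 2, True, children, value)  # → 3
```

    The maximizing player picks move "a" here: the opponent would answer "b" with the crushing "b1", so the greedy-looking 9 under "b" is never reachable. The cost of this search is what the next paragraph is about: it grows as (moves per turn) ^ (depth).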

    That technique can't be used for Go, though, since the fan-out is radically greater - there are typically 10-20x as many legal moves available on a Go turn as on a Chess turn. While looking 3 turns ahead in Chess might require considering maybe 20^3 = 8,000 possible board states, doing the same for Go requires closer to 200^3 = 8,000,000 states - doable, but not remotely extendable to looking 20-30 turns ahead. To make things even harder, it's not always obvious who has the advantage in any given state, or by how much. You can't use simple heuristics to weight the board states like you can with Chess. You need something more akin to thinking than the simple catalog-and-ranking used for Chess.
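    A back-of-the-envelope check of the fan-out numbers above: exhaustive lookahead visits roughly b^d states for branching factor b and depth d plies. The factors 20 (Chess) and 200 (Go) are the rough figures from the comment, not exact values:

```python
def nodes_to_search(branching, plies):
    """Approximate game states visited by exhaustive d-ply lookahead."""
    return branching ** plies

chess_3ply = nodes_to_search(20, 3)   # → 8000
go_3ply = nodes_to_search(200, 3)     # → 8000000

# Even a modest 10-ply Go search dwarfs 3-ply chess by ~10^19:
ratio = nodes_to_search(200, 10) // nodes_to_search(20, 3)
```

    The exponent is what kills you: multiplying the branching factor by 10 multiplies the cost of each additional ply by 10 as well, which is why the brute-force approach that conquered Chess never scaled to Go.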
