
SoylentNews is people

posted by martyb on Wednesday May 24 2017, @12:11PM   Printer-friendly
from the would-rather-play-go-fish dept.

A year after AlphaGo beat the top Go player Lee Sedol, it is facing the world's current top player Ke Jie in a set of three matches (AlphaGo played five matches against Lee Sedol and won 4-1). AlphaGo has won the first match, so Ke Jie must win the next two matches in order to defeat AlphaGo. Although AlphaGo beat Ke Jie by only half a point in this match, edging out an opponent by a small margin is a legitimate strategy:

Ke Jie tried to use a strategy he's seen AlphaGo use online before, but that didn't work out for him in the end. Jie should've probably known that AlphaGo must have already played such moves against itself when training, which should also mean that it should know how to "defeat itself" in such scenarios.

A more successful strategy against AlphaGo may be one that AlphaGo hasn't seen before. However, considering Google has shown it millions of matches from top players, coming up with such "unseen moves" may be difficult, especially for a human player who can't watch millions of hours of video to train.

However, according to Hassabis, the AlphaGo AI also seems to have "liberated" Go players when thinking about Go strategies, by making them think that no move is impossible. This could lead to Go players trying out more innovative moves in the future, but it remains to be seen if Ke Jie will try that strategy in future matches against AlphaGo.

Although Google hasn't mentioned anything about this yet, it's likely that both AlphaGo's neural networks and the hardware doing all the computations have received significant upgrades since last year. Google recently introduced the Cloud TPU, its second-generation "Tensor Processing Unit," which should not only have much faster inference performance, but also high training performance. As Google previously used TPUs to power AlphaGo, it may have also used the next-gen versions to power AlphaGo in the match against Ke Jie.

Along with the Ke Jie vs. AlphaGo matches, there will also be a match between five human players and one AlphaGo instance, as well as a "Pair Go" in which two human players will face each other while assisted by two AlphaGo instances. This is intended to demonstrate how Go could continue to exist even after Go-playing AI can routinely beat human players.

Also at NPR.

Previously:
Google DeepMind's AlphaGo Beats "Go" Champion Using Neural Networks
AlphaGo Cements Dominance Over Humanity, Wins Best-Out-of-5 Against Go Champion
AlphaGo Wins Game 5, Wins Challenge Match 4-1 vs. Lee Sedol
AlphaGo Continues to Crush Human Go Players


Original Submission

Related Stories

Google DeepMind's AlphaGo Beats "Go" Champion Using Neural Networks 23 comments

Researchers from Google subsidiary DeepMind have published an article in Nature detailing AlphaGo, a Go-playing program that achieved a 99.8% win rate (494 of 495 games) against other Go algorithms, and has also defeated European Go champion Fan Hui 5-to-0. The researchers claim that defeating a human professional in full-sized Go was a feat expected to be "at least a decade away" (other statements suggest 5-10 years). The Register details the complexity of the problem:

Go presents a particularly difficult scenario for computers, as the possible number of moves in a given match (opening at around 2.08 x 10^170 and decreasing with successive moves) is so large as to be practically impossible to compute and analyze in a reasonable amount of time.

While previous efforts have shown machines capable of breaking down a Go board and playing competitively, the programs were only able to compete with humans of a moderate skill level and well short of the top meat-based players. To get around this, the DeepMind team said it combined a Monte Carlo Tree Search method with neural network and machine learning techniques to develop a system capable of analyzing the board and learning from top players to better predict and select moves. The result, the researchers said, is a system that can select the best move to make against a human player relying not just on computational muscle, but with patterns learned and selected from a neural network.

"During the match against [European Champion] Fan Hui, AlphaGo evaluated thousands of times fewer positions than Deep Blue did in its chess match against Kasparov; compensating by selecting those positions more intelligently, using the policy network, and evaluating them more precisely, using the value network – an approach that is perhaps closer to how humans play," the researchers said. "Furthermore, while Deep Blue relied on a handcrafted evaluation function, the neural networks of AlphaGo are trained directly from gameplay purely through general-purpose supervised and reinforcement methods."

The AlphaGo program can win against other algorithms even after giving itself a four-move handicap. AlphaGo will play five matches against the top human player Lee Sedol in March.

Google and Facebook teams have been engaged in a rivalry to produce an effective human champion-level Go algorithm/system in recent years. Facebook's CEO Mark Zuckerberg hailed his company's AI Research progress a day before the Google DeepMind announcement, and an arXiv paper from Facebook researchers was updated to reflect their algorithm's third-place win... in a monthly bot tournament.

Mastering the game of Go with deep neural networks and tree search (DOI: 10.1038/nature16961)

Previously: Google's DeepMind AI Project Mimics Human Memory and Programming Skills


Original Submission

AlphaGo Cements Dominance Over Humanity, Wins Best-Out-of-5 Against Go Champion 22 comments

Previously: Google's AlphaGo Takes on South Korean Go Champion; Wins First Match

After a one day break following the second match, AlphaGo has defeated Go champion Lee Se-dol 9d for a third time, winning overall in the best out of 5 competition. From the BBC:

"AlphaGo played consistently from beginning to the end while Lee, as he is only human, showed some mental vulnerability," one of Lee's former coaches, Kwon Kap-Yong, told the AFP news agency.

[...] After losing the second match to Deep Mind, Lee Se-dol said he was "speechless" adding that the AlphaGo machine played a "nearly perfect game". The two experts who provided commentary for the YouTube stream of the third game said that it had been a complicated match to follow. They said that Lee Se-dol had brought his "top game" but that AlphaGo had won "in great style".

Google DeepMind has won $1 million in prize money which will be donated to charities, including UNICEF and Go-related organizations.

GoGameGuru coverage for the second and third matches.

Mastering the game of Go with deep neural networks and tree search (DOI: 10.1038/nature16961)


[Lee Se-dol did triumph over AlphaGo in the 4th match. -Ed.]

Original Submission

AlphaGo Wins Game 5, Wins Challenge Match 4-1 vs. Lee Sedol 19 comments

AlphaGo Wins Game 5

AlphaGo played a much more balanced game of Go in Game 5 than in Game 4. During Game 4, AlphaGo was forced into a situation in which it had no good moves left to play against Lee Sedol, and "went crazy" as a result. In Game 5, AlphaGo initially made puzzling moves in the bottom right, and useless ko threats near the middle of the game, but it played a strong endgame.

gogameguru.com's post-game review of Game 5 gives an indication that AlphaGo still has a ways to go:

AlphaGo hallucinates

AlphaGo continued to develop the center from 40 to 46, and then embarked on a complicated tactic to resurrect its bottom right corner stones, from 48 to 58. Though this sequence appeared to be very sharp, it encountered the crushing resistance of the tombstone squeeze — a powerful tesuji which involves sacrificing two stones, and then one more, in order to collapse the opponent's group upon itself and finally capture it. This was a strange and revealing moment in the game.

Like riding a bike

Even intermediate level Go players would recognize the tombstone squeeze, partly because it appears often in Go problems (these are puzzles which Go players like to solve for fun and improvement). AlphaGo, however, appeared to be seeing it for the first time and working everything out from first principles (though surely it was in its training data). No matter where AlphaGo played in the corner, Lee was always one move ahead, and would win any race to capture. And he barely had to think about it.

[Continues.]

AlphaGo Continues to Crush Human Go Players 36 comments

AlphaGo has won another 50 games against the world's top Go players, this time with little fanfare:

DeepMind's AlphaGo is back, and it's been secretly crushing the world's best Go players over the past couple of weeks. The new version of the AI has played 51 games online and won 50 of them, including a victory against Ke Jie, currently the world's best human Go player. Amusingly, the 51st game wasn't even a loss; it was drawn after the Internet connection dropped out. [...] Following its single game loss [in a match against Lee Sedol], DeepMind has been hard at work on a new and improved version of AlphaGo—and it appears the AI is back bigger, better, and more undefeated than ever. DeepMind's co-founder Demis Hassabis announced on Twitter yesterday that "the new version of AlphaGo" had been playing "some unofficial online games" on the Tygem and FoxGo servers under the names Magister (P) and Master (P). It played 51 games in total against some of the world's best players, including Ke Jie, Gu Li, and Lee Sedol—and didn't lose a single one.

That isn't to say that AlphaGo's unofficial games went unnoticed, though. Over the last week, a number of forum threads have popped up to discuss this mystery debutante who has been thrashing the world's best players. Given its unbeaten record and some very "non-human" moves, most onlookers were certain that Master and Magister were being played by an AI—they just weren't certain if it was AlphaGo, or perhaps another AI out of China or Japan. It is somewhat unclear, but it seems that DeepMind didn't warn the opponents that they were playing against AlphaGo. Perhaps they were told after their games had concluded, though. Ali Jabarin, a professional Go player, apparently bumped into Ke Jie after he'd been beaten by the AI: "He [was] a bit shocked... just repeating 'it's too strong.'"

Will there still be "Go celebrities" once DeepMind has finished mopping the floor with them and turned its attention elsewhere?


Original Submission

  • (Score: 2, Informative) by Anonymous Coward on Wednesday May 24 2017, @01:11PM (3 children)

    by Anonymous Coward on Wednesday May 24 2017, @01:11PM (#514787)

    what the hell?
    it will exist even if computers can beat the crap out of any go champion.
    the point of checkers/chess/go/etc is to develop abstract thinking in humans, and many humans subsequently find them fun to play.
    I know how to solve Rubik's cube, but I still solve it from time to time because I enjoy doing it (don't know why).

    Obviously it would be problematic if 10 years from now people play go for money and they use computers to cheat, but that is unrelated to people who only play because they enjoy it, and they will keep the game alive.

    I can barely wait for my oldest to be old enough to play go.
    He's three now, and is starting to get into domino (we have a set with colourful trucks that's perfect for him).
    We still play it, even though a computer could probably beat the crap out of any of us at domino (I've never actually played before playing with him).

    • (Score: 3, Informative) by hendrikboom on Wednesday May 24 2017, @02:04PM

      by hendrikboom (1125) Subscriber Badge on Wednesday May 24 2017, @02:04PM (#514808) Homepage Journal

      I'm rated at 14 kyu (better than a beginner, but worse than a serious amateur). There are lots and lots of human players that are far better than I am, and I still enjoy playing the game. I don't see that the existence of supercompetent computer players is going to change that.

    • (Score: 2) by Immerman on Wednesday May 24 2017, @10:20PM (1 child)

      by Immerman (3985) on Wednesday May 24 2017, @10:20PM (#515151)

      Agreed. Chess-playing AIs have been able to wipe the floor with Grandmasters for decades now, and yet people still play Chess regularly. Ditto checkers, Risk, Monopoly, and any number of other games that have had well-developed AIs built for them. Heck, the vast majority of people playing any game against an AI are probably playing at a low-to-medium difficulty simply because that's what offers a decent balance of challenge and chance of victory for them - the fact that a super-sophisticated AI somewhere in the world can consistently defeat the best human players is utterly irrelevant to anyone except the AI developers themselves, and perhaps the top-ranking Go players who can now find a decent AI opponent at 2am when they can't sleep.

      The entire concept is only a few split hairs from saying "New Sexbot design manages to consistently outperform best human lovers, humans expected to stop having sex in response."

      • (Score: 0) by Anonymous Coward on Wednesday May 24 2017, @10:23PM

        by Anonymous Coward on Wednesday May 24 2017, @10:23PM (#515155)

        I can seriously fuck up the chess AI on beginner level. It makes me feel so manly and tough. I've also got a collection of female sex bots that TOTALLY worship my cock. It's insane.

  • (Score: 2) by kazzie on Wednesday May 24 2017, @01:29PM (1 child)

    by kazzie (5309) Subscriber Badge on Wednesday May 24 2017, @01:29PM (#514797)

    From reading the summary I thought the computer assistance would be in the style of a helper you could consult at any time, but TFA states it's more of a tag-team system:

    Pair Go - a game of Go between two human players who will be assisted by two different AlphaGo instances. The humans will alternate their moves with their AlphaGo assistants.

    • (Score: 2) by FatPhil on Wednesday May 24 2017, @03:13PM

      by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Wednesday May 24 2017, @03:13PM (#514837) Homepage

      On the assumption that the humans are significantly worse than AlphaGo, this gives an advantage to the human who plays first, as their AlphaGo gets the upper hand first.

      abABabABabABabABabABabAB
        ^   ^   ^   ^   ^   ^   at this point, aA has the advantage, as alphago A has played more of the moves than alphago B.
         ^   ^   ^   ^   ^   ^  at this point, bB has only caught up, bB never gets the chance to pull ahead.

      A better sequence of play would be:

      abABaBAbabABaBAbabABaBAb
        ^       ^       ^       at this point, aA has the advantage as alphago A has played more of the moves than alphago B
         ^       ^       ^      at this point, bB has only caught up
           ^       ^       ^    at this point, bB has the advantage, as alphago B has played more of the moves than alphago A
            ^       ^       ^   at this point, aA has only caught up

      That's not perfect either: aA gets the play-order advantage earlier, and bB only ever plays catch-up on the number of times it's had the play-order advantage, never drawing ahead. The AlphaGo plays should really be ABBABAABBAABABBA... (fractally), which gives you something more like:

      abABaBAb aBAbabAB aBAbabAB abABaBAb aBAbabAB abABaBAb abABaBAb aBAbabAB ....
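      For what it's worth, that "fractal" ordering is the Thue-Morse sequence, and a minimal sketch (assuming the goal is just to balance how often each AlphaGo has made more of the moves at any point) would be:

```python
# The "fractally fair" A/B ordering described above is the Thue-Morse
# sequence: move n goes to AlphaGo A if n has an even number of 1-bits
# in binary, otherwise to AlphaGo B.
def thue_morse(n_moves):
    return ''.join('A' if bin(n).count('1') % 2 == 0 else 'B'
                   for n in range(n_moves))

print(thue_morse(16))  # ABBABAABBAABABBA
```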

      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
  • (Score: 2) by FatPhil on Wednesday May 24 2017, @03:43PM (5 children)

    by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Wednesday May 24 2017, @03:43PM (#514856) Homepage
    On one hand - good! Ever since Deep Blue, the Go world has had a supercilious attitude that somehow Go was the kind of game that could never be dominated by AIs, as the game and move space was too large for any computer that could ever exist to explore to any depth. So my schadenfreude lobe is pulsing "fuck 'em, they ain't so special, and now they know it".

    On the other hand, this machine is a massively parallel and massively serial collection of interconnected smaller parts, where the layers learn from the output of lower layers. And in the human Go world a student learns from more experienced players (which is more akin to back-propagation learning). So why should the AlphaGo Borg get away with only having to play a single solitary human? That's clearly not fair. Why not play it against a team of 100 humans (split into 10 lots of 10, say, to cut down the communication overhead - let each team of 10 debate moves, and then one spokesman brings the most-liked suggestion(s) to a final committee that decides the final move to make)? Having said that, according to TFA, there was a 5:1 team game, which apparently happened in the past, and the result hasn't been reported. But why stop at 5?

    Has there been a post-mortem of the match yet - have experts found mistakes in Jie's play?
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 0) by Anonymous Coward on Wednesday May 24 2017, @05:26PM

      by Anonymous Coward on Wednesday May 24 2017, @05:26PM (#514959)

      In chess, two players consulting together are generally not much better, if at all, than the stronger of the two playing independently.

      I'd imagine this is even more true in Go where short term sharp tactical situations (where 'blindness' can become a factor) don't play such a major role.

    • (Score: 2) by Immerman on Wednesday May 24 2017, @10:49PM

      by Immerman (3985) on Wednesday May 24 2017, @10:49PM (#515168)

      If such an attitude existed, it only betrays a poor understanding of AI. What makes Go interesting for AI research is that it can't be "brute forced" the way Chess can. It's not that it's immune to automation - just that it's been immune to the kinds of automation used for Chess. It's interesting precisely because it's an exceedingly simple set of rules that creates a gamespace so complicated that an AI has to "think" to be able to play a decent game, and so it makes an interesting "nursery project" for developing AI techniques.

      Chess used to be considered interesting, a long time ago, because it was a game that required intelligence to play well, and it was presumed that an AI capable of playing a really good game of Chess would be a big step toward a "thinking" computer. But Chess has a very limited number of moves at any given moment - enough that modern computers can simply simulate out all possible future move sequences, or at least all the branches that don't reach a "this would be a really bad situation - avoid this and don't waste time thinking further ahead" according to relatively simplistic rules. When you're playing against someone who can easily think 20-50 moves ahead, they don't have to be good to beat you - just not atrociously bad. In fact I think an average difficulty chess AI rarely plays more than a few moves out, precisely to avoid being undefeatable by an average human opponent.

      That technique can't be used for Go though, since the fan-out is radically greater - there are typically 10-20x as many legal moves you could make in a Go turn as in a Chess turn: while looking 3 turns ahead in Chess might require considering maybe 20^3 = 8,000 possible game board states, doing the same for Go requires closer to 200^3 = 8,000,000 states - doable, but not remotely extendable to looking 20-30 turns ahead. To make things even harder, it's not always obvious who has the advantage in any given state, or how much of one. You can't use simple heuristics to weight the board states like you can with Chess. You need something more akin to thinking than the simple catalog-and-ranking used for Chess.
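      The fan-out arithmetic above can be sketched directly (the branching factors are the rough figures quoted in the comment, not measured values):

```python
# Rough game-tree sizes: ~20 legal moves per turn in chess, ~200 in Go.
# These are the illustrative numbers from the comment above.
def tree_states(branching, depth):
    """Number of board states reachable by exhaustive lookahead."""
    return branching ** depth

print(tree_states(20, 3))    # 8,000 chess states, 3 plies ahead
print(tree_states(200, 3))   # 8,000,000 Go states, 3 plies ahead
print(tree_states(20, 10))   # chess, 10 plies: ~1e13, already huge
print(tree_states(200, 10))  # Go, 10 plies: ~1e23, hopeless to enumerate
```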

    • (Score: 2) by bziman on Thursday May 25 2017, @02:43AM (2 children)

      by bziman (3577) on Thursday May 25 2017, @02:43AM (#515252)

      this machine is a massively parallel

      Actually, the version of AlphaGo that beat Ke Jie earlier this week was running on a single computer (according to DeepMind [youtube.com]).

      Anyway, the human brain is generally much more efficient than any computer, and is the result of millions of years of evolution... I think all the training that AlphaGo gets is totally fair. Running that network on a single computer is hugely impressive. DeepBlue never accomplished that!

      • (Score: 0) by Anonymous Coward on Thursday May 25 2017, @05:07AM

        by Anonymous Coward on Thursday May 25 2017, @05:07AM (#515295)

        "Anyway, the human brain is generally much more efficient than any computer, and is the result of millions of years of evolution..."

        The human brain is more efficient in some particular areas. However, our brains lose massively in others. Today computers can do more than 6 billion floating point operations per second per watt while complying with IEEE 754-1985. A human brain runs at about 20 watts, and most of them would struggle to reach 1 floating point operation per minute. The human brain is outdone by a factor of over 7 trillion.

        So sure, there are some areas where the human brain is better by a significant amount, perhaps 200x in this case (I would be surprised if deep mind was above 4000 watts when running on the new generation of TPUs), but I really doubt there are any cases left where humans are better by more than 10000x. In the cases where there is a really big difference, it's the human that's totally outmatched. I think that's a sign of things to come.
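        Spelling out that back-of-envelope calculation (all figures are the commenter's estimates, not measurements):

```python
# Efficiency comparison using the commenter's estimates above.
computer_flops_per_watt = 6e9     # ~6 GFLOPS per watt, IEEE 754 arithmetic
brain_watts = 20                  # rough power draw of a human brain
brain_flops_per_sec = 1 / 60      # ~1 floating point operation per minute

brain_flops_per_watt = brain_flops_per_sec / brain_watts
ratio = computer_flops_per_watt / brain_flops_per_watt

print(f"{ratio:.1e}")  # ~7.2e+12, i.e. "over 7 trillion"
```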

      • (Score: 2) by FatPhil on Thursday May 25 2017, @09:21AM

        by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Thursday May 25 2017, @09:21AM (#515365) Homepage
        Erm, the coprocessor it was running on was *massively parallel*, one of the most massively parallel processors I've ever heard of (but I don't religiously follow GPU announcements).
        --
        Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
  • (Score: 2) by art guerrilla on Wednesday May 24 2017, @06:22PM (1 child)

    by art guerrilla (3082) on Wednesday May 24 2017, @06:22PM (#514996)

    ...where's my two hundred dollars ? ? ?

  • (Score: 1) by unhandyandy on Thursday May 25 2017, @02:55AM (2 children)

    by unhandyandy (4405) on Thursday May 25 2017, @02:55AM (#515257)

    Ke Jie and AlphaGo are playing a 3 game match. AlphaGo has won the first game; the outcome of the match is not yet decided.

    Why is it so hard for journalists to get this right? Are they just copying text from the last stupid article?

    • (Score: 2) by FatPhil on Thursday May 25 2017, @10:23AM (1 child)

      by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Thursday May 25 2017, @10:23AM (#515377) Homepage
      Did he win the first board of a 3 board game?
      Did he win the first game of a 3 game match?
      Did he win the first match of a 3 match tourney?

      Arbitrary splitting or grouping is arbitrary, arbitrary naming is arbitrary. Stop pretending that your choice of words is the one true choice of words.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves