AlphaGo Cements Dominance Over Humanity, Wins Best-Out-of-5 Against Go Champion

posted by martyb on Sunday March 13 2016, @02:39PM   Printer-friendly
from the will-they-call-the-next-release-BetaGo? dept.

Previously: Google's AlphaGo Takes on South Korean Go Champion; Wins First Match

After a one-day break following the second match, AlphaGo has defeated Go champion Lee Se-dol 9d for a third time, winning the best-of-five competition overall. From the BBC:

"AlphaGo played consistently from beginning to the end while Lee, as he is only human, showed some mental vulnerability," one of Lee's former coaches, Kwon Kap-Yong, told the AFP news agency.

[...] After losing the second match to DeepMind, Lee Se-dol said he was "speechless", adding that the AlphaGo machine played a "nearly perfect game". The two experts who provided commentary for the YouTube stream of the third game said that it had been a complicated match to follow. They said that Lee Se-dol had brought his "top game" but that AlphaGo had won "in great style".

Google DeepMind has won $1 million in prize money, which will be donated to charities, including UNICEF and Go-related organizations.

GoGameGuru coverage of the second and third matches.

Mastering the game of Go with deep neural networks and tree search (DOI: 10.1038/nature16961)


[Lee Se-dol did triumph over AlphaGo in the 4th match. -Ed.]

Original Submission

Related Stories

Google's AlphaGo Takes on South Korean Go Champion; Wins First Match 24 comments

Go is an ancient board game with simple rules but vastly more game play possibilities than chess.

Google's AlphaGo takes on South Korean Go champion

One of many links, http://www.zdnet.com/article/alphago-match-a-win-for-humanity-eric-schmidt/

Google bought DeepMind, the company behind the AlphaGo software, in 2014. After beating a European Go champion, they are now taking on the Asian champion -- predicted to be a much harder challenge.

Google's AlphaGo prevails in the first game against the Go champion

Lee Sedol conceded defeat in the first game of Go against Google's AlphaGo. AlphaGo is Google's AI engine trained to play Go, an ancient board game. Lee Sedol, of South Korea, is considered the best Go player in the world. This was the first in the series of five games to be played between them.

The New York Times reports:

"I am very surprised because I have never thought I would lose," Mr. Lee said at a news conference. "I didn't know that AlphaGo would play such a perfect Go."

...To researchers who have been using games as platforms for testing artificial intelligence, Go has remained the great challenge since the I.B.M.-developed supercomputer Deep Blue beat the world chess champion Garry Kasparov in 1997.

Mr. Lee is "one of the world's most accomplished players with 18 international titles under his belt". Prior to the first match he predicted he would win 5-0 or 4-1, but after losing the first game he rated his chances of winning at 50-50.

AlphaGo Wins Game 5, Wins Challenge Match 4-1 vs. Lee Sedol 19 comments

AlphaGo Wins Game 5

AlphaGo played a much more balanced game of Go in Game 5 than in Game 4. During Game 4, AlphaGo was forced into a situation in which it had no good moves left to play against Lee Sedol, and "went crazy" as a result. In Game 5, AlphaGo initially made puzzling moves in the bottom right, and useless ko threats near the middle of the game, but it played a strong endgame.

GoGameGuru's post-game review of Game 5 gives an indication that AlphaGo still has a way to go:

AlphaGo hallucinates

AlphaGo continued to develop the center from 40 to 46, and then embarked on a complicated tactic to resurrect its bottom right corner stones, from 48 to 58. Though this sequence appeared to be very sharp, it encountered the crushing resistance of the tombstone squeeze — a powerful tesuji which involves sacrificing two stones, and then one more, in order to collapse the opponent's group upon itself and finally capture it. This was a strange and revealing moment in the game.

Like riding a bike

Even intermediate level Go players would recognize the tombstone squeeze, partly because it appears often in Go problems (these are puzzles which Go players like to solve for fun and improvement). AlphaGo, however, appeared to be seeing it for the first time and working everything out from first principles (though surely it was in its training data). No matter where AlphaGo played in the corner, Lee was always one move ahead, and would win any race to capture. And he barely had to think about it.

[Continues.]

AlphaGo Beats Ke Jie in First Match of Three 17 comments

A year after AlphaGo beat the top Go player Lee Sedol, it is facing the world's current top player Ke Jie in a set of three matches (AlphaGo played five matches against Lee Sedol and won 4-1). AlphaGo has won the first match, so Ke Jie must win the next two matches in order to defeat AlphaGo. Although AlphaGo beat Ke Jie by only half a point in this match, edging out an opponent by a small margin is a legitimate strategy:

Ke Jie tried to use a strategy he's seen AlphaGo use online before, but that didn't work out for him in the end. Ke Jie should've probably known that AlphaGo must have already played such moves against itself when training, which should also mean that it knows how to "defeat itself" in such scenarios.

A more successful strategy against AlphaGo may be one that AlphaGo hasn't seen before. However, considering Google has shown it millions of matches from top players, coming up with such "unseen moves" may be difficult, especially for a human player who can't watch millions of hours of video to train.

However, according to Hassabis, the AlphaGo AI also seems to have "liberated" Go players when thinking about Go strategies, by making them think that no move is impossible. This could lead to Go players trying out more innovative moves in the future, but it remains to be seen if Ke Jie will try that strategy in future matches against AlphaGo.

Although Google hasn't mentioned anything about this yet, it's likely that both AlphaGo's neural networks and the hardware doing all the computations have received significant upgrades since last year. Google recently introduced the Cloud TPU, its second-generation "Tensor Processing Unit," which should offer not only much faster inference performance but also much higher training performance. As Google previously used TPUs to power AlphaGo, it may have also used the next-gen versions to power AlphaGo in the match against Ke Jie.

Along with the Ke Jie vs. AlphaGo matches, there will also be a match between five human players and one AlphaGo instance, as well as a "Pair Go" match in which two human players will face each other while assisted by two AlphaGo instances. This is intended to demonstrate how Go could continue to exist even after Go-playing AI can routinely beat human players.
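
The half-point margin mentioned above is a side effect of fractional komi. A minimal sketch of the arithmetic, assuming Chinese area scoring with the 7.5-point komi used in these matches and ignoring rare neutral points in seki:

def margin(black_area, komi=7.5, board_points=19 * 19):
    """Black's winning margin under area scoring; negative means White wins."""
    white_area = board_points - black_area
    return black_area - (white_area + komi)

# With 361 board points and a fractional komi, every result is a half-integer,
# so the narrowest possible win is exactly half a point:
print(margin(184))   # -0.5 -> White wins by half a point
print(margin(185))   #  1.5 -> Black wins by one and a half points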

Also at NPR.

Previously:
Google DeepMind's AlphaGo Beats "Go" Champion Using Neural Networks
AlphaGo Cements Dominance Over Humanity, Wins Best-Out-of-5 Against Go Champion
AlphaGo Wins Game 5, Wins Challenge Match 4-1 vs. Lee Sedol
AlphaGo Continues to Crush Human Go Players


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 3, Informative) by Anonymous Coward on Sunday March 13 2016, @03:20PM

    by Anonymous Coward on Sunday March 13 2016, @03:20PM (#317651)

    First match:
    Lee overconfident, plays light attacking game. AlphaGo nails him, and Lee concedes.

    Second match:
    Lee switches to considerate, careful, conservative plays. AlphaGo still nails him.

    Third match:
    Lee goes all-out full-on attack. AlphaGo nails him.

    Fourth match:
    Lee takes and reinforces the corners, AlphaGo plays big game in the center. Lee throws noble invasion into the center. AlphaGo's response bizarre. AlphaGo concedes.

    Lee notes that AlphaGo seems to prefer playing the white stones - black plays first but is assigned a territorial handicap (komi). AlphaGo will play white in the final match. Maybe we humans know a thing or two about learning and adapting?

    • (Score: 3, Informative) by takyon on Sunday March 13 2016, @04:29PM

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Sunday March 13 2016, @04:29PM (#317662) Journal

      The human victory is important. Sedol has wanted to test AlphaGo to the breaking point, and here it is. AlphaGo does pattern recognition, and unexpected patterns still throw it off, even after millions of matches played and analyzed. The machine also apparently does worse when it has the first move.

      DeepMind may be to games as IBM's Watson is to text search. Formidable, but still just an advanced pattern recognition system. From the Computerworld article you can see they want to compete with Watson in the AI health care space.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 1, Informative) by Anonymous Coward on Sunday March 13 2016, @06:21PM

        by Anonymous Coward on Sunday March 13 2016, @06:21PM (#317699)

        +1 to parent, and to add to it: all these claims of AI/Artificial Intelligence and Computer Learning ('deep learning' if you want to use buzzwords) still describe nothing more than pattern recognition. It's got nothing to do with intelligence at all, just fancy pattern-matching.
        We're way, way, way off from Computer Creativity. That will be the thing to watch for. And when we achieve it, there will be no point in calling it 'artificial' because it will be real. And we'll also be turned into nice bars of soylent at the earliest convenience...

        • (Score: 4, Insightful) by darkfeline on Sunday March 13 2016, @07:45PM

          by darkfeline (1030) on Sunday March 13 2016, @07:45PM (#317727) Homepage

          Keep in mind, though, that human "intelligence" is also "just fancy pattern-matching". You may think you have logic and creativity and such, but you're also just a bag of neurons trained on patterns (for the five or so years after your birth when you're completely helpless, and continuously throughout the decades of your life).

          --
          Join the SDF Public Access UNIX System today!
        • (Score: 2) by FakeBeldin on Monday March 14 2016, @01:33PM

          by FakeBeldin (3360) on Monday March 14 2016, @01:33PM (#317977) Journal

          Reminds me of a comment about AI I've seen before (paraphrased):

          people will always claim "$THINGY_WE_CAN_DO is not really AI. When we can do $THINGY_WE_CANNOT_DO, that will be true AI." Until $THINGY_WE_CANNOT_DO becomes a $THINGY_WE_CAN_DO, and the cycle repeats.

          In the end, it seems AI researchers need "intelligence" to be something they don't understand. Well, otherwise funding would run out I guess.

      • (Score: 2) by q.kontinuum on Sunday March 13 2016, @08:37PM

        by q.kontinuum (532) on Sunday March 13 2016, @08:37PM (#317740) Journal

        but still just an advanced pattern recognition system

        Are you sure human learning is much more than applied pattern recognition, plus maybe pattern recognition applied to choosing between different pattern-recognition algorithms? It at least plays a key role in human learning: humans excel at recognizing items from just a fragment, or recognizing people under different lighting conditions or moods.

        --
        Registered IRC nick on chat.soylentnews.org: qkontinuum
    • (Score: 2) by fleg on Monday March 14 2016, @01:58AM

      by fleg (128) Subscriber Badge on Monday March 14 2016, @01:58AM (#317813)

      "Maybe we humans know a thing or two about learning and adopting?"

      yes but it will be interesting to see what happens when the ai is allowed to also learn and adapt during a series.

      from what i've read the software has been "frozen" for the duration of the series.

  • (Score: 2) by RamiK on Sunday March 13 2016, @03:57PM

    by RamiK (1813) on Sunday March 13 2016, @03:57PM (#317655)

    Reversing optimized compiler output to a mostly human-readable form seems doable now. If the AI can identify patterns so well, it shouldn't be hard to identify stuff like syscalls and quicksort and to organize and name functions appropriately... Maybe even pop up optimization suggestions for common patterns like file traversal or array sorting, which often use very naive algorithms and APIs, at least when prototyped.
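
    A rough sketch of the "spot the syscalls" part, assuming the Capstone disassembler's Python bindings and x86-64 Linux; the syscall table below is a tiny made-up subset, and naming things like quicksort would need much richer pattern matching:

from capstone import Cs, CS_ARCH_X86, CS_MODE_64

SYSCALL_NAMES = {0: "read", 1: "write", 2: "open", 60: "exit"}  # partial table

def label_syscalls(code, base=0x1000):
    md = Cs(CS_ARCH_X86, CS_MODE_64)
    last_imm = None
    for insn in md.disasm(code, base):
        # remember the most recent immediate value loaded into eax/rax
        if insn.mnemonic == "mov" and insn.op_str.startswith(("eax, ", "rax, ")):
            try:
                last_imm = int(insn.op_str.split(", ")[1], 0)
            except ValueError:
                last_imm = None
        elif insn.mnemonic == "syscall":
            name = SYSCALL_NAMES.get(last_imm, f"sys_{last_imm}")
            print(f"0x{insn.address:x}: syscall -> {name}")

# mov eax, 60 ; xor edi, edi ; syscall   (i.e. exit(0))
label_syscalls(b"\xb8\x3c\x00\x00\x00\x31\xff\x0f\x05")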

    --
    compiling...
    • (Score: 3, Interesting) by bitstream on Sunday March 13 2016, @04:07PM

      by bitstream (6144) on Sunday March 13 2016, @04:07PM (#317657) Journal

      Perhaps it's time to apply the Gneural Network [gnu.org] to decompilation of binaries? Could result in some really neat findings in the various binary "patches" we've seen lately.

      Oh btw.... Trust us(tm) (R) (C) ;-)

    • (Score: 1, Informative) by Anonymous Coward on Sunday March 13 2016, @10:55PM

      by Anonymous Coward on Sunday March 13 2016, @10:55PM (#317774)

      There is some research in that area. The term is "deobfuscation". JSNice [jsnice.org] is an impressive example, but there are also papers that work on machine code. It presently doesn't work as well as you describe, but I'm also not aware of any work throwing modern neural net ("deep learning") technology at it.

    • (Score: 0) by Anonymous Coward on Monday March 14 2016, @03:17AM

      by Anonymous Coward on Monday March 14 2016, @03:17AM (#317830)

      https://www.google.com/search?q=exe+decompiler [google.com]
      https://www.reddit.com/r/reverseengineering [reddit.com]
      https://www.hex-rays.com/products/decompiler/ [hex-rays.com]

      There are a decent set of them out there. Most of them are pattern matching. First they identify the compiler and its version, then figure out what sorts of patterns it spits out.

      They are far from perfect but semi-decent. If they do not recognize the compiler, though, they go off the rails and do pretty crappy things with the code output.
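
      A minimal sketch of that first identification step, assuming an unstripped Linux binary that still carries the usual compiler version strings; the signature table is a small illustrative subset:

SIGNATURES = {
    b"GCC: (":        "GCC",
    b"clang version": "Clang",
    b"Go build ID":   "Go toolchain",
}

def guess_compiler(path):
    """Guess which compiler produced a binary from leftover version strings."""
    with open(path, "rb") as f:
        blob = f.read()
    return [name for magic, name in SIGNATURES.items() if magic in blob] or ["unknown"]

print(guess_compiler("/bin/ls"))   # e.g. ['GCC'] on a typical Linux box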

      • (Score: 0) by Anonymous Coward on Tuesday March 15 2016, @12:21AM

        by Anonymous Coward on Tuesday March 15 2016, @12:21AM (#318273)

        The existing decompilers on the market are very limited. Imagine feeding enough code at one end, and assembly at the other, to an AI until you can hand it an executable and it outputs code that calls stdlib and uses the runtime properly, instead of having to reverse engineer those too.

        Nothing out there comes anywhere near this level. But it should be doable for an AI to pull it off if it can be taught to play sophisticated pattern-based games.
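
        A toy sketch of how that kind of paired training data could be collected, assuming a hypothetical corpus/ directory of small C files; gcc -S is the only real tool involved, and an actual dataset would need to be enormously larger:

import glob
import pathlib
import subprocess

pairs = []
for src in glob.glob("corpus/*.c"):              # hypothetical directory of C sources
    asm = pathlib.Path(src).with_suffix(".s")
    subprocess.run(["gcc", "-O2", "-S", src, "-o", str(asm)], check=True)
    pairs.append((pathlib.Path(src).read_text(), asm.read_text()))

print(f"collected {len(pairs)} (source, assembly) pairs")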

  • (Score: 2) by takyon on Sunday March 13 2016, @04:43PM

    by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Sunday March 13 2016, @04:43PM (#317666) Journal

    https://gogameguru.com/alphago-4/ [gogameguru.com]

    There are a lot of humans anthropomorphizing the computer in the comments, along with suggestions that AlphaGo had "gone crazy" after its late-game mistakes or that Google had intentionally crippled the hardware to hand Lee Sedol a win (against the match rules). It's funny stuff.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 4, Touché) by patrick on Sunday March 13 2016, @04:54PM

      by patrick (3990) on Sunday March 13 2016, @04:54PM (#317670)

      Don't anthropomorphize computers - they hate it.

  • (Score: 1, Flamebait) by wonkey_monkey on Sunday March 13 2016, @05:41PM

    by wonkey_monkey (279) on Sunday March 13 2016, @05:41PM (#317686) Homepage

    Posted a day late, and after Lee won the fourth match.

    --
    systemd is Roko's Basilisk
    • (Score: 2) by wonkey_monkey on Sunday March 13 2016, @05:45PM

      by wonkey_monkey (279) on Sunday March 13 2016, @05:45PM (#317687) Homepage

      Oh, it did get added under the bar, sorry.

      --
      systemd is Roko's Basilisk
    • (Score: 1, Informative) by Anonymous Coward on Sunday March 13 2016, @05:45PM

      by Anonymous Coward on Sunday March 13 2016, @05:45PM (#317688)

      Give it up already. This is a comment site; if you want timely news, take your cheap shots and sniping elsewhere.

    • (Score: 2) by takyon on Sunday March 13 2016, @05:47PM

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Sunday March 13 2016, @05:47PM (#317689) Journal

      wonkey_monkey with nothing to contribute but a nitpick! The 4th match update was added well before you commented, and it doesn't change the fact that the best-of-5 has been lost by the human.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 0) by Anonymous Coward on Sunday March 13 2016, @08:58PM

    by Anonymous Coward on Sunday March 13 2016, @08:58PM (#317741)

    Go has been too romanticized [sciencemag.org] as innately human. Poker is probably the next big challenge:

    But as with chess, in a larger sense the outcome doesn’t matter. Computer programs will inevitably surpass humans sooner or later, most likely sooner. After they do, the big players like Google and Facebook will likely move on to other challenges, just as IBM did. They’re playing a larger game, the stakes of which are the pockets of every person on the planet. “Where they’re trying to get to is a [computerized personal assistant like] Siri that actually works,” says Dave Sullivan, a deep neural nets expert and CEO of Ersatz Labs in Pacifica, California. “That will be a game-changer.” In this larger game, mastering Go is a small but daring gambit, like stone No. 103 in a game of 300.

  • (Score: 1) by zugedneb on Sunday March 13 2016, @10:52PM

    by zugedneb (4556) on Sunday March 13 2016, @10:52PM (#317772)

    the shocking thing for me is that neural networks are used instead of various combinatorial algorithms...
    i thought that the more recent cpus with many cores + large caches would benefit these algorithms, since you can have a fraction of the cpus dedicated to speculative investigation of certain moves, and then pass the good stuff on to more complex investigation.
    these fast, speculative algorithms would not even have to leave the L3 cache, so they would be extremely fast...

    anyways, i am curious about the amount of hardware dedicated to this game...
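
    The split described above, a cheap speculative pass over candidate moves in parallel followed by a deeper search on only the most promising ones, can be sketched with a toy game standing in for Go (Go itself would need a real evaluation function and far more hardware):

from multiprocessing import Pool

TARGET = 21        # the player forced to reach 21 loses
MOVES = (1, 2, 3)  # each move adds 1, 2 or 3 to the running total

def negamax(total, depth):
    """Value of the position for the player about to move."""
    if total >= TARGET:
        return 1                   # the previous player hit 21, so we have won
    if depth == 0:
        return 0                   # horizon reached: neutral heuristic value
    return max(-negamax(total + m, depth - 1) for m in MOVES)

def shallow_score(args):
    total, move = args
    return move, -negamax(total + move, 2)   # cheap 2-ply speculative look

def best_move(total, keep=2, deep=12):
    # Phase 1: score every candidate move cheaply, in parallel.
    with Pool() as pool:
        scored = pool.map(shallow_score, [(total, m) for m in MOVES])
    # Phase 2: only the most promising candidates get the expensive search.
    candidates = sorted(scored, key=lambda s: s[1], reverse=True)[:keep]
    return max(candidates, key=lambda s: -negamax(total + s[0], deep))[0]

if __name__ == "__main__":
    print(best_move(15))   # 1: putting the opponent on 16 is a forced win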

    --
    old saying: "a troll is a window into the soul of humanity" + also: https://en.wikipedia.org/wiki/Operation_Ajax
    • (Score: 0) by Anonymous Coward on Monday March 14 2016, @01:10AM

      by Anonymous Coward on Monday March 14 2016, @01:10AM (#317802)

      A combinatorial solution to Go is intractable: it presents more combinations than there are atoms in the universe.
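
      A quick back-of-the-envelope check of that claim, treating each of the 361 points as empty, black or white (a loose upper bound that ignores legality):

from math import log10

upper_bound = 3 ** 361
print(f"3^361 has {len(str(upper_bound))} digits (~10^{361 * log10(3):.0f})")
# Roughly 10^172 board configurations versus the ~10^80 atoms usually quoted for
# the observable universe; the count of strictly legal positions is smaller
# (about 2.1e170) but still astronomically larger.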

  • (Score: 2) by fleg on Monday March 14 2016, @03:51AM

    by fleg (128) Subscriber Badge on Monday March 14 2016, @03:51AM (#317842)