
posted by martyb on Wednesday May 24 2017, @12:11PM   Printer-friendly
from the would-rather-play-go-fish dept.

A year after AlphaGo beat the top Go player Lee Sedol, it is facing the world's current top player Ke Jie in a set of three matches (AlphaGo played five matches against Lee Sedol and won 4-1). AlphaGo has won the first match, so Ke Jie must win the next two matches in order to defeat AlphaGo. Although AlphaGo beat Ke Jie by only half a point in this match, edging out an opponent by a small margin is a legitimate strategy:

Ke Jie tried to use a strategy he had seen AlphaGo use online before, but it didn't work out for him in the end. He should probably have known that AlphaGo would already have played such moves against itself during training, which means it should also know how to "defeat itself" in such scenarios.

A more successful strategy against AlphaGo may be one it hasn't seen before. However, considering that Google has shown it millions of matches from top players, coming up with such "unseen" moves may be difficult, especially for a human player who can't review millions of games to train.

However, according to Hassabis, the AlphaGo AI also seems to have "liberated" Go players when thinking about Go strategies, by making them think that no move is impossible. This could lead to Go players trying out more innovative moves in the future, but it remains to be seen if Ke Jie will try that strategy in future matches against AlphaGo.

Although Google hasn't said anything about this yet, it's likely that both AlphaGo's neural networks and the hardware doing all the computation have received significant upgrades since last year. Google recently introduced the Cloud TPU, its second-generation "Tensor Processing Unit," which should offer not only much faster inference performance but also high training performance. As Google previously used TPUs to power AlphaGo, it may have used the next-generation versions to power AlphaGo in the match against Ke Jie.

Along with the Ke Jie vs. AlphaGo matches, there will also be a match between five human players and one AlphaGo instance, as well as a "Pair Go" match in which two human players will face each other, each assisted by an AlphaGo instance. This is intended to demonstrate how Go could continue to exist even after Go-playing AI can routinely beat human players.

Also at NPR.

Previously:
Google DeepMind's AlphaGo Beats "Go" Champion Using Neural Networks
AlphaGo Cements Dominance Over Humanity, Wins Best-Out-of-5 Against Go Champion
AlphaGo Wins Game 5, Wins Challenge Match 4-1 vs. Lee Sedol
AlphaGo Continues to Crush Human Go Players


Original Submission

 
  • (Score: 2) by bziman on Thursday May 25 2017, @02:43AM (2 children)


    "this machine is a massively parallel"

    Actually, the version of AlphaGo that beat Ke Jie earlier this week was running on a single computer (according to DeepMind [youtube.com]).

    Anyway, the human brain is generally much more efficient than any computer, and is the result of millions of years of evolution... I think all the training AlphaGo gets is totally fair. Running that network on a single computer is hugely impressive. Deep Blue never accomplished that!

  • (Score: 0) by Anonymous Coward on Thursday May 25 2017, @05:07AM


    "Anyway, the human brain is generally much more efficient than any computer, and is the result of millions of years of evolution..."

    The human brain is more efficient in some particular areas. However, our brains lose massively in others. Today's computers can do more than 6 billion floating point operations per second per watt while complying with IEEE 754-1985. A human brain runs on about 20 watts, and most would struggle to reach 1 floating point operation per minute. The human brain is outdone by a factor of over 7 trillion.

    So sure, there are some areas where the human brain is better by a significant amount, perhaps 200x in this case (I would be surprised if DeepMind was above 4,000 watts when running on the new generation of TPUs), but I really doubt there are any cases left where humans are better by more than 10,000x. In the cases where there is a really big difference, it's the human that's totally outmatched. I think that's a sign of things to come.
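
    The efficiency comparison above can be sanity-checked with a quick back-of-the-envelope calculation. The figures (6 billion FLOPS per watt for the computer, 20 watts and roughly 1 floating point operation per minute for the brain) are the commenter's estimates, not measurements:

    ```python
    # Back-of-the-envelope check of the FLOPS-per-watt comparison above.
    # All input figures are the commenter's rough estimates.

    computer_flops_per_watt = 6e9      # 6 billion FLOPS per watt
    brain_watts = 20                   # approximate power draw of a human brain
    brain_flops = 1 / 60               # ~1 floating point operation per minute

    brain_flops_per_watt = brain_flops / brain_watts   # ~8.3e-4 FLOPS per watt
    ratio = computer_flops_per_watt / brain_flops_per_watt

    print(f"{ratio:.1e}")  # 7.2e+12, i.e. a factor of over 7 trillion
    ```

    So under these assumptions the "over 7 trillion" figure holds up; the conclusion is only as good as the estimates going in.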

  • (Score: 2) by FatPhil on Thursday May 25 2017, @09:21AM

    Erm, the coprocessor it was running on was *massively parallel*, one of the most massively parallel processors I've ever heard of (though I don't religiously follow GPU announcements).