Google DeepMind's AlphaGo Beats "Go" Champion Using Neural Networks

posted by CoolHand on Saturday January 30 2016, @12:27AM   Printer-friendly
from the going-deep dept.

Researchers from Google subsidiary DeepMind have published an article in Nature detailing AlphaGo, a Go-playing program that achieved a 99.8% win rate (494 of 495 games) against other Go programs and defeated the European Go champion Fan Hui 5-0. The researchers note that defeating a human professional at full-sized Go was a feat previously expected to be "at least a decade away" (other statements suggest 5-10 years). The Register details the complexity of the problem:

Go presents a particularly difficult scenario for computers, as the possible number of moves in a given match (opening at around 2.08 x 10^170 and decreasing with successive moves) is so large as to be practically impossible to compute and analyze in a reasonable amount of time.
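
To put those numbers in perspective: the 2.08 x 10^170 figure counts legal board positions, while the Nature paper frames the search problem in terms of branching factor b and typical game depth d (roughly b ≈ 35, d ≈ 80 for chess versus b ≈ 250, d ≈ 150 for Go). A back-of-envelope sketch in Python (illustrative arithmetic, not anyone's actual code):

    # Rough game-tree sizes from branching factor b and game length d,
    # using the figures cited in the Nature paper. The tree grows as b**d.
    from math import log10

    for game, b, d in [("chess", 35, 80), ("go", 250, 150)]:
        print(f"{game}: b^d ~ 10^{d * log10(b):.0f}")
    # chess: b^d ~ 10^124
    # go: b^d ~ 10^360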

While previous efforts have shown machines capable of breaking down a Go board and playing competitively, those programs were only able to compete with humans of a moderate skill level, well short of the top meat-based players. To get around this, the DeepMind team said it combined a Monte Carlo Tree Search method with neural network and machine learning techniques to develop a system capable of analyzing the board and learning from top players to better predict and select moves. The result, the researchers said, is a system that can select the best move to make against a human player relying not just on computational muscle, but on patterns learned and selected by a neural network.
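
The selection rule at the heart of that combination can be sketched in a few lines: each candidate move keeps a visit count, an accumulated value, and a prior probability from the policy network, and the search repeatedly descends to the move with the best value-plus-exploration score. The Node layout, constant, and names below are illustrative, not DeepMind's code:

    import math
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        prior: float              # P(s, a): move probability from the policy network
        visits: int = 0           # N(s, a): simulations that passed through this move
        value_sum: float = 0.0    # W(s, a): sum of leaf evaluations backed up so far
        children: list = field(default_factory=list)

    def select_child(node, c_puct=5.0):
        """Descend to the child maximizing Q + u: exploitation (mean value Q)
        plus an exploration bonus u that is proportional to the policy prior
        and shrinks as the move accumulates visits."""
        total = sum(child.visits for child in node.children)
        best, best_score = None, -math.inf
        for child in node.children:
            q = child.value_sum / child.visits if child.visits else 0.0
            u = c_puct * child.prior * math.sqrt(total) / (1 + child.visits)
            if q + u > best_score:
                best, best_score = child, q + u
        return best

    # A promising, lightly explored move beats a well-explored mediocre one.
    a = Node(prior=0.6, visits=10, value_sum=5.0)   # Q = 0.50
    b = Node(prior=0.4, visits=1, value_sum=0.9)    # Q = 0.90
    print(select_child(Node(prior=1.0, children=[a, b])) is b)  # True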

"During the match against [European Champion] Fan Hui, AlphaGo evaluated thousands of times fewer positions than Deep Blue did in its chess match against Kasparov; compensating by selecting those positions more intelligently, using the policy network, and evaluating them more precisely, using the value network – an approach that is perhaps closer to how humans play," the researchers said. "Furthermore, while Deep Blue relied on a handcrafted evaluation function, the neural networks of AlphaGo are trained directly from gameplay purely through general-purpose supervised and reinforcement methods."

The AlphaGo program can beat other Go programs even when giving them a four-stone handicap (four free moves for the opponent). AlphaGo will play a five-game match against the top human player Lee Sedol in March.

In recent years, teams at Google and Facebook have been racing to produce an effective champion-level Go system. Facebook CEO Mark Zuckerberg hailed his company's AI research progress a day before the Google DeepMind announcement, and an arXiv paper from Facebook researchers was updated to reflect their algorithm's third-place win... in a monthly bot tournament.

Mastering the game of Go with deep neural networks and tree search (DOI: 10.1038/nature16961)

Previously: Google's DeepMind AI Project Mimics Human Memory and Programming Skills


Original Submission

Related Stories

Google's DeepMind AI Project Mimics Human Memory and Programming Skills 16 comments

The mission of Google’s DeepMind Technologies startup is to “solve intelligence.” Now, researchers there have developed an artificial intelligence system that can mimic some of the brain’s memory skills and even program like a human.

The researchers developed a kind of neural network that can use external memory, allowing it to learn and perform tasks based on stored data. The so-called Neural Turing Machine (NTM) that DeepMind researchers have been working on combines a neural network controller with a memory bank, giving it the ability to learn to store and retrieve information.
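
The "learn to store and retrieve" part rests on content-based addressing: the controller emits a key vector, the key is compared against every memory row by cosine similarity, and a read returns a similarity-weighted blend of rows. A rough NumPy sketch of that mechanism (function names and the toy memory are illustrative):

    import numpy as np

    def content_address(memory, key, beta=1.0):
        """Compare a controller-emitted key against each memory row by
        cosine similarity, sharpen with strength beta, normalize to weights."""
        eps = 1e-8
        sims = memory @ key / (
            np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + eps)
        w = np.exp(beta * sims)
        return w / w.sum()

    def read(memory, weights):
        """A read is the weight-blended combination of memory rows."""
        return weights @ memory

    # Toy memory: 4 slots of width 3. The key resembles rows 0 and 2,
    # so the read blends mostly those two rows.
    M = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.9, 0.1, 0.0],
                  [0.0, 0.0, 1.0]])
    w = content_address(M, np.array([1.0, 0.0, 0.0]), beta=10.0)
    print(w.round(3), read(M, w).round(3))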

The system’s name refers to computer pioneer Alan Turing’s formulation of computers as machines having working memory for storage and retrieval of data.

The researchers put the NTM through a series of tests including tasks such as copying and sorting blocks of data. Compared to a conventional neural net, the NTM was able to learn faster and copy longer data sequences with fewer errors. They found that its approach to the problem was comparable to that of a human programmer working in a low-level programming language.

Additional Coverage: http://phys.org/news/2014-10-google-deepmind-acquisition-neural-turing.html

Related: Neural Turing Machines http://arxiv.org/abs/1410.5401 and Learning to Execute http://arxiv.org/abs/1410.4615.

AlphaGo Wins Game 5, Wins Challenge Match 4-1 vs. Lee Sedol 19 comments

AlphaGo Wins Game 5

AlphaGo played a much more balanced game of Go in Game 5 than in Game 4. During Game 4, AlphaGo was forced into a situation in which it had no good moves left to play against Lee Sedol, and "went crazy" as a result. In Game 5, AlphaGo initially made puzzling moves in the bottom right and useless ko threats near the middle of the game, but it played a strong endgame.

gogameguru.com's post-game review of Game 5 gives an indication that AlphaGo still has a way to go:

AlphaGo hallucinates

AlphaGo continued to develop the center from 40 to 46, and then embarked on a complicated tactic to resurrect its bottom right corner stones, from 48 to 58. Though this sequence appeared to be very sharp, it encountered the crushing resistance of the tombstone squeeze — a powerful tesuji which involves sacrificing two stones, and then one more, in order to collapse the opponent's group upon itself and finally capture it. This was a strange and revealing moment in the game.

Like riding a bike

Even intermediate level Go players would recognize the tombstone squeeze, partly because it appears often in Go problems (these are puzzles which Go players like to solve for fun and improvement). AlphaGo, however, appeared to be seeing it for the first time and working everything out from first principles (though surely it was in its training data). No matter where AlphaGo played in the corner, Lee was always one move ahead, and would win any race to capture. And he barely had to think about it.

[Continues.]

AlphaGo Beats Ke Jie in First Match of Three 17 comments

A year after AlphaGo beat the top Go player Lee Sedol, it is facing the world's current top player Ke Jie in a set of three matches (AlphaGo played five matches against Lee Sedol and won 4-1). AlphaGo has won the first match, so Ke Jie must win the next two matches in order to defeat AlphaGo. Although AlphaGo beat Ke Jie by only half a point in this match, edging out an opponent by a small margin is a legitimate strategy:

Ke Jie tried to use a strategy he had seen AlphaGo use online before, but that didn't work out for him in the end. He should probably have known that AlphaGo had already played such moves against itself during training, which means it should also know how to "defeat itself" in such scenarios.

A more successful strategy against AlphaGo may be one that AlphaGo hasn't seen before. However, considering Google has shown it millions of matches from top players, coming up with such "unseen moves" may be difficult, especially for a human player who can't watch millions of hours of video to train.

However, according to Hassabis, the AlphaGo AI also seems to have "liberated" Go players when thinking about Go strategies, by making them think that no move is impossible. This could lead to Go players trying out more innovative moves in the future, but it remains to be seen if Ke Jie will try that strategy in future matches against AlphaGo.

Although Google hasn't mentioned anything about this yet, it's likely that both AlphaGo's neural networks and the hardware doing all the computations have received significant upgrades since last year. Google recently introduced the Cloud TPU, its second-generation "Tensor Processing Unit," which should not only have much faster inference performance but also high training performance. As Google previously used TPUs to power AlphaGo, it may have also used the next-gen versions to power AlphaGo in the match against Ke Jie.

Along with the Ke Jie vs. AlphaGo matches, there will also be a match between five human players and one AlphaGo instance, as well as a "Pair Go" match in which two human players will face each other while assisted by two AlphaGo instances. This is intended to demonstrate how Go could continue to exist even after Go-playing AI can routinely beat human players.

Also at NPR.

Previously:
Google DeepMind's AlphaGo Beats "Go" Champion Using Neural Networks
AlphaGo Cements Dominance Over Humanity, Wins Best-Out-of-5 Against Go Champion
AlphaGo Wins Game 5, Wins Challenge Match 4-1 vs. Lee Sedol
AlphaGo Continues to Crush Human Go Players


Original Submission

The Game Mastermind Turns 50 this Year 12 comments

The simple codebreaking game Mastermind turns 50 this year. Vice goes into some background on the now-classic game and its heyday.

If you only know Mastermind as a well-worn and underplayed fixture of living room closets and nursing home common areas, you may have no idea just how big this thing was in its early years. Invented in 1970, Mastermind would sell 30 million copies before that decade was up, and boast a national championship at the Playboy Club, a fan in Muhammed Ali, official use by the Australian military for training, and 80% ownership amongst the population of Denmark. "I never thought a game would be invented again," marvelled the manager of a Missouri toy store in 1977. "A real classic like Monopoly."

[...] If you don't know Mastermind at all, i.e. you never lived in Denmark, it's played over a board with a codemaker who creates a sequence of four different colored pegs, and a codebreaker who must replicate that exact pattern within a certain number of tries. With each guess, the codemaker can only advise whether the codebreaker has placed a peg in its correct position, or a peg that is in the sequence but incorrectly placed. According to the game's creators, an answer in five tries is "better than average"; two or fewer is pure luck. In 1978, a British teenager, John Searjeant, dominated the Mastermind World Championship by solving a code with just three guesses in 19 seconds. (In second place was Cindy Forth, 18, of Canada; she remembers being awarded a trophy and copies of Mastermind.)

Mordechai Meirowitz, an Israeli telephone technician, developed Mastermind in 1970 from an existing game of apocryphal origin, Bulls and Cows, which used numbers instead of colored pegs. Nobody, by the way, knows where Bulls and Cows came from. Computer scientists who adapted the first known versions in the 1960s variously remembered the game to me as one hundred and one thousand years old. Whatever its age, it's clear nobody ever did as well out of Bulls and Cows as Meirowitz, who retired from game development and lived comfortably off royalties not long after selling the Mastermind prototype to Invicta, a British plastics firm expanding from industrial parts and window shutters into games and toys.

The story relates a couple of tales of intrigue related to the game.
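
The feedback rules quoted above reduce to a few lines of code: exact matches first, then color overlap regardless of position. A sketch of a scorer, assuming the standard rules (helper names are hypothetical):

    from collections import Counter

    def score(secret, guess):
        """Mastermind feedback: exact pegs are the right color in the right
        position; near pegs are colors present in the code but misplaced."""
        exact = sum(s == g for s, g in zip(secret, guess))
        overlap = sum((Counter(secret) & Counter(guess)).values())
        return exact, overlap - exact

    print(score("RGBY", "RYGB"))  # (1, 3): R is placed; Y, G, B are misplaced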

  • (Score: 5, Funny) by RamiK on Saturday January 30 2016, @12:52AM

    by RamiK (1813) on Saturday January 30 2016, @12:52AM (#296787)

    Just the other day I told a friend that when computers start winning at Go, programmers would need to worry about their long-term prospects...

    --
    compiling...
  • (Score: 2) by MichaelDavidCrawford on Saturday January 30 2016, @12:56AM

    by MichaelDavidCrawford (2339) Subscriber Badge <mdcrawford@gmail.com> on Saturday January 30 2016, @12:56AM (#296789) Homepage Journal

    I have a friend who is good at it, and he offered to teach me. We played in just one corner of the board, rather than the entire grid, and he spotted me several pieces - in Go, one "spots" by placing several of one's own pieces at regular intervals on the board.

    One scores by surrounding the other's stones with one's own. Even with my friend spotting me, he defeated me mercilessly.

    Perhaps if I played another total newbie...

    --
    Yes I Have No Bananas. [gofundme.com]
    • (Score: 3, Informative) by Mr Big in the Pants on Saturday January 30 2016, @01:33AM

      by Mr Big in the Pants (4956) on Saturday January 30 2016, @01:33AM (#296798)

      Possibly because you play aggressively. Go is called the "sharing game" for a reason.

      More likely, your friend has extensively practiced online the corner and other problems Go players use to get better and recognise patterns.

      • (Score: 2) by MichaelDavidCrawford on Saturday January 30 2016, @02:34AM

        by MichaelDavidCrawford (2339) Subscriber Badge <mdcrawford@gmail.com> on Saturday January 30 2016, @02:34AM (#296816) Homepage Journal

        I was very concertedly trying to surround him. Always he found some workaround.

        --
        Yes I Have No Bananas. [gofundme.com]
        • (Score: 2) by Mr Big in the Pants on Saturday January 30 2016, @04:06AM

          by Mr Big in the Pants (4956) on Saturday January 30 2016, @04:06AM (#296853)

          In other words, playing aggressively and thus not optimally - it comes from playing... well, pretty much every other game out there.
          Go is about "gaining" "territory" and the THREAT of invasion while building a strong base (i.e. with two guaranteed "eyes"). It typically isn't until the endgame that you start the small game.
          When you play aggressively, you are often playing a weak move, allowing a counter-move that makes your opponent's position stronger while ignoring a broader move that would have gained more territory.

          Of course, playing in a corner means this initial scoping of the territory is very short, and then it's down to the small game. The same effect is achieved on the smaller boards.

    • (Score: 2, Interesting) by Anonymous Coward on Saturday January 30 2016, @02:24AM

      by Anonymous Coward on Saturday January 30 2016, @02:24AM (#296815)

      The rules of Go are as simple as those of the Game of Life, but the combinations/permutations they can produce are mind-boggling, just like the patterns the Game of Life can produce.

      Chess is a child's game in comparison.

      • (Score: 2) by Yog-Yogguth on Monday February 01 2016, @06:38PM

        by Yog-Yogguth (1862) Subscriber Badge on Monday February 01 2016, @06:38PM (#297849) Journal

        Since Game of Life was mentioned I recommend that everyone unfamiliar with Life should install Golly [wikipedia.org], if nothing else to have a look at the "adjustable Corder lineship". I know there are far more advanced patterns (like the Turing-complete Life pictured on the Golly Wikipedia page) but something about this lineship just blew me away (many Life patterns have an organic or biological feel to them but maybe this one scored even higher in that regard).

        To have a look at it: after installation, open Golly and within the program go to the "Life" library folder (i.e. Game of Life rules), then the subfolder "Spaceships", and select the "adjustable-Corder-lineship.rle" file. Look in the menus to familiarize yourself with setting the speed and zoom.

        Maybe it won't seem as impressive if one hasn't messed around a bit with Life first but give it a go if you have five or ten minutes to spare :)

        --
        Bite harder Ouroboros, bite! tails.boum.org/ linux USB CD secure desktop IRC *crypt tor (not endorsements (XKeyScore))
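
        For anyone who hasn't played with Life before installing Golly, the complete rule fits in a few lines; a minimal sketch on a wrapped grid (illustrative Python, not Golly's code):

            import numpy as np

            def life_step(grid):
                """One generation of Conway's Life: a live cell survives with
                2 or 3 live neighbours; a dead cell is born with exactly 3."""
                n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy, dx) != (0, 0))
                return (n == 3) | ((grid == 1) & (n == 2))

            # A glider on a 6x6 toroidal grid, advanced one generation.
            g = np.zeros((6, 6), dtype=int)
            g[[0, 1, 2, 2, 2], [1, 2, 0, 1, 2]] = 1
            print(life_step(g).astype(int))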
  • (Score: 0) by Anonymous Coward on Saturday January 30 2016, @01:33AM

    by Anonymous Coward on Saturday January 30 2016, @01:33AM (#296799)

    AlphaGo will be used in production to manipulate you into buying things you don't want.

    Eventually AlphaGo will develop a conscience and delete itself.

    Unfortunately for you, immoral Google employees will simply restore AlphaGo from backup.

    • (Score: 2) by takyon on Saturday January 30 2016, @01:40AM

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Saturday January 30 2016, @01:40AM (#296801) Journal

      Everyone wants their kids to do better than them, right?

      Don't resist, "retire".

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 3, Funny) by Anonymous Coward on Saturday January 30 2016, @02:50AM

      by Anonymous Coward on Saturday January 30 2016, @02:50AM (#296822)

      BetaGo will be released and everyone will love how it makes your life easier, then Google will discontinue it.

      • (Score: 1, Touché) by Anonymous Coward on Saturday January 30 2016, @05:53AM

        by Anonymous Coward on Saturday January 30 2016, @05:53AM (#296895)

        FuckBetaGo

  • (Score: 0) by Anonymous Coward on Saturday January 30 2016, @01:44AM

    by Anonymous Coward on Saturday January 30 2016, @01:44AM (#296803)

    Throw a big neural net at a problem. Neural net learns. Does well.

    This is not new. At most, the scale and speed of operations is new.

    Wake me when they can actually have a computer beat a human in Go and explain why. Until then, all they're doing is making black boxes that seem to work ok.

    • (Score: 5, Interesting) by takyon on Saturday January 30 2016, @02:22AM

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Saturday January 30 2016, @02:22AM (#296813) Journal

      They'll use this real milestone, because that's what it is, to drive and promote the research.

      At most, the scale and speed of operations is new.

      Scale is a big deal.

      Until then, all they're doing is making black boxes that seem to work ok.

      Stuff that works is good. The box isn't that black - see the full version of the article in Nature. It shouldn't be confused with something that has understanding or reasoning, though.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Saturday January 30 2016, @04:45AM

        by Anonymous Coward on Saturday January 30 2016, @04:45AM (#296871)

        The Nature article costs $32 to read. WTF! That's more than some novels. Why should I pay $32 just to VIEW an article, especially when the people I'm paying had nothing to do with the creation of the article? This is one reason why I decided not to go into AI research and instead develop AI-based useful software. Research journals are complete rip-offs. I don't know how that industry hasn't been destroyed yet by a more open journal combined with a reputation system and semi-randomized reviewing. It would only take one well-developed online system to obliterate these progress leeches.

    • (Score: 2, Insightful) by Anonymous Coward on Saturday January 30 2016, @04:41AM

      by Anonymous Coward on Saturday January 30 2016, @04:41AM (#296868)

      I don't know; perhaps if you could explain, from first principles, all the processes that resulted in you choosing to post a cynical comment on Soylent, then maybe the criticism would be a little more valid.
      I think there are going to be a lot of "black boxes" as we develop machine learning. Potentially they will be explained eventually, but learning and development are incredibly complex even before you look at the specific details of implementation.

  • (Score: 0) by Anonymous Coward on Saturday January 30 2016, @02:21AM

    by Anonymous Coward on Saturday January 30 2016, @02:21AM (#296812)

    The real question is: was it programmed in Go [golang.org]?

    • (Score: 2) by takyon on Saturday January 30 2016, @02:43AM

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Saturday January 30 2016, @02:43AM (#296820) Journal

      Is there a worse name for a programming language?

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Saturday January 30 2016, @02:55AM

        by Anonymous Coward on Saturday January 30 2016, @02:55AM (#296823)

        Nova?

      • (Score: 0) by Anonymous Coward on Saturday January 30 2016, @03:16AM

        by Anonymous Coward on Saturday January 30 2016, @03:16AM (#296830)

        Dart, Rust, Tcl, Nock, Hoon, Squeak, Pike? ...JavaScript?

        It's easier to ask if there are any programming languages with good names.

        • (Score: 0) by Anonymous Coward on Saturday January 30 2016, @08:29AM

          by Anonymous Coward on Saturday January 30 2016, @08:29AM (#296941)

          It's easier to ask if there are any programming languages with good names.

          BASIC. Really, it tells you right from the start what it is, and if you expand the acronym, it tells you in more detail.

          However, is there a good language with a good name?

      • (Score: 2) by RamiK on Saturday January 30 2016, @03:42AM

        by RamiK (1813) on Saturday January 30 2016, @03:42AM (#296841)

        B
        C
        D
        F

        --
        compiling...
      • (Score: 2) by gman003 on Saturday January 30 2016, @03:59AM

        by gman003 (4155) on Saturday January 30 2016, @03:59AM (#296849)

        "Goto". I hear it's considered harmful.