To say that AlphaGo had a great run in the competitive Go scene would be an understatement: it has just defeated the world's number 1 Go player, Ke Jie, in a three-game match. Now that it has nothing left to prove, the AI is hanging up its boots and leaving the world of competitive Go behind. AlphaGo's developers at Google-owned DeepMind will now focus on creating advanced general algorithms to help scientists find elusive cures for diseases, conjure up ways to dramatically reduce energy consumption and invent revolutionary new materials.
Before they leave Go behind completely, though, they plan to publish one more paper later this year revealing how they tweaked the AI to prepare it for the matches against Ke Jie. They're also developing a teaching tool, built with help from Ke Jie himself, that will show how AlphaGo would respond to a particular position on the Go board.
Google DeepMind researchers have made their old AlphaGo program obsolete:
The old AlphaGo relied on a computationally intensive Monte Carlo tree search to play through Go scenarios. The nodes and branches created a much larger tree than AlphaGo practically needed to play. A combination of reinforcement learning and human-supervised learning was used to build "value" and "policy" neural networks that steered the search tree toward winning gameplay strategies. The software learned from 30 million moves played in human-on-human games, and benefited from various bodges and tricks to learn to win. For instance, it was trained on games played by master-level humans, rather than picking the game up from scratch.
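To make the interplay concrete, here is a minimal, illustrative sketch (not DeepMind's code; all class and function names are hypothetical) of how a policy network's move priors and a value network's position estimates plug into the tree search's move-selection rule, in the PUCT style AlphaGo is known to use:

```python
import math

class Node:
    """One node of the search tree, holding per-move statistics."""
    def __init__(self, prior):
        self.prior = prior        # P(s, a): prior probability from the policy network
        self.visit_count = 0      # N(s, a): how often the search has tried this move
        self.value_sum = 0.0      # W(s, a): accumulated value-network evaluations
        self.children = {}        # move -> Node

    def value(self):
        """Q(s, a): mean evaluation over all visits (0 if unvisited)."""
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def puct_score(parent, child, c_puct=1.5):
    # Exploit moves with high mean value, but keep exploring moves the
    # policy network rates highly and that have few visits so far.
    u = c_puct * child.prior * math.sqrt(parent.visit_count) / (1 + child.visit_count)
    return child.value() + u

def select_child(node):
    """Descend the tree by picking the move that maximises Q + U."""
    return max(node.children.items(), key=lambda kv: puct_score(node, kv[1]))
```

The policy prior focuses the search on plausible moves, which is why the tree can stay far smaller than a brute-force expansion of every legal move.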
AlphaGo Zero did start from scratch, with no experts guiding it. And it is much more efficient: it uses only a single computer and four of Google's custom TPU1 chips to play matches, compared to AlphaGo's several machines and 48 TPUs. Since Zero didn't rely on human gameplay, and played a smaller number of matches, its Monte Carlo tree search is smaller. The self-play algorithm also combined the value and policy neural networks into one, and was trained on 64 GPUs and 19 CPUs over a few days by playing nearly five million games against itself. In comparison, AlphaGo needed months of training and used 1,920 CPUs and 280 GPUs to beat Lee Sedol.
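The "two networks into one" change can be sketched as a single shared body feeding two output heads: a policy head (a probability per move) and a value head (an estimate of who is winning). The toy functions below are purely illustrative stand-ins; the real network is a deep residual convnet, not a weighted average.

```python
import math

def shared_trunk(board_features):
    # Stand-in for the shared residual tower: reduce each feature
    # plane to a single activation.
    return [sum(f) / len(f) for f in board_features]

def policy_head(trunk_out):
    # Softmax over one logit per candidate move, so the move
    # probabilities sum to 1.
    exps = [math.exp(x) for x in trunk_out]
    total = sum(exps)
    return [e / total for e in exps]

def value_head(trunk_out):
    # Squash an aggregate of the activations into [-1, 1] with tanh:
    # +1 means the current player is winning, -1 losing.
    return math.tanh(sum(trunk_out) / len(trunk_out))

def evaluate(board_features):
    """One forward pass yields both outputs, instead of running
    two separate networks as the original AlphaGo did."""
    trunk = shared_trunk(board_features)
    return policy_head(trunk), value_head(trunk)
```

Sharing the body means one evaluation per position serves both the search's move priors and its leaf-node value estimates, which is part of why Zero gets away with so much less hardware.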
Through self-play, AlphaGo Zero even discovered for itself, without human intervention, classic moves in the theory of Go, such as fuseki opening tactics and the concept known as life and death. More details can be found in Nature, or from the paper directly here. Stanford computer science academic Bharath Ramsundar has a summary of the more technical points, here.
Go is an abstract strategy board game for two players, in which the aim is to surround more territory than the opponent.
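For readers unfamiliar with the game, the territory idea can be shown with a toy scorer: an empty region counts for a colour if every stone bordering it is that colour. This is a deliberately simplified sketch; real Go scoring (area vs. territory rules, dead-stone resolution) is considerably more involved.

```python
def territory(board):
    """Count empty points surrounded entirely by one colour.

    `board` is a list of equal-length strings of '.', 'B', 'W'.
    Returns a dict of points per colour.
    """
    rows, cols = len(board), len(board[0])
    seen = set()
    score = {"B": 0, "W": 0}
    for r in range(rows):
        for c in range(cols):
            if board[r][c] != "." or (r, c) in seen:
                continue
            # Flood-fill this empty region, recording which colours
            # of stone sit on its border.
            stack, size, borders = [(r, c)], 0, set()
            seen.add((r, c))
            while stack:
                y, x = stack.pop()
                size += 1
                for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                    if 0 <= ny < rows and 0 <= nx < cols:
                        cell = board[ny][nx]
                        if cell == ".":
                            if (ny, nx) not in seen:
                                seen.add((ny, nx))
                                stack.append((ny, nx))
                        else:
                            borders.add(cell)
            if len(borders) == 1:  # touched only one colour: its territory
                score[borders.pop()] += size
    return score
```

Regions touching both colours count for neither side, mirroring the idea of neutral points (dame).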