

posted by martyb on Thursday October 19, @02:39PM   Printer-friendly
from the Zeroing-in-on-AI dept.

Google DeepMind researchers have made their old AlphaGo program obsolete:

The old AlphaGo relied on a computationally intensive Monte Carlo tree search to play through Go scenarios. The nodes and branches created a much larger tree than AlphaGo practically needed to play. A combination of reinforcement learning and human-supervised learning was used to build "value" and "policy" neural networks that used the search tree to execute gameplay strategies. The software learned from 30 million moves played in human-on-human games, and benefited from various bodges and tricks to learn to win. For instance, it was trained on games from master-level human players rather than picking the game up from scratch.
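The selection rule at the heart of that kind of tree search balances a node's estimated value against an exploration bonus weighted by the policy network's prior. A minimal PUCT-style sketch (the constant `c_puct`, the dict keys, and the function names are illustrative assumptions, not DeepMind's actual code):

```python
import math

def puct_score(node_value, prior, parent_visits, child_visits, c_puct=1.0):
    """PUCT selection score: exploitation (mean value) plus an
    exploration bonus weighted by the policy network's prior."""
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return node_value + exploration

def select_child(children):
    """Pick the child move with the highest PUCT score.
    `children` is a list of dicts with illustrative keys."""
    parent_visits = sum(c["visits"] for c in children)
    return max(children, key=lambda c: puct_score(
        c["value"], c["prior"], parent_visits, c["visits"]))
```

The exploration term shrinks as a move accumulates visits, so the search naturally shifts effort toward moves that are either promising or still under-explored.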

AlphaGo Zero did start from scratch with no experts guiding it. And it is much more efficient: it only uses a single computer and four of Google's custom TPU1 chips to play matches, compared to AlphaGo's several machines and 48 TPUs. Since Zero didn't rely on human gameplay, and played a smaller number of matches, its Monte Carlo tree search is smaller. The self-play algorithm also combined the value and policy neural networks into a single network, which was trained on 64 GPUs and 19 CPUs over a few days by playing nearly five million games against itself. In comparison, AlphaGo needed months of training and used 1,920 CPUs and 280 GPUs to beat Lee Sedol.
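The combined network's training objective pairs a value target (the eventual game outcome) with a policy target (the search's visit distribution). A minimal sketch of that two-part loss, assuming a scalar outcome and a single move distribution (function and argument names are illustrative, and the published objective also includes an L2 weight regularizer omitted here):

```python
import numpy as np

def zero_loss(pred_value, outcome, policy_logits, search_probs):
    """AlphaGo Zero-style combined objective (sketch, no regularizer):
    squared error on the game outcome plus cross-entropy between the
    network's move policy and the MCTS visit distribution."""
    value_loss = (outcome - pred_value) ** 2
    # Numerically plain log-softmax over the move logits.
    log_p = policy_logits - np.log(np.sum(np.exp(policy_logits)))
    policy_loss = -np.dot(search_probs, log_p)
    return value_loss + policy_loss
```

Training one network on both targets at once is what lets Zero drop the separate value and policy networks the original AlphaGo maintained.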

Through self-play, AlphaGo Zero even discovered for itself, without human intervention, classic moves in the theory of Go, such as fuseki opening tactics and what's called life and death. More details can be found in Nature, or from the paper directly here. Stanford computer science academic Bharath Ramsundar has a summary of the more technical points, here.

Go is an abstract strategy board game for two players, in which the aim is to surround more territory than the opponent.

Previously: Google's New TPUs are Now Much Faster -- will be Made Available to Researchers
Google's AlphaGo Wins Again and Retires From Competition


Original Submission

The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by HiThere on Thursday October 19, @07:09PM (5 children)

    by HiThere (866) on Thursday October 19, @07:09PM (#584765)

    Sorry, but I think you're highly overenthusiastic about this. It's a significant step along the way, but I still don't expect the Singularity before 2030...and 2035 wouldn't surprise me.

OTOH, this *is* a significant step along the way. So is the mass production of chips specialized for neural-net computation. (But I bet they discover problems with the first generation of chips.)

AlphaGo seems to have nearly completely mastered one aspect of a "general intelligence" program. But it's a specialized part...abstract pattern recognition in 2D space with known boundaries and rules. Now that's not a small part of what intelligence is, but it's a long way from the whole thing. A true general intelligence would start off not knowing how many dimensions the problem exists in, what the rules are, etc., and would derive those from a method of sensing state and recognizing goal achievement. And one shouldn't be surprised if it derives a totally different set of rules than humans use, though that might be difficult to recognize, as the operations it can perform should be the same. (We are talking about learning to play Go.)

But note that a true general intelligence would start off not even recognizing the board or the pieces. Those would be derived from experience. So would the idea of "a game", or, more particularly, "a game of Go". This means it's got to observe Go being played, and notice that when it models that activity it receives a simulation of the desired reward.
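Learning from nothing but observed states and rewards, with no built-in knowledge of the rules, is what tabular reinforcement learning does in miniature. A toy Q-learning sketch under that assumption (the `step` interface and all names here are hypothetical, chosen only to illustrate the idea):

```python
import random

def q_learning(step, n_states, n_actions, episodes=200,
               alpha=0.5, gamma=0.9, epsilon=0.1, max_steps=50):
    """Tabular Q-learning: the agent knows nothing about the environment's
    rules and learns purely from observed (state, reward) transitions.
    `step(state, action) -> (next_state, reward, done)` is an assumed interface."""
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        state = 0
        for _ in range(max_steps):
            # Epsilon-greedy: mostly exploit current estimates,
            # occasionally explore a random action.
            if random.random() < epsilon:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: q[state][a])
            nxt, reward, done = step(state, action)
            # One-step temporal-difference update; no bootstrap at terminal states.
            target = reward if done else reward + gamma * max(q[nxt])
            q[state][action] += alpha * (target - q[state][action])
            state = nxt
            if done:
                break
    return q
```

Nothing in the learner encodes what the "board" or the "rules" are; everything it knows is reconstructed from the stream of states and rewards, which is the spirit of the argument above, if a very long way from general intelligence.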

    There's an old saying in programming that whenever you don't think the language is flexible enough to do what you want you just need to add another level of indirection. That's sort of what I'm considering here, although as I look at it, it looks like more nearly two or three additional layers of indirection.
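The "level of indirection" idea can be made concrete with a hypothetical dispatch table: instead of hard-wiring one behavior, callers go through a lookup that can be extended or re-pointed later (all names here are illustrative):

```python
# Hypothetical sketch of "adding a level of indirection": instead of
# hard-coding one move-selection routine, route calls through a table.
def greedy(moves):
    """Pick the highest-scoring move."""
    return max(moves)

def first_legal(moves):
    """Pick the first move offered."""
    return moves[0]

# The indirection layer: a name-to-function lookup that can be
# swapped or extended without touching the callers.
STRATEGIES = {"greedy": greedy, "first": first_legal}

def choose(moves, strategy="greedy"):
    return STRATEGIES[strategy](moves)
```

Each extra table like this is one more "layer" in the sense of the saying: the caller no longer names the behavior directly, only a key that resolves to it.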

    --
    Put not your faith in princes.
    Starting Score:    1  point
    Karma-Bonus Modifier   +1  

    Total Score:   2  
  • (Score: 2) by turgid on Thursday October 19, @07:14PM (2 children)

    by turgid (4318) on Thursday October 19, @07:14PM (#584771) Journal

To me it sounds like an engineering problem now, i.e. one of scale, not of science.

    --
    Don't let Righty keep you down.
    • (Score: 0) by Anonymous Coward on Thursday October 19, @07:39PM (1 child)

      by Anonymous Coward on Thursday October 19, @07:39PM (#584799)

      No, because it doesn't cover any featurelist on any fairly comprehensive theory of mind. While it covers (a limited form of) (contextually restricted) perception and it has (well-defined) (verifiable) motivations and (strongly constrained) (contextually restricted) capabilities for action, and if you close one eye and squint with the other you can pretend that it has some degree of imagination, there's no reason to believe on any level that it has a degree of self-awareness on a par with that of a mouse, nor that it is equipped to achieve that.

      Basically, this is in the hierarchy of intelligences a little above the cockroach level. If that. And that's a structural issue, not one of scale.

      IAAAIR

      TL;DR: It's a cute science project, but giving it MOAR SINAPSEZ won't make it differently intelligent.

      • (Score: 2) by takyon on Thursday October 19, @08:09PM

        by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday October 19, @08:09PM (#584829) Journal

        I agree. Google's TPUs, Intel's Nervana, Nvidia's Tesla, etc. are just accelerators for 8-bit machine learning. They are capable of increasing the performance (and perf/Watt) of machine learning tasks by a lot compared to previous chips, but even if they got a hundred times better and passed the Turing test it couldn't be called real intelligence.

        If real "strong AI" is going to come from any hardware, it will likely be using neuromorphic designs such as IBM's TrueNorth or this thing [soylentnews.org]. What's more, these designs use so little power that they could be scaled up to 3D without the same overheating issues, allowing more artificial "synapses" and even greater resemblance to the human brain. If that approach stalls out, you simply need to connect several of them with a high-speed interconnect.

We're not far off from making that happen. We're reaching the limits of Moore's law (as far as we know) and 3D NAND chips are commonplace. I wouldn't be surprised if "strong AI" is about to be created [nextbigfuture.com] or already has been created, and corporations or the govt/military are keeping it under wraps to continue development while avoiding IP issues and the inevitable public/ethics debates. I doubt it would be hard to find comrades for a domestic terrorist group hell bent on destroying the technology by any means necessary.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 2) by takyon on Thursday October 19, @07:40PM (1 child)

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday October 19, @07:40PM (#584800) Journal

    I still don't expect the Singularity before 2030...and 2035 wouldn't surprise me.

    That's an aggressive prediction. Even Ray Kurzweil, Prophet of the Singularity, says 2045 for the Singularity. He has 2029 as a date for (strong?) AI passing the Turing test, but still says [futurism.com] 2045 for the Singularity.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by HiThere on Thursday October 19, @11:38PM

      by HiThere (866) on Thursday October 19, @11:38PM (#584992)

      Welll.... it's also true that different people mean different things when they say "the Singularity". I don't think human level AI will necessarily be able to pass the Turing test...and I know a number of people who couldn't. The Turing test depends too heavily on the judges to have much significance.

      FWIW, it's already true that in some areas AI's are superhuman, but they aren't yet generalized enough. And I don't think an actual "general AI" is even possible. And I don't think humans are "general intelligence". Humans have a lot of areas where they are smart, and a lot of areas where they can be trained to be smart, but I don't believe that we cover the spectrum of possible intelligences. To be specific, it's my belief that in areas that require dealing simultaneously with more than seven variables people are essentially untrainable. Possibly seven is slightly too low a number, there may be some people who can be trained to handle that, but that's my current estimate. However, switch it to 17 and I doubt there would be many who would claim to be able to handle it...and they'd all be obviously deluded. So we're just talking about (by analogy) maximum stack depth.

      Once you get rid of the idea that a general intelligence is even possible you start noticing that in a lot of places it isn't even desirable. It's better to have communicating simpler processes. And we don't know just how much can be done at that level even using the tools that already exist.

      So as I said, I expect the Singularity in the period 2030-2035. And it won't look like any of the predictions.

      --
      Put not your faith in princes.