posted by martyb on Thursday October 19, @02:39PM
from the Zeroing-in-on-AI dept.

Google DeepMind researchers have made their old AlphaGo program obsolete:

The old AlphaGo relied on a computationally intensive Monte Carlo tree search to play through Go scenarios. The nodes and branches created a much larger tree than AlphaGo practically needed to play. A combination of reinforcement learning and human-supervised learning was used to build "value" and "policy" neural networks that used the search tree to execute gameplay strategies. The software learned from 30 million moves played in human-on-human games, and benefited from various bodges and tricks to learn to win. For instance, it was trained on games by master-level human players, rather than picking the game up from scratch.

AlphaGo Zero did start from scratch, with no experts guiding it. And it is much more efficient: it uses only a single computer and four of Google's custom TPU1 chips to play matches, compared to AlphaGo's several machines and 48 TPUs. Since Zero didn't rely on human gameplay, and played a smaller number of matches, its Monte Carlo tree search is smaller. The self-play algorithm also combined the value and policy neural networks into one, and was trained on 64 GPUs and 19 CPUs over a few days by playing nearly five million games against itself. In comparison, AlphaGo needed months of training and used 1,920 CPUs and 280 GPUs to beat Lee Sedol.
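The tabula-rasa idea (learn entirely from self-play, refining a value estimate after every game) can be sketched on a toy game. The sketch below is a loose analogy only: tabular Monte Carlo value learning on a trivial Nim variant, with invented parameters, rather than the real system's deep policy/value network guided by tree search.

```python
import random

# Toy self-play value learning on Nim: players alternately remove 1 or
# 2 stones from a pile of 10; whoever takes the last stone wins. No
# human games are used; state values come entirely from self-play.

def train(episodes=5000, epsilon=0.2, lr=0.1, seed=0):
    rng = random.Random(seed)
    value = {}  # state -> estimated win probability for the player to move
    for _ in range(episodes):
        state, visited = 10, []
        while state > 0:
            moves = [m for m in (1, 2) if m <= state]
            if rng.random() < epsilon:      # explore occasionally
                move = rng.choice(moves)
            else:                           # leave the opponent the worst state
                move = min(moves, key=lambda m: value.get(state - m, 0.5))
            visited.append(state)
            state -= move
        # The player who made the final move won; walking the game
        # backward, the win/loss outcome alternates between players.
        outcome = 1.0
        for s in reversed(visited):
            v = value.get(s, 0.5)
            value[s] = v + lr * (outcome - v)
            outcome = 1.0 - outcome
    return value

values = train()
```

After training, losing positions (pile sizes that are multiples of 3) should end up with low values and winning positions with high ones: the toy analogue of Zero rediscovering a game's theory from self-play alone.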

Through self-play, AlphaGo Zero even discovered for itself, without human intervention, classic moves in the theory of Go, such as fuseki opening tactics, and what's called life and death. More details can be found in Nature, or from the paper directly here. Stanford computer science academic Bharath Ramsundar has a summary of the more technical points, here.

Go is an abstract strategy board game for two players, in which the aim is to surround more territory than the opponent.

Previously: Google's New TPUs are Now Much Faster -- will be Made Available to Researchers
Google's AlphaGo Wins Again and Retires From Competition


Original Submission

 
This discussion has been archived. No new comments can be posted.

The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Insightful) by fyngyrz on Thursday October 19, @03:45PM (6 children)

    by fyngyrz (6567) Subscriber Badge on Thursday October 19, @03:45PM (#584606) Homepage Journal

    This is it, folks.

    Go constitutes a complete environment that can be simulated in every aspect, including what constitutes success. That provides an also-complete learning space in which the machine learning system can explore right to or very near to the boundaries, given that it's good enough to do so, as Google's Go ML system has demonstrated it is.

    Most real-world problems are not of this nature. Driving a vehicle, for instance, cannot be fully simulated; nor can cleaning a home, washing dishes, sex, etc. They can be partially simulated, but there's no definitive, easily applicable determination of "success" that can be applied by the software, because the environment isn't static (a go board is.) There may be rules that can get an ML system part way there, but the nature of these things is that the rules will be broken - people and real environments do unpredictable things that fall well outside the rules, and often these things are unlike all other things experienced until that incident.

    Solving these types of real-world problems requires constant analytical re-evaluation of local success; you can't "can" the required skill(s), because these tasks are inherently amorphous and undefined until the moment they occur. People do this all the time; it's one of our key strengths. ML systems to date don't do it at all, because they comprise decision-making networks wholly based upon past experience.

    Go (and other complex board games such as chess) are special cases, because there is a fully constrained rule set, and those rules are inherent in both gameplay and the self-analytical definition of success. ML systems can be fed the rules and an absolute, definitive definition of success, then let loose in the game-space until they achieve whatever level of that pre-defined success they are capable of. Once that's done, they're that competent from then on. Compare that to a car that's been taught to drive in a city environment, and then let loose in a gravel pit full of running construction machinery and a Cessna making an emergency landing, or a war zone, or a temporary detour, etc.

    These problem spaces are very unlike one another, and we need more than the ML techniques we have to date to address them well.

    As part of my own research work on artificial intelligence and artificial consciousness, I have coined the term low-dimensional neural-like systems – LDNLS [fyngyrz.com] – to describe simple-competency ML systems that implement, as yet, no intelligence – the "I" in "AI." My expectation is that we'll see stacked LDNLS systems in chassis that implement multiple near-competencies: for instance, for a domestic robot, you'd likely have a stack of independent ML systems that did a decent job of addressing dishwashing, lawn-mowing, cat-box maintenance, vacuuming, window-washing, etc. Step outside those competencies, and you'd have a useless hunk of hardware. I expect such a chassis to appear shortly, and what amount to "apps" for specific task competencies to become available on an ad-hoc, as-needed basis. Which will likely be monetized. Such a chassis won't constitute AI, because it will most certainly not be intelligent; but it won't matter, because like any appliance, it'll do what you have been led to expect it will do and in so doing, unload and enable the consumer, and that's exactly what consumers want in such a space.

    TL;DR:

    This is it, folks.

    Probably not. :)

    --
    The eyes are the windows to the soul.
    Sunglasses are the window shades.
  • (Score: 0) by Anonymous Coward on Thursday October 19, @03:56PM (3 children)

    by Anonymous Coward on Thursday October 19, @03:56PM (#584619)

    They can be partially simulated, but there's no definitive, easily applicable determination of "success" that can be applied by the software, because the environment isn't static (a go board is.)

    So maybe the next goal would be an AI that can win on Nomic? After all, the whole point of Nomic is changing the rules. With a majority of humans in the game, the machine will not even be able to predict all the rules that will be in effect at the next turn.

    • (Score: 2) by fyngyrz on Thursday October 19, @09:15PM (2 children)

      by fyngyrz (6567) Subscriber Badge on Thursday October 19, @09:15PM (#584890) Homepage Journal

      I suspect it'll go the other way around; AI will come from (somewhere), and then you'll have a system that will have a chance to win on/at Nomic.

      However, there will also be a question, at that point, of whether the AI cares to play Nomic in the first place. Once you have a system that can locally analyze the value of doing something, it'll use that to evaluate whether it should engage in the associated undertaking. Because... intelligent.

      Unless we implement manufactured intelligences as outright slaves. I hope we don't do that. I don't think it will go well for us if we do. If we want that kind of service, stacked LDNLS systems are the way to go, specifically because they are in no wise intelligent entities, they're just (very) elaborate mechanisms. They'll keep getting better, and perhaps the AIs will even help us with them, if and when AI arises.

      Slavery is bad, mmmm'kay?

      --
      The eyes are the windows to the soul.
      Sunglasses are the window shades.
      • (Score: 2, Disagree) by maxwell demon on Thursday October 19, @10:18PM (1 child)

        by maxwell demon (1608) Subscriber Badge on Thursday October 19, @10:18PM (#584942) Journal

        However, there will also be a question, at that point, of whether the AI cares to play Nomic in the first place. Once you have a system that can locally analyze the value of doing something, it'll use that to evaluate whether it should engage in the associated undertaking. Because... intelligent.

        You seem to be under the delusion that there is a set of values that you can derive from rational thought alone.

        It doesn't work that way. No matter how much you think, you'll always at some point arrive at some other value that you simply have to assume. You may end up at values that come straight out of evolution (an intelligent being that doesn't value its own life likely won't survive long), or at values that your parents (or any other people you accepted as moral authorities) taught you at a young age and which you never dared to question (or which you consider morally bad even to question, probably again because someone taught you so).

        --
        The Tao of math: The numbers you can count are not the real numbers.
        • (Score: 2) by rylyeh on Thursday October 19, @10:22PM

          by rylyeh (6726) Subscriber Badge <{kadath} {at} {gmail.com}> on Thursday October 19, @10:22PM (#584946)

          A point of view is a slippery thing.

          --
          "Wing framework tubular or glandular, of lighter grey, with orifices at wing tips. Spread wings have serrated edge."
  • (Score: 2) by turgid on Thursday October 19, @04:28PM

    by turgid (4318) on Thursday October 19, @04:28PM (#584639) Journal

    That's reassuring then. There's still time for my Secret Plan for World Domination (TM).

    --
    Don't let Righty keep you down.
  • (Score: 2) by rylyeh on Thursday October 19, @10:19PM

    by rylyeh (6726) Subscriber Badge <{kadath} {at} {gmail.com}> on Thursday October 19, @10:19PM (#584943)

    Absolutely. Go is a zero-sum game of 'perfect' information as defined in game theory [https://en.wikipedia.org/wiki/Game_theory]. That's what makes it a great way to test ML.
    Non-zero-sum games of imperfect information more closely represent the class of problems like self-driving, emergency response, diagnosis and system analysis.
    Although I hope it's not needed, artificially aware AI (with a meaningful point of view) may be necessary to solve some of them.
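    For what it's worth, zero-sum plus perfect information is exactly the property that lets plain backward induction (minimax) assign every position a definite value, at least in principle. A minimal sketch on a toy take-1-or-2 Nim game (illustrative only; enumerating states like this is hopeless for real Go):

```python
from functools import lru_cache

# Backward induction on a toy zero-sum, perfect-information game:
# remove 1 or 2 stones; taking the last stone wins. Because both
# players see everything and one's win is the other's loss, every
# state has a definite game-theoretic value.

@lru_cache(maxsize=None)
def wins(stones):
    """True if the player to move can force a win."""
    return any(not wins(stones - m) for m in (1, 2) if m <= stones)
```

    Go has the same structure, just with a state space far too large to enumerate, which is why tree search and learned evaluations enter the picture.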

    --
    "Wing framework tubular or glandular, of lighter grey, with orifices at wing tips. Spread wings have serrated edge."