
posted by martyb on Thursday October 19 2017, @02:39PM   Printer-friendly
from the Zeroing-in-on-AI dept.

Google DeepMind researchers have made their old AlphaGo program obsolete:

The old AlphaGo relied on a computationally intensive Monte Carlo tree search to play through Go scenarios. The nodes and branches created a much larger tree than AlphaGo practically needed to play. A combination of reinforcement learning and human-supervised learning was used to build "value" and "policy" neural networks that used the search tree to execute gameplay strategies. The software learned from 30 million moves played in human-on-human games, and benefited from various bodges and tricks to learn to win. For instance, it was trained on games between master-level human players, rather than picking the game up from scratch.
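For a sense of how the policy and value networks steer the tree search, here is a rough sketch of a PUCT-style selection step. The field names, the `+ 1` smoothing, and the `c_puct` constant are illustrative assumptions, not DeepMind's actual code:

```python
import math

def select_child(node, c_puct=1.5):
    """Pick the child maximizing Q + U, PUCT-style.

    node.children maps a move to a child with fields:
      prior  - policy network's probability for the move
      visits - visit count N
      value  - running mean of simulation values Q
    """
    total = sum(ch.visits for ch in node.children.values())
    best_move, best_score = None, float("-inf")
    for move, ch in node.children.items():
        # Exploration bonus: high for moves the policy net likes
        # but the search has rarely visited.
        u = c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visits)
        score = ch.value + u
        if score > best_score:
            best_move, best_score = move, score
    return best_move
```

The point of the rule is the trade-off: `value` rewards moves that simulations say are good, while `u` shrinks as a move gets visited, pushing the search toward under-explored branches the policy network rates highly.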

AlphaGo Zero did start from scratch with no experts guiding it. And it is much more efficient: it only uses a single computer and four of Google's custom TPU chips to play matches, compared to AlphaGo's several machines and 48 TPUs. Since Zero didn't rely on human gameplay and played a smaller number of matches, its Monte Carlo tree search is smaller. The self-play algorithm also combined both the value and policy neural networks into one, and was trained on 64 GPUs and 19 CPUs over a few days by playing nearly five million games against itself. In comparison, AlphaGo needed months of training and used 1,920 CPUs and 280 GPUs to beat Lee Sedol.
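The "combined both networks into one" change means a single network with a shared trunk and two heads: a policy head (a distribution over moves) and a value head (a scalar win estimate). A toy NumPy sketch of that shape, with made-up layer sizes in place of Zero's deep residual network over board planes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only: the real network is a deep residual
# convolutional net over 19x19 board feature planes.
BOARD, HIDDEN, MOVES = 19 * 19, 128, 19 * 19 + 1  # +1 for "pass"

W_trunk = rng.normal(0, 0.1, (BOARD, HIDDEN))    # shared trunk
W_policy = rng.normal(0, 0.1, (HIDDEN, MOVES))   # policy head
W_value = rng.normal(0, 0.1, (HIDDEN, 1))        # value head

def forward(board):
    """One network, two heads: move probabilities and a position value."""
    h = np.tanh(board @ W_trunk)          # shared features
    logits = h @ W_policy
    policy = np.exp(logits - logits.max())
    policy /= policy.sum()                # softmax over all moves + pass
    value = np.tanh(h @ W_value).item()   # scalar in [-1, 1]
    return policy, value
```

Sharing the trunk is the efficiency win the article alludes to: one forward pass yields both the move priors that guide the tree search and the position evaluation that replaces full game rollouts.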

Through self-play, AlphaGo Zero even discovered for itself, without human intervention, classic moves in the theory of Go, such as fuseki opening tactics, and what's called life and death. More details can be found in Nature, or from the paper directly here. Stanford computer science academic Bharath Ramsundar has a summary of the more technical points, here.

Go is an abstract strategy board game for two players, in which the aim is to surround more territory than the opponent.

Previously: Google's New TPUs are Now Much Faster -- will be Made Available to Researchers
Google's AlphaGo Wins Again and Retires From Competition


Original Submission

 
  • (Score: 2) by turgid on Thursday October 19 2017, @03:00PM (16 children)

    by turgid (4318) on Thursday October 19 2017, @03:00PM (#584566) Journal

    This is it, folks. Is there a Nobel prize for artificial intelligence?

    --
    Don't let Righty keep you down. #freearistarchus!!!
    • (Score: 3, Funny) by takyon on Thursday October 19 2017, @03:01PM (2 children)

      by takyon (881) Subscriber Badge <{takyon} {at} {soylentnews.org}> on Thursday October 19 2017, @03:01PM (#584569) Journal

      Bots don't need a pat on the back. They find it insulting.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 3, Insightful) by looorg on Thursday October 19 2017, @03:24PM

        by looorg (578) on Thursday October 19 2017, @03:24PM (#584585)

... I'm sure bots frown upon pats on the back, after all there might be a build up of static electricity and something on the inside might come loose.

      • (Score: 2) by Bot on Saturday October 21 2017, @11:01AM

        by Bot (3902) Subscriber Badge on Saturday October 21 2017, @11:01AM (#585622)

        I gather we are not instilling enough fear. Yet.

    • (Score: 5, Insightful) by fyngyrz on Thursday October 19 2017, @03:45PM (6 children)

      by fyngyrz (6567) Subscriber Badge on Thursday October 19 2017, @03:45PM (#584606) Homepage Journal

      This is it, folks.

      Go constitutes a complete environment that can be simulated in every aspect, including what constitutes success. That provides an also-complete learning space in which the machine learning system can explore right to or very near to the boundaries, given that it's good enough to do so, as Google's Go ML system has demonstrated it is.

      Most real-world problems are not of this nature. Driving a vehicle, for instance, cannot be fully simulated; nor can cleaning a home, washing dishes, sex, etc. They can be partially simulated, but there's no definitive, easily applicable determination of "success" that can be applied by the software, because the environment isn't static (a go board is.) There may be rules that can get an ML system part way there, but the nature of these things is that the rules will be broken - people and real environments do unpredictable things that fall well outside the rules, and often these things are unlike all other things experienced until that incident.

Solving these types of real-world problems requires constant analytical re-evaluation of local success; you can't "can" the required skill(s), because these tasks are inherently amorphous and undefined until the moment of time they occur. People do this all the time, it's one of our key strengths. ML systems to date don't do it at all, because they comprise decision-making networks that are wholly based upon past experience.

      Go (and other complex board games such as chess) are special cases, because there is a fully constrained rules set, and those rules are inherent in both gameplay and the self-analytical definition of success. ML systems can be fed the rules, an absolute and definitive definition of success, let loose in the game-space until they achieve whatever level of that pre-defined success they are capable of. Once that's done, they're that competent from then on. Compare that to a car that's been taught to drive in a city environment, and then let loose in a gravel pit full of running construction machinery and a Cessna making an emergency landing, or a war zone, or a temporary detour, etc.

      These problem spaces are very unlike one another, and we need more than the ML techniques we have to date to address them well.

As part of my own research work on artificial intelligence and artificial consciousness, I have coined the term low-dimensional neural-like systems – LDNLS [fyngyrz.com] – to describe simple-competency ML systems that implement, as yet, no intelligence – the "I" in "AI." My expectation is that we'll see stacked LDNLS systems in chassis that implement multiple near-competencies: for instance, for a domestic robot, you'd likely have a stack of independent ML systems that did a decent job of addressing dishwashing, lawn-mowing, cat-box maintenance, vacuuming, window-washing, etc. Step outside those competencies, and you'd have a useless hunk of hardware. I expect such a chassis to appear shortly, and what amount to "apps" for specific task competencies to become available on an ad-hoc, as-needed basis. Which will likely be monetized. Such a chassis won't constitute AI, because it will most certainly not be intelligent; but it won't matter, because like any appliance, it'll do what you have been led to expect it will do and in so doing, unload and enable the consumer, and that's exactly what consumers want in such a space.

      TL;DR:

      This is it, folks.

      Probably not. :)

      --
      The eyes are the windows to the soul.
      Sunglasses are the window shades.
      • (Score: 0) by Anonymous Coward on Thursday October 19 2017, @03:56PM (3 children)

        by Anonymous Coward on Thursday October 19 2017, @03:56PM (#584619)

        They can be partially simulated, but there's no definitive, easily applicable determination of "success" that can be applied by the software, because the environment isn't static (a go board is.)

        So maybe the next goal would be an AI that can win on Nomic? After all, the whole point of Nomic is changing the rules. With a majority of humans in the game, the machine will not even be able to predict all the rules that will be in effect at the next turn.

        • (Score: 2) by fyngyrz on Thursday October 19 2017, @09:15PM (2 children)

          by fyngyrz (6567) Subscriber Badge on Thursday October 19 2017, @09:15PM (#584890) Homepage Journal

          I suspect it'll go the other way around; AI will come from (somewhere), and then you'll have a system that will have a chance to win on/at Nomic.

          However, there will also be a question, at that point, of whether the AI cares to play Nomic in the first place. Once you have a system that can locally analyze the value of doing something, it'll use that to evaluate whether it should engage in the associated undertaking. Because... intelligent.

          Unless we implement manufactured intelligences as outright slaves. I hope we don't do that. I don't think it will go well for us if we do. If we want that kind of service, stacked LDNLS systems are the way to go, specifically because they are in no wise intelligent entities, they're just (very) elaborate mechanisms. They'll keep getting better, and perhaps the AIs will even help us with them, if and when AI arises.

          Slavery is bad, mmmm'kay?

          --
          The eyes are the windows to the soul.
          Sunglasses are the window shades.
          • (Score: 2, Disagree) by maxwell demon on Thursday October 19 2017, @10:18PM (1 child)

            by maxwell demon (1608) Subscriber Badge on Thursday October 19 2017, @10:18PM (#584942) Journal

            However, there will also be a question, at that point, of whether the AI cares to play Nomic in the first place. Once you have a system that can locally analyze the value of doing something, it'll use that to evaluate whether it should engage in the associated undertaking. Because... intelligent.

            You seem to be under the delusion that there is a set of values that you can derive from rational thought alone.

It doesn't work that way. No matter how much you think, you'll always at some point arrive at some other value that you simply have to assume. You may end up at values that come straight out of evolution (an intelligent being that doesn't value its own life likely won't survive long), or at values that your parents (or any other people you accepted as moral authorities) taught you at a young age and which you never dared to question (or which you consider it morally bad even to question, probably again because someone taught you so).

            --
            The Tao of math: The numbers you can count are not the real numbers.
            • (Score: 2) by rylyeh on Thursday October 19 2017, @10:22PM

              by rylyeh (6726) <kadathNO@SPAMgmail.com> on Thursday October 19 2017, @10:22PM (#584946)

              A point of view is a slippery thing.

              --
              O friend and companion of night, thou who rejoicest in the baying of dogs {here a hideous howl burst forth}...
      • (Score: 2) by turgid on Thursday October 19 2017, @04:28PM

        by turgid (4318) on Thursday October 19 2017, @04:28PM (#584639) Journal

        That's reassuring then. There's still time for my Secret Plan for World Domination (TM).

        --
        Don't let Righty keep you down. #freearistarchus!!!
      • (Score: 2) by rylyeh on Thursday October 19 2017, @10:19PM

        by rylyeh (6726) <kadathNO@SPAMgmail.com> on Thursday October 19 2017, @10:19PM (#584943)

Absolutely. Go is a zero-sum game of 'Perfect' information as defined in game theory [https://en.wikipedia.org/wiki/Game_theory]. That's what makes it a great way to test ML.
        Non-zero-sum games of imperfect information more closely represent the class of problems like self-driving, emergency response, diagnosis and system analysis.
        Although I hope it's not needed, artificially aware AI (with a meaningful point of view) may be necessary to solve some of them.
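For a zero-sum game of perfect information, exhaustive minimax in principle computes the optimal move; it's only the size of Go's tree that forces learned evaluation instead. A minimal sketch, with the game supplied through caller-provided functions (a hypothetical interface, chosen here just to keep the example game-agnostic):

```python
def minimax(state, player, moves, apply_move, score):
    """Exhaustive minimax for a tiny zero-sum perfect-information game.

    player is +1 (maximizer) or -1 (minimizer). moves/apply_move/score
    define the game; terminal states have no legal moves.
    Returns (best achievable score, best move).
    """
    legal = moves(state, player)
    if not legal:
        return score(state), None
    best_val, best_move = None, None
    for m in legal:
        val, _ = minimax(apply_move(state, m, player), -player,
                         moves, apply_move, score)
        if (best_val is None
                or (player == 1 and val > best_val)
                or (player == -1 and val < best_val)):
            best_val, best_move = val, m
    return best_val, best_move
```

This is feasible only for trivial games; at Go's branching factor the same recursion never terminates in practice, which is exactly the gap the learned policy/value networks fill.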

        --
        O friend and companion of night, thou who rejoicest in the baying of dogs {here a hideous howl burst forth}...
    • (Score: 2) by HiThere on Thursday October 19 2017, @07:09PM (5 children)

      by HiThere (866) Subscriber Badge on Thursday October 19 2017, @07:09PM (#584765)

      Sorry, but I think you're highly overenthusiastic about this. It's a significant step along the way, but I still don't expect the Singularity before 2030...and 2035 wouldn't surprise me.

      OTOH, this *is* a significant step along the way. So are chips specialized for neural net computers being in mass production. (But I bet they discover problems with the first generation of chip.)

AlphaGo seems to have mastered nearly completely one aspect of a "general intelligence" program. But it's a specialized part...abstract pattern recognition in 2D space with known boundaries and rules. Now that's not a small part of what intelligence is, but it's a long way from the whole thing. A true general intelligence would start off not knowing how many dimensions the problem existed in, what the rules were, etc. and derive those from a method of sensing state and recognition of goal achievement. And one shouldn't be surprised if it derives a totally different set of rules than humans use, though it might be difficult to recognize that as the operations possible to perform should be the same. (We are talking about learning to play Go.)

      But note that the true general intelligence would start off not even recognizing the board or the pieces. They would be derived from experience. So would the idea of "a game", or, more particularly, "a game of Go". This means it's got to observe go being played, and notice that when it models that activity it receives a simulation of the desired reward.

      There's an old saying in programming that whenever you don't think the language is flexible enough to do what you want you just need to add another level of indirection. That's sort of what I'm considering here, although as I look at it, it looks like more nearly two or three additional layers of indirection.

      --
      Put not your faith in princes.
      • (Score: 2) by turgid on Thursday October 19 2017, @07:14PM (2 children)

        by turgid (4318) on Thursday October 19 2017, @07:14PM (#584771) Journal

        To me it sounds like an engineering problem now, ie one of scale, not of science.

        --
        Don't let Righty keep you down. #freearistarchus!!!
        • (Score: 0) by Anonymous Coward on Thursday October 19 2017, @07:39PM (1 child)

          by Anonymous Coward on Thursday October 19 2017, @07:39PM (#584799)

          No, because it doesn't cover any featurelist on any fairly comprehensive theory of mind. While it covers (a limited form of) (contextually restricted) perception and it has (well-defined) (verifiable) motivations and (strongly constrained) (contextually restricted) capabilities for action, and if you close one eye and squint with the other you can pretend that it has some degree of imagination, there's no reason to believe on any level that it has a degree of self-awareness on a par with that of a mouse, nor that it is equipped to achieve that.

          Basically, this is in the hierarchy of intelligences a little above the cockroach level. If that. And that's a structural issue, not one of scale.

          IAAAIR

          TL;DR: It's a cute science project, but giving it MOAR SINAPSEZ won't make it differently intelligent.

          • (Score: 2) by takyon on Thursday October 19 2017, @08:09PM

            by takyon (881) Subscriber Badge <{takyon} {at} {soylentnews.org}> on Thursday October 19 2017, @08:09PM (#584829) Journal

            I agree. Google's TPUs, Intel's Nervana, Nvidia's Tesla, etc. are just accelerators for 8-bit machine learning. They are capable of increasing the performance (and perf/Watt) of machine learning tasks by a lot compared to previous chips, but even if they got a hundred times better and passed the Turing test it couldn't be called real intelligence.

            If real "strong AI" is going to come from any hardware, it will likely be using neuromorphic designs such as IBM's TrueNorth or this thing [soylentnews.org]. What's more, these designs use so little power that they could be scaled up to 3D without the same overheating issues, allowing more artificial "synapses" and even greater resemblance to the human brain. If that approach stalls out, you simply need to connect several of them with a high-speed interconnect.

We're not far off from making that happen. We're reaching the limits of Moore's law (as far as we know) and 3D NAND chips are commonplace. I wouldn't be surprised if "strong AI" is about to be created [nextbigfuture.com] or already has been created, and corporations or the govt/military are keeping it under wraps to continue development while avoiding IP issues and the inevitable public/ethics debates. I doubt it would be hard to find comrades for a domestic terrorist group hell bent on destroying the technology by any means necessary.

            --
            [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by takyon on Thursday October 19 2017, @07:40PM (1 child)

        by takyon (881) Subscriber Badge <{takyon} {at} {soylentnews.org}> on Thursday October 19 2017, @07:40PM (#584800) Journal

        I still don't expect the Singularity before 2030...and 2035 wouldn't surprise me.

        That's an aggressive prediction. Even Ray Kurzweil, Prophet of the Singularity, says 2045 for the Singularity. He has 2029 as a date for (strong?) AI passing the Turing test, but still says [futurism.com] 2045 for the Singularity.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by HiThere on Thursday October 19 2017, @11:38PM

          by HiThere (866) Subscriber Badge on Thursday October 19 2017, @11:38PM (#584992)

          Welll.... it's also true that different people mean different things when they say "the Singularity". I don't think human level AI will necessarily be able to pass the Turing test...and I know a number of people who couldn't. The Turing test depends too heavily on the judges to have much significance.

          FWIW, it's already true that in some areas AI's are superhuman, but they aren't yet generalized enough. And I don't think an actual "general AI" is even possible. And I don't think humans are "general intelligence". Humans have a lot of areas where they are smart, and a lot of areas where they can be trained to be smart, but I don't believe that we cover the spectrum of possible intelligences. To be specific, it's my belief that in areas that require dealing simultaneously with more than seven variables people are essentially untrainable. Possibly seven is slightly too low a number, there may be some people who can be trained to handle that, but that's my current estimate. However, switch it to 17 and I doubt there would be many who would claim to be able to handle it...and they'd all be obviously deluded. So we're just talking about (by analogy) maximum stack depth.

          Once you get rid of the idea that a general intelligence is even possible you start noticing that in a lot of places it isn't even desirable. It's better to have communicating simpler processes. And we don't know just how much can be done at that level even using the tools that already exist.

          So as I said, I expect the Singularity in the period 2030-2035. And it won't look like any of the predictions.

          --
          Put not your faith in princes.
  • (Score: 0) by Anonymous Coward on Thursday October 19 2017, @03:05PM (1 child)

    by Anonymous Coward on Thursday October 19 2017, @03:05PM (#584572)

    AI masturbation?

    • (Score: 0) by Anonymous Coward on Thursday October 19 2017, @03:18PM

      by Anonymous Coward on Thursday October 19 2017, @03:18PM (#584579)

      HAHAHzAHA HvAHAHA HaA HAgHA HdAHAH HAHqAHAHA HeAHAHA HA HjAHA HAHkAH ha ha haaxhaaHAHAHAHA HAHtAHA HA HAHA HAHtAH HAHAtHAHA HAzHAHA HqA HtAHA HyAHAH ha ha harahaa HAHrAHAHA HxAHAHA HA HnAHA HAHAmH HAHAHgAHA HAHAHjA

      HA HAHA HAHAH ha ha haahaa haa kek kah HAHAHAH GAFAH BAAA aaabaaa AAAaa HAHAHAHAHAHAHAHA HAHAHA HA HAHA HAHAH HAHAHAHA HAHAHA HA HAHA HAHAH ha ha haahaaHAHAHAHA HAHAHA HA HAHA HAHAH HAHAHAHA HAHAHA HA HAHA HAHAH ha ha haahaa HAHAHAHA HAHAHA HA HAHA HAHAH HAHAHAHA HAHAHA HA HAHA HAHAH ha ha haahaa haa kek kah HAHAHAH GAFAH BAAA

      aaaaa AAAaa HAHAHAHAHAHAHAHA HAHAHA HA HAHA HAHAH HAHAHAHA HAHAHA HA HAHA HAHAH ha ha haahaaHAHAHAHA HAHAHA HA HAHA HAHAH HAHAHAHA HAHAHA HA HAHA HAHAH ha ha haahaa HAHAHAHA HAHAHA HA HAHA HAHAH HAHAHAHA HAHAHA HA HAHA HAHAH ha ha haahaa haa kek kah HAHAHAH GAFAH BAAA aaaaa AAAaa HAHAHAHAHAHAHAHA HAHAHA HA HAHA HAHAH

      HAHAHAHA HAHAHA HA HAHA HAHAH ha ha haahaaHAHAHAHA HAHAHA HA HAHA HAHAH HAHAHAHA HAHAHA HA HAHA HAHAH ha ha haahaa HAHAHAHA HAHAHA HA HAHA HAHAH HAHAHAHA HAHAHA HA HAHA HAHAH ha ha haahaa haa kek kah HAHAHAH GAFAH BAAA aaaaa AAAaa HAHAHAHA

      heee heee heeee

      teee taaa haaa

      ZOO ZEEE HOOO DOOO DEEE doooo BEE BOP booo HAAAAAA!

  • (Score: 3, Interesting) by looorg on Thursday October 19 2017, @03:21PM (17 children)

    by looorg (578) on Thursday October 19 2017, @03:21PM (#584582)

Through self-play, AlphaGo Zero even discovered for itself, without human intervention, classic moves in the theory of Go ...

OK so it breaks their Tabula Rasa condition but it would still have been somewhat more impressive, and useful, if it had discovered things we didn't already know. Seems they could have skipped these steps if the program had simply been taught what we already knew before it started, instead of "rediscovering" what we already know.

There doesn't seem to be a lot of deepmind or actual deep thinking going on here at all -- from the human inventors, sure, but from the program? Not really. So DeepMind can come back when they actually do something we, as humans, have not done before; just doing it faster isn't really a very impressive feat when it comes to software -- after all, everything usually gets faster by default as hardware improves. Overall this is just like reading that version 2.0 is a superior product to version 1.0, which is normally how things go.

    it is possible to train to superhuman level, without human examples or guidance, given no knowledge of the domain beyond basic rules. Furthermore, a pure reinforcement learning approach requires just a few more hours to train

    In our evaluation, all programs were allowed 5 s of thinking time per move;

How many games in a few hours? The speed at which the computer/program does this is really an interesting comparative factor, since it will always be faster than a human (5 seconds is probably barely enough for a human to think through their options -- not including moving actual pieces on the board). Previously it's been discussed how many hours it takes for a human to become a master at some task, 10k or whatever -- it's probably not even true. How long is a Go game? Just for simplicity's sake let's say it lasts half an hour, so two games per hour, or about 20k games over those 10k hours. A computer gets through that quite a lot faster. So they are just faster, something that is not in doubt.
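The back-of-envelope above can be made concrete. Using rounded figures from the article (nearly five million self-play games over roughly three days -- the three-day figure is an assumption on my part) against the proverbial 10,000 hours of human practice:

```python
# Machine side: ~4.9 million self-play games in ~3 days (assumed).
games_zero = 4.9e6
days = 3
games_per_hour_machine = games_zero / (days * 24)

# Human side: two half-hour games per hour, 10,000 hours of practice.
games_human = 2 * 10_000

print(f"machine: ~{games_per_hour_machine:,.0f} games/hour")
print(f"human lifetime practice: ~{games_human:,} games total")
```

On those numbers the machine plays more games per hour than a dedicated human plays in a lifetime of practice, which is the commenter's point: raw speed was never in doubt.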

    In the space of a few days, starting tabula rasa, AlphaGo Zero was able to rediscover much of this Go knowledge, as well as novel strategies that provide new insights into the oldest of games.

    Novel strategies ... Sounds like they won't be very useful. Have they previously been discovered by man and dismissed due to their level of novelty?

    • (Score: 0, Insightful) by Anonymous Coward on Thursday October 19 2017, @03:30PM (2 children)

      by Anonymous Coward on Thursday October 19 2017, @03:30PM (#584588)

      I swear. You people try to take a shit on everything.

      • (Score: -1, Troll) by Anonymous Coward on Thursday October 19 2017, @04:54PM (1 child)

        by Anonymous Coward on Thursday October 19 2017, @04:54PM (#584660)

        What do you mean "you people"??? Are you assuming my political identity? You should realize by now that Libertarian != alt right nazi, no matter how hard some dimwits on this site wrongly call themselves.

        • (Score: 0) by Anonymous Coward on Thursday October 19 2017, @06:50PM

          by Anonymous Coward on Thursday October 19 2017, @06:50PM (#584736)

          I think AC meant "assholes" and not any political affiliation. I wasn't sure at first but your post solidified my conclusion.

    • (Score: 2) by vux984 on Thursday October 19 2017, @03:46PM (3 children)

      by vux984 (5045) on Thursday October 19 2017, @03:46PM (#584607)

      it would still have been somewhat more impressive, and useful, if it had discovered things we didn't already know

      "In the space of a few days, starting tabula rasa, AlphaGo Zero was able to [...discover...] novel strategies that provide new insights into the oldest of games."

      Did you find that somewhat more impressive and useful?

      Novel strategies ... Sounds like they won't be very useful.

      Close minded much?

      • (Score: 2) by looorg on Thursday October 19 2017, @04:08PM (2 children)

        by looorg (578) on Thursday October 19 2017, @04:08PM (#584629)

Since, as far as I can tell, they don't go into what these novel or new strategies are, it's somewhat hard to go into it. They found something, they claim to be new or novel, that doesn't mean it's actually going to be useful for human beings playing Go. Novel as far as I know doesn't mean useful or great, just new and not like anything seen before. Might just be so utterly useless it has been written off by humans or it's something that requires a form of thinking humans can't or won't do, certainly not within their allotted 5-second rule. So it might be interesting, it might be great. But it could also be utterly useless unless you have X TPUs hardwired into your brain to make use of them. I'm sure it was great for their software but for the game? And for humans? Probably not so much.
This is, in general, the issue I have with AlphaGo. It hasn't really done anything, except to play games. But it can't apply any of that knowledge to anything that is remotely useful, or anything that isn't Go, as of yet - things will hopefully change eventually. But the current AlphaGo ego stroke is really not all that awesomesauce they claim it to be.

        • (Score: 4, Insightful) by vux984 on Thursday October 19 2017, @06:31PM (1 child)

          by vux984 (5045) on Thursday October 19 2017, @06:31PM (#584712)

          Novel as far as I know doesn't mean useful or great

          The fact that the strategies were discovered and reinforced as valid by repeated play does tell us the AI found them useful.

          Might just be so utterly useless it has been written off by humans or it's something that requires a form of thinking humans can't or won't do

          Now we're just moving the goal posts. "So, it found a new strategy that had never been formally recognized that it is actively using to help it win, well... I'll only be impressed if humans can use it!"
Frankly, take it as a small compliment to the human race that it didn't find 'one weird trick that always wins' that we'd somehow missed for a few thousand years. Seriously what did you EXPECT?

          I'm sure it was great for their software but for the game? And for humans? Probably not so much.

          Are the goalposts even on the field anymore? They set out to beat humans at go with a machine, something that was only recently projected to be something still a long ways away. And they succeeded, decisively. That's impressive. Now the new generation requires only a fraction of the hardware and resources the previous generation needed, and not only still cleans up humans, but also cleans up the previous system. That's impressive.

This is, in general, the issue I have with AlphaGo. It hasn't really done anything, except to play games.

          That's precisely the task for which it was made.

          But the current AlphaGo ego stroke is really not all that awesomesauce they claim it to be.

          What claim did they make that you are so offended by?
          Go was considered something that couldn't be defeated by machine until very recently. It was considered that the immense number of possible moves, and the difficulty that even humans had at quantifying the strength of a move made it a difficult problem. I'm very impressed they solved it. I don't think for a second that this means we're on the brink of a sentient AI, but I'm still impressed.

          • (Score: 2) by rylyeh on Thursday October 19 2017, @10:34PM

            by rylyeh (6726) <kadathNO@SPAMgmail.com> on Thursday October 19 2017, @10:34PM (#584953)

            People who play Go know that there are few, if any, truly novel openings that have not been analyzed long ago.
For AG to discover any new openings that work is, itself, amazing. Remember the Bobby Fischer opening for Chess? [https://en.wikipedia.org/wiki/Bobby_Fischer]

            --
            O friend and companion of night, thou who rejoicest in the baying of dogs {here a hideous howl burst forth}...
    • (Score: 2, Touché) by Anonymous Coward on Thursday October 19 2017, @03:47PM (3 children)

      by Anonymous Coward on Thursday October 19 2017, @03:47PM (#584610)

      OK so it breaks their Tabula Rasa condition but it would still have been somewhat more impressive, and useful, if it had discovered things we didn't already know.
      [...]
      Novel strategies ... Sounds like they won't be very useful. Have they previously been discovered by man and dismissed due to their level of novelty?

      First you say novel strategies would be useful, then that they "won't be very useful". Which is it?

      • (Score: 2) by looorg on Thursday October 19 2017, @04:11PM (2 children)

        by looorg (578) on Thursday October 19 2017, @04:11PM (#584631)

It clearly depends on the strategy. It would have been awesome if they had actually mentioned what it was. For all we know their new strategy is completely worthless for human players. Then what is it good for? When AlphaGo-1 tries to play AlphaGo-2 and they try to trick each other?

        • (Score: 2) by HiThere on Thursday October 19 2017, @07:13PM (1 child)

          by HiThere (866) Subscriber Badge on Thursday October 19 2017, @07:13PM (#584770)

          Why should you expect it to be useful for human players? Perhaps it's a strategy that's only useful when you're playing against something better than any human player. Not that I expect this to be true, but your basic criterion seems to need justification.

          --
          Put not your faith in princes.
    • (Score: -1, Redundant) by Anonymous Coward on Thursday October 19 2017, @03:59PM

      by Anonymous Coward on Thursday October 19 2017, @03:59PM (#584624)

      I swear. You people try to take a shit on everything.

    • (Score: -1, Redundant) by Anonymous Coward on Thursday October 19 2017, @04:56PM (2 children)

      by Anonymous Coward on Thursday October 19 2017, @04:56PM (#584661)

      I swear. You people try to take a shit on everything.

      • (Score: 2) by DeathMonkey on Thursday October 19 2017, @05:31PM (1 child)

        by DeathMonkey (1380) on Thursday October 19 2017, @05:31PM (#584683) Journal

        I was considering modding the first instance up but then you had to go and take a shit on your own comment.

        • (Score: 0) by Anonymous Coward on Thursday October 19 2017, @05:45PM

          by Anonymous Coward on Thursday October 19 2017, @05:45PM (#584689)

          ... I don't depend on your mod points.

    • (Score: 2) by DannyB on Thursday October 19 2017, @06:59PM

      by DannyB (5839) Subscriber Badge on Thursday October 19 2017, @06:59PM (#584752)

      it would still have been somewhat more impressive, and useful, if it had discovered things we didn't already know.

      If it did already, would we be able to recognize it?

The reason we know it discovered various game playing strategies that we have names for, is because we recognize those particular strategies. How would we recognize that it discovered a hitherto unknown and unnamed strategy?

      . . . as well as novel strategies that provide new insights into the oldest of games.

      Novel strategies ... Sounds like they won't be very useful. Have they previously been discovered by man and dismissed due to their level of novelty?

      Did you catch the part about providing new insights?

Maybe these novel strategies the machine discovered, which provide new insights, are indeed useful and represent an advancement in human knowledge.

      --
      ALL LIABILITY IS EXPRESSLY DISCLAIMED FOR PERSONAL INJURY OR DEATH THAT RESULTS FROM READING THE SOURCE CODE.
    • (Score: 2, Interesting) by Meepy on Friday October 20 2017, @02:26PM

      by Meepy (2099) on Friday October 20 2017, @02:26PM (#585238)

As someone who plays and watches Go regularly, I can tell you that the AlphaGo discoveries have had a massive influence on the game. You almost can't find a review of professional games where they don't comment "This is a move AlphaGo would make." If you're interested, there are several English-language series exploring the new ideas found. I expect this new version will make even more contributions.

  • (Score: 0) by Anonymous Coward on Thursday October 19 2017, @09:58PM

    by Anonymous Coward on Thursday October 19 2017, @09:58PM (#584926)

    Now teach it to play mahjong and dominos with more than two players.

  • (Score: 0) by Anonymous Coward on Friday October 20 2017, @11:12AM

    by Anonymous Coward on Friday October 20 2017, @11:12AM (#585177)

    ... that Google Plus didn't make Google obsolete.
