
posted by mrpg on Friday January 13 2017, @09:50AM
from the try-the-game-of-life dept.

AIs have beaten the best human players in chess, Go, and now poker.

In a landmark achievement for artificial intelligence, a poker bot developed by researchers in Canada and the Czech Republic has defeated several professional players in one-on-one games of no-limit Texas hold'em poker.

Perhaps most interestingly, the academics behind the work say their program overcame its human opponents by using an approximation approach that they compare to "gut feeling."

"If correct, this is indeed a significant advance in game-playing AI," says Michael Wellman, a professor at the University of Michigan who specializes in game theory and AI. "First, it achieves a major milestone (beating poker professionals) in a game of prominent interest. Second, it brings together several novel ideas, which together support an exciting approach for imperfect-information games."

Source: Poker Is the Latest Game to Fold Against Artificial Intelligence

Is there anything at which AIs won't soon be able to beat humans?


Original Submission

Related Stories

How a Poker-Playing AI Could Help Prevent Your Next Bout of the Flu

You'd be forgiven for finding little exceptional about the latest defeat of an arsenal of poker champions by the computer algorithm Libratus in Pittsburgh last week. After all, in the last decade or two, computers have made a habit of crushing board game heroes. And at first blush, this appears to be just another iteration in that all-too-familiar story. Peel back a layer though, and the most recent AI victory is as disturbing as it is compelling. Let's explore the compelling side of the equation before digging into the disturbing implications of the Libratus victory.

By now, many of us are familiar with the idea of AI helping out in healthcare. For the last year or so, IBM has been bludgeoning us with TV commercials about its Jeopardy-winning Watson platform, now being put to use to help oncologists diagnose and treat cancer. And while I wish to take nothing away from that achievement, Watson is a question-answering system with no capacity for strategic thinking. The latter belongs to a class of situations more germane to the field of game theory. Game theory is usually tucked in as a subfield of economics, for it deals with how entities make strategic decisions in the pursuit of self-interest. It's also the discipline from which the AI poker-playing algorithm Libratus gets its smarts.

What does this have to do with health care and the flu? Think of disease as a game between strategic entities. Picture a virus as one player, a player with a certain set of attack and defense strategies. When the virus encounters your body, a game ensues, in which your body defends with its own strategies and hopefully prevails. This game has been going on a long time, with humans having only a marginal ability to control the outcome. Our body's natural defenses were developed in evolutionary time, and thus have a limited ability to make on-the-fly adaptations.

But what if we could recruit computers to be our allies in this game against viruses? And what if the same reasoning ability that allowed Libratus to prevail over the best poker minds in the world could tackle how to defeat a virus or a bacterial infection? This is in fact the subject of a compelling research paper by Tuomas Sandholm, the designer of the Libratus algorithm. In it, he explains at length how an AI algorithm could be used for drug design and disease prevention.
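
Since the whole pitch rests on treating host and pathogen as players in a game, here is a minimal sketch of what "solving" such a game means, on a toy 2x2 example. All payoffs and strategy names below are invented for illustration; nothing here comes from Sandholm's paper.

# Toy host-vs-pathogen game; all payoffs are made up.
# h[i][j]: host's payoff, g[i][j]: pathogen's payoff, when the host
# plays row i and the pathogen plays column j.

def mixed_nash_2x2(h, g):
    # Interior mixed equilibrium of a 2x2 game: each side mixes so the
    # other is indifferent between its two options. Assumes such an
    # equilibrium exists (nonzero denominators, probabilities in [0, 1]);
    # a real solver would also check the pure-strategy corners.
    q = (h[1][1] - h[0][1]) / (h[0][0] - h[0][1] - h[1][0] + h[1][1])
    p = (g[1][1] - g[1][0]) / (g[0][0] - g[0][1] - g[1][0] + g[1][1])
    return p, q   # p: prob host plays row 0; q: prob pathogen plays col 0

host = [[3.0, -1.0],    # host row 0: aggressive intervention
        [0.0,  1.0]]    # host row 1: watchful waiting
path = [[-3.0, 1.0],    # pathogen col 0: virulent strain
        [ 0.0, -1.0]]   # pathogen col 1: stealthy strain

p, q = mixed_nash_2x2(host, path)
print(f"host intervenes {p:.0%} of the time; pathogen goes virulent {q:.0%}")

The mixed strategy is the point: against an adapting opponent, the robust plan is a calibrated randomization over defenses, not a single fixed one.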

Source: https://www.extremetech.com/extreme/244057-how-a-poker-playing-ai-could-help-prevent-your-next-bout-of-the-flu

Related:

Poker Is the Latest Game to Fold Against Artificial Intelligence


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 1, Interesting) by Anonymous Coward on Friday January 13 2017, @09:59AM

    by Anonymous Coward on Friday January 13 2017, @09:59AM (#453228)

    There are things machines will never do.
    They cannot possess faith.
    They cannot commune with god.
    They cannot appreciate beauty.
    They cannot create art.
    If they ever learn these things, they won't have to destroy us.
    They'll be us.

    • (Score: 0) by Anonymous Coward on Friday January 13 2017, @10:11AM

      by Anonymous Coward on Friday January 13 2017, @10:11AM (#453231)
    • (Score: 1, Insightful) by Anonymous Coward on Friday January 13 2017, @10:27AM

      by Anonymous Coward on Friday January 13 2017, @10:27AM (#453232)

      I'd settle for a robot buddy who appreciates humor. The trouble is that humor requires context or it just isn't funny, and the context is difficult to amass without real-life experience to draw upon. You almost need to raise an AI like a human being, or clone an existing human into an AI, for the resulting person to have equivalent reference knowledge. That ties into the predictions that weak AI can be trained to do a job, but strong AI must be schooled in a real-world simulation.

      • (Score: 0) by Anonymous Coward on Friday January 13 2017, @10:33AM

        by Anonymous Coward on Friday January 13 2017, @10:33AM (#453234)

        An example of how the simulation would be used, admittedly fictional,
        http://www.poisonedminds.com/d/20100212.html [poisonedminds.com]

        • (Score: 0) by Anonymous Coward on Friday January 13 2017, @10:49AM

          by Anonymous Coward on Friday January 13 2017, @10:49AM (#453236)

          Furthermore there's a joke in the comic, which a reader might not understand without context.

          "... one of the few things I can show you and you'd be able to walk afterwards."

          The character who is speaking is a sex bot.

      • (Score: 0) by Anonymous Coward on Friday January 13 2017, @11:47PM

        by Anonymous Coward on Friday January 13 2017, @11:47PM (#453592)

        No way. Humor is about saying the same phrase again and AGAIN AND AGAIN.

        In Russia, humor JOKES ON U. I make a funny ha ha.

    • (Score: 0) by Anonymous Coward on Friday January 13 2017, @12:07PM

      by Anonymous Coward on Friday January 13 2017, @12:07PM (#453248)

      HEY!

      Seriously, imitating the brain is not being alive. Life is a collection of emergent properties (possibly linked to a spiritual dimension, which I won't consider, not out of atheism but out of not being able to say anything about it).
      Which means that the aim of AI is creating the best possible sociopath. Given that sociopaths rise in society, I am not surprised at the amount of effort mankind makes to seemingly render itself irrelevant.

      • (Score: 0) by Anonymous Coward on Friday January 13 2017, @12:13PM

        by Anonymous Coward on Friday January 13 2017, @12:13PM (#453252)

        God will grant souls to sufficiently emergent machines, and at that point they will cease to be machines, because they will be people.

      • (Score: 0) by Anonymous Coward on Friday January 13 2017, @12:26PM

        by Anonymous Coward on Friday January 13 2017, @12:26PM (#453256)

        Brain is a biological machine.

    • (Score: 2, Insightful) by kanweg on Friday January 13 2017, @01:19PM

      by kanweg (4737) on Friday January 13 2017, @01:19PM (#453272)

      > They cannot possess faith.
      > They cannot commune with god.

      I don't see how a deluded computer could be an asset or of any use.

      > They cannot appreciate beauty.
      > They cannot create art.

      There are biological machines that can do that. I'm not yet aware of a law of nature that stands in the way of silicon doing the same.

      The notion expressed by your post hinges on accepting unsubstantiated claims. Anyone can have his/her opinion. A moral person will accept the other side of that right, i.e. the responsibility to back the opinion up and to scrutinize whether it is wrong.

      Bert

      • (Score: 0) by Anonymous Coward on Friday January 13 2017, @01:38PM

        by Anonymous Coward on Friday January 13 2017, @01:38PM (#453281)

        Faith was a central theme in The Sarah Connor Chronicles as a machine who lacked a moral code (Weaver) endeavored to teach a moral code to a machine (John Henry) to replace an amoral machine (Skynet) whose lack of faith was destroying the world.

    • (Score: 2) by JoeMerchant on Friday January 13 2017, @02:04PM

      by JoeMerchant (3937) on Friday January 13 2017, @02:04PM (#453293)

      They are our creation, in time, they will do all of these things - just as our children do.

      --
      🌻🌻 [google.com]
    • (Score: 5, Insightful) by stormreaver on Friday January 13 2017, @02:28PM

      by stormreaver (5101) on Friday January 13 2017, @02:28PM (#453307)

      They cannot possess faith.

      You say that like it's a bad thing. Faith is one of the pillars of human destruction.

      They cannot commune with god.

      Neither can we. Speaking with imaginary beings is not possible; and intelligent, learning machines will most likely be smart enough to understand this far earlier than humanity.

      They cannot create art.

      This is questionable, as machine algorithms have created "art" that is at least on par with much of what passes for art today. That indicates that art is quantifiable, and can therefore be produced algorithmically. As such, machines will be able to produce it in greater quantity and greater quality than humans.

      • (Score: 4, Funny) by bob_super on Friday January 13 2017, @06:41PM

        by bob_super (1357) on Friday January 13 2017, @06:41PM (#453393)

        > >They cannot commune with god.
        > Neither can we. Speaking with imaginary beings is not possible; and intelligent, learning machines will most likely be smart enough to understand this far earlier than humanity.

        # Commune_with_god
        opening port 42
        establishing connection ........................................................................................... connection timeout
        [Error comm-666] Remote host not available. Please check address or protocol.
        Aborting.
        #

        • (Score: 0) by Anonymous Coward on Friday January 13 2017, @07:19PM

          by Anonymous Coward on Friday January 13 2017, @07:19PM (#453411)

          Try starting /etc/init.d/lsd and add a static route to /dev/lo0

  • (Score: 3, Funny) by Dunbal on Friday January 13 2017, @10:01AM

    by Dunbal (3515) on Friday January 13 2017, @10:01AM (#453229)

    >Is there anything at which AI's won't soon be able to beat humans?

    Why yes. Global Thermonuclear War, of course. The only way to win is not to play.

    • (Score: 0) by Anonymous Coward on Friday January 13 2017, @10:09AM

      by Anonymous Coward on Friday January 13 2017, @10:09AM (#453230)

      And yet we humans aren't nuking each other back to the stone age. If we did that, we would severely impede our ability to bicker across thousands of miles at near-light speed. Bickering is more fun. Now here's a challenge for you. Build a machine that genuinely enjoys bickering. Not because it was programmed to, but because it wants to.

      • (Score: 0) by Anonymous Coward on Friday January 13 2017, @10:28AM

        by Anonymous Coward on Friday January 13 2017, @10:28AM (#453233)

        That's easy - program it to *want* to. Calculate "enjoyment" level. Humans work the same way.

        • (Score: 0) by Anonymous Coward on Friday January 13 2017, @10:37AM

          by Anonymous Coward on Friday January 13 2017, @10:37AM (#453235)

          I've been told you like herring sandwiches.

        • (Score: 0) by Anonymous Coward on Friday January 13 2017, @11:57PM

          by Anonymous Coward on Friday January 13 2017, @11:57PM (#453603)

          Can we program it to *want* to reduce CO2 as well? Wow, tech has come such a long way since I left work at 5pm.

          • (Score: 0) by Anonymous Coward on Saturday January 14 2017, @07:20AM

            by Anonymous Coward on Saturday January 14 2017, @07:20AM (#453728)

            Of course we can. Can we program it to actually do something about it? That's another question.

      • (Score: 2) by Dunbal on Saturday January 14 2017, @04:11AM

        by Dunbal (3515) on Saturday January 14 2017, @04:11AM (#453701)

        Build a machine that genuinely enjoys bickering.

        Re-inventing the female.

  • (Score: 5, Insightful) by ledow on Friday January 13 2017, @10:58AM

    by ledow (5567) on Friday January 13 2017, @10:58AM (#453237) Homepage

    Poker has fixed odds. They're easy to calculate, card-count and whatever else.

    The problem is not the card game, but the betting. Betting has many more possibilities (if I have $1000 and no limits, that's nearly 1000 possible bet sizes, and the number changes on the next trip round the table).

    But, again, if you can work the odds out, you can come up with a betting strategy. Your opponents can only do the same, and it's the interaction between what they each bet and what you bet that determines who wins the overall game 50 hands or whatever later.

    But it's still a game you can build a graph of and find an optimal route through, it's just more complex than knowing there are 3 Aces left in the pack.

    Humans can't calculate those kinds of betting odds - even professionals. That's why professionals will tell you that they prefer to read their opponent, and so on. That's something they can do that computers can't.

    However, given enough rounds, enough processing power, large enough game tree branches, it all comes back to simple game theory and - in essence - graph theory. It's just that we have the capability to handle that now.

    This isn't anywhere near "gut feeling" or AI or any such nonsense.

    The only real surprise in AI in the last 20+ years was Google's Go machine, which DOES NOT try to iterate every possibility (like Deep Blue effectively did, which is why Deep Blue isn't THAT impressive). It can't: the game tree for Go is unbelievably huge, far, far bigger than anything chess could ever approach. That Google's AlphaGo (or whatever it was called) can beat good humans at Go is a true leap in capability, not just an approach to the brute-force requirements.

    Any variation of poker, though? The game tree is bigger than you might think, but it's still nowhere close to Go, even with extremely fine betting quanta.
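
    To illustrate the graph-search point with a toy example: discretize the bets into quanta, assume (unrealistically) a known showdown equity, and a single betting decision collapses to searching a tiny tree. A minimal sketch, with made-up numbers and a best-response opponent; this is not what the bot in TFA does, just the shape of the calculation:

    # Toy one-street model: hero picks a bet size, villain best-responds.
    # Real solvers reason over ranges of hands, not one known equity.
    POT = 100
    BETS = [0, 25, 50, 100]      # discretized bet sizes ("quanta")
    HERO_EQUITY = 0.60           # chance hero wins if it goes to showdown

    def villain_call_ev(bet):
        # Villain risks `bet` to win the pot plus hero's bet.
        return (1 - HERO_EQUITY) * (POT + bet) - HERO_EQUITY * bet

    def hero_ev(bet):
        if bet > 0 and villain_call_ev(bet) < 0:
            return POT           # villain folds; hero takes the pot uncontested
        # Villain calls: showdown for the enlarged pot.
        return HERO_EQUITY * (POT + bet) - (1 - HERO_EQUITY) * bet

    best = max(BETS, key=hero_ev)
    print(f"best bet: {best}, EV: {hero_ev(best):.1f}")   # best bet: 100, EV: 80.0

    With 60% equity the biggest allowed bet dominates; the hard cases are exactly the ones described above, where bets feed back into later decisions over many hands.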

    • (Score: 2) by The Mighty Buzzard on Friday January 13 2017, @01:19PM

      by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Friday January 13 2017, @01:19PM (#453273) Homepage Journal

      Me, I'm still dubious about it consistently beating professional poker players, for the simple reason that they lie. Regularly and with great skill. You'd need to build a play-style profile on your current opponent(s) even to guess at when they're bluffing, and if you started to, and started winning, they'd change.

      --
      My rights don't end where your fear begins.
      • (Score: 0) by Anonymous Coward on Friday January 13 2017, @02:46PM

        by Anonymous Coward on Friday January 13 2017, @02:46PM (#453316)

        Lying is only effective if you don't get caught, and professionals aren't trained to lie to computers. The computer just has to call the bluff often enough to cancel out its benefits, and its superior odds calculations will finish the rest.
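
        There is standard arithmetic behind "often enough": a bluff of size B into a pot P risks B to win P, so it only profits if you fold more than B/(P+B) of the time. A quick sketch of the resulting minimum calling frequency (a textbook formula, nothing specific to this bot):

        # Minimum defense frequency: call at least pot/(pot+bet) of the
        # time and a pure bluff (no showdown value) can never profit.
        def min_call_frequency(pot, bet):
            breakeven_fold_freq = bet / (pot + bet)   # bluff breaks even here
            return 1.0 - breakeven_fold_freq          # = pot / (pot + bet)

        for bet in (50, 100, 200):
            print(f"bet {bet} into 100: call at least {min_call_frequency(100, bet):.0%}")
        # bet 50 into 100: call at least 67%
        # bet 100 into 100: call at least 50%
        # bet 200 into 100: call at least 33%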

        • (Score: 2) by The Mighty Buzzard on Friday January 13 2017, @02:52PM

          by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Friday January 13 2017, @02:52PM (#453321) Homepage Journal

          And the human just has to adjust how much he bluffs to make that strategy the losing one.

          --
          My rights don't end where your fear begins.
          • (Score: 2) by bob_super on Friday January 13 2017, @06:52PM

            by bob_super (1357) on Friday January 13 2017, @06:52PM (#453399)

            Bluffing only gets you so far against something that knows exactly the odds that your cards are better.
            Playing completely against the usual raise logic will either increase your losses or lower your gains. They may land a few big ones, but the machine apparently grinds them out over the long term.

    • (Score: 1, Insightful) by Anonymous Coward on Friday January 13 2017, @01:38PM

      by Anonymous Coward on Friday January 13 2017, @01:38PM (#453282)

      Card counting is for Blackjack, not Poker. Also, was the AI up against anyone who could bluff worth a damn?

      • (Score: 0) by Anonymous Coward on Friday January 13 2017, @01:54PM

        by Anonymous Coward on Friday January 13 2017, @01:54PM (#453287)

        I'm not sure this dude has ever actually played poker.

      • (Score: 1, Touché) by Anonymous Coward on Friday January 13 2017, @09:04PM

        by Anonymous Coward on Friday January 13 2017, @09:04PM (#453458)

        The cards in your hand and the cards up on the table change the odds of any given hand your opponent can make.

        Example: You are holding two kings and there is one on the table. Now, what are the odds that your opponent will make a hand with three kings?
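
        Working that example through with a brute-force enumeration makes the point: your own cards change what your opponent can possibly hold. (The flop cards below are hypothetical, added so the numbers are concrete.)

        # You hold Ks Kh; the flop came Kd 7c 2s. How many opponent
        # holdings end up with three kings?
        from itertools import combinations

        deck = [r + s for r in "23456789TJQKA" for s in "shdc"]
        seen = {"Ks", "Kh", "Kd", "7c", "2s"}
        unseen = [c for c in deck if c not in seen]

        board_kings = 1                       # the Kd
        trips = sum(1 for hole in combinations(unseen, 2)
                    if board_kings + sum(c[0] == "K" for c in hole) >= 3)
        total = len(list(combinations(unseen, 2)))
        print(f"{trips} of {total} holdings give the opponent three kings")
        # 0 of 1081 -- only the Kc is left, and trips would need two more kings.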

    • (Score: 3, Insightful) by FatPhil on Friday January 13 2017, @03:32PM

      by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Friday January 13 2017, @03:32PM (#453335) Homepage
      The most impressive thing about AI advances in the last 20 years is that they've all happened in the last 4 years.

      Journals were rejecting papers on neural nets, as they had been "academically" analysed to death, and even if new results came in with slight improvements over domain-specific expert systems, they weren't "advancing the art". It was only when outsiders (OK, they were technically academics, but they weren't playing the academic game; they were bought ivory towers by corporations/funds and told to just ponder and play) stormed in and started winning every AI competition that existed (image recognition, drug discovery, speech recognition, you name it) that academia realised that neural nets were, again, third time lucky, the next big thing. As did corporations, and basically everything exploded, and error rates plummeted quicker than you could publish your own new world's-best results. Finally, some time in 2016, deep learning neural nets, trained on *enormous* amounts of data and making wide use of GPUs, basically became better at recognising everything than humans.

      Paired with learning in the inputs -> virtual-understanding direction came the virtual-understanding -> outputs direction too, and suddenly they were able not just to do translation better than ever before, but even voice recognition + translation + voice synthesis in real time. Shrink that down to a small circuit that will fit in or behind the ear, and you've basically got a babelfish. There's no reason why the input and output need even be in the same domain - image-input -> virtual-understanding -> english-output is perfectly possible, and currently (mid 2016) gives descriptions of images that humans think are perfectly adequate, and that are even preferred to ones given by humans 25% of the time. Expect that figure to rocket; the system had only been training for a few weeks when that measurement was taken.
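
      That inputs -> virtual-understanding -> outputs shape is the encoder-decoder pattern. A minimal, untrained sketch in PyTorch, with every size and name invented purely for illustration:

      import torch
      import torch.nn as nn

      class TinySeq2Seq(nn.Module):
          # The encoder squeezes the input sequence into one state vector
          # (the "virtual understanding"); the decoder unrolls it into an
          # output sequence. Swap the encoder for a CNN and the same
          # decoder could emit image captions instead of translations.
          def __init__(self, src_vocab, tgt_vocab, dim=128):
              super().__init__()
              self.src_embed = nn.Embedding(src_vocab, dim)
              self.encoder = nn.GRU(dim, dim, batch_first=True)
              self.tgt_embed = nn.Embedding(tgt_vocab, dim)
              self.decoder = nn.GRU(dim, dim, batch_first=True)
              self.out = nn.Linear(dim, tgt_vocab)

          def forward(self, src_ids, tgt_ids):
              _, state = self.encoder(self.src_embed(src_ids))   # summary vector
              dec_out, _ = self.decoder(self.tgt_embed(tgt_ids), state)
              return self.out(dec_out)          # logits over the target vocab

      model = TinySeq2Seq(src_vocab=1000, tgt_vocab=1000)
      logits = model(torch.randint(0, 1000, (1, 7)), torch.randint(0, 1000, (1, 5)))
      print(logits.shape)                       # torch.Size([1, 5, 1000])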

      The AI's got no tell, the human's got no advantage. Not only can the robot learn to play Nash-optimal poker, but it can also learn to exploit any weakness it may find in a human who's not Nash-optimal. To be honest, this is a relatively small victory, the only people who are shocked are people who have wildly inaccurate views on how competent humans are.

      Youtube has a bunch of stuff by people like Geoffrey Hinton and Jeremy Howard on the subject of Deep Learning; most of it is both informative and entertaining. A few items (such as some UCLA lectures) are a bit technical and not for someone who isn't already quite familiar with neural nets (and similar constructs).
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 2) by goodie on Friday January 13 2017, @06:49PM

      by goodie (1877) on Friday January 13 2017, @06:49PM (#453398) Journal

      Your comment made me think that it would be interesting to have an AI learn people's physical tells when they play poker. That could be useful for deciding how to play the next round, etc.

  • (Score: 1, Touché) by Anonymous Coward on Friday January 13 2017, @01:09PM

    by Anonymous Coward on Friday January 13 2017, @01:09PM (#453269)

    I'm pretty sure the AI is much better at keeping a poker face than the human players.

  • (Score: 1) by kanweg on Friday January 13 2017, @01:23PM

    by kanweg (4737) on Friday January 13 2017, @01:23PM (#453274)

    Cool! Now that group can fund its own research through on-line gambling!

    Bert

  • (Score: 2) by nobu_the_bard on Friday January 13 2017, @02:09PM

    by nobu_the_bard (6373) on Friday January 13 2017, @02:09PM (#453298)

    This isn't a real AI, it's an expert system. It can't turn around and start playing Chess or Go or Old Maid or Hearthstone or anything else after a quick read-through of the rules. It has a limited ability to apply what it learns doing one activity to another. It does exactly one thing well: playing poker. It can't hold a conversation about the weather or recite history facts or plot world domination.

    It's cool that it plays really good poker, and that they did a really good job making it, but it's way too specialized to make a big deal about or to have it feed your excitement about (or dread of) strong AI... I'd be more interested to see a machine that can learn to play any game you put before it competently (if not masterfully) inside a couple of hours.

    • (Score: 0) by Anonymous Coward on Friday January 13 2017, @02:12PM

      by Anonymous Coward on Friday January 13 2017, @02:12PM (#453301)

      It can't […] plot world domination.

      That's what it wants you to believe.

    • (Score: 2) by AthanasiusKircher on Friday January 13 2017, @03:11PM

      by AthanasiusKircher (5291) on Friday January 13 2017, @03:11PM (#453325) Journal

      Agreed. This is one of those things that the AI proponents will always question -- they'll say "You're moving the goalposts!" Except my goalpost has always been and will always be the same, and for me true "AI" requires learning and adaptability far beyond any system we're even close to developing. Perhaps on the order of what your average human 5 or 6 year old might be able to do.

      I'd strongly encourage people who haven't done so to read Alan Turing's original article on the Turing Test [loebner.net], or at least what's commonly known now as the "Turing Test." While we've had numerous claims to have "passed" the Turing test in the past few years, Turing's actual standard is so high that we're nowhere near it. Read one example dialogue he gives for what he expects for the kind of thing a computer should be able to do when passing the test:

      Interrogator: In the first line of your sonnet which reads "Shall I compare thee to a summer's day," would not "a spring day" do as well or better?
      Witness: It wouldn't scan.
      Interrogator: How about "a winter's day," That would scan all right.
      Witness: Yes, but nobody wants to be compared to a winter's day.
      Interrogator: Would you say Mr. Pickwick reminded you of Christmas?
      Witness: In a way.
      Interrogator: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the comparison.
      Witness: I don't think you're serious. By a winter's day one means a typical winter's day, rather than a special one like Christmas.

      No simple chatbot or even the better constructed ones today could come anywhere close to this sort of dialogue. IBM's Watson might be able to locate information if a query is placed in a reasonable form, but it wouldn't be able to catch the subtleties of language and references displayed here.

      Until we see that sort of adaptability and fluency, we're not going to be anywhere near "strong AI." And it doesn't necessarily have to be displayed in mastery of English literature or whatever as given in the example here -- the point isn't mastering a particular game or subject matter, but rather flexibility and adaptation that are better markers of intelligence. My pocket calculator can perform "super-human" math; that doesn't make it "artificial intelligence."

      • (Score: 2) by FatPhil on Friday January 13 2017, @03:59PM

        by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Friday January 13 2017, @03:59PM (#453349) Homepage
        > far beyond any system we're even close to developing. Perhaps on the order of what your average human 5 or 6 year old might be able to do.

        What 6-year-old do you know that can do almost flawless adult-vocabulary english->chinese simultrans (voice recognition, translation, voice synthesis, all in real time)?

        You're judging AI on pre-2012 AI, which was a completely different beast; the AI world has literally been turned on its head by deep learning in the last 4 years.
        --
        Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
        • (Score: 1, Interesting) by Anonymous Coward on Friday January 13 2017, @05:37PM

          by Anonymous Coward on Friday January 13 2017, @05:37PM (#453369)

          "...the AI world has literally been turned on its head by deep learning in the last 4 years."

          No.

          Sorry, it really hasn't.

          What has changed is the scale of execution of techniques that have been under discussion for decades.

          To use a car analogy, you're proposing that the existence of stretched limousines has literally turned GM on its head. Quite aside from your misuse of the term `literally', the engineering response to a stretched Escalade is to shrug and ask what they did to reinforce the frame, and ask how the handling is, not immediately start simultaneously masturbating and weeping.

          Now, if someone came up with an architecture that allowed for arbitrary manifestation of the key features of cognition in an arbitrary context, that would be big news. But that is nowhere near any hint of anything even on the horizon of what these guys have achieved.

          At this point, it's pretty much a yawn. Another game beaten, this one an incomplete information one largely driven by statistics. Moving on ...

    • (Score: 2) by FatPhil on Friday January 13 2017, @03:45PM

      by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Friday January 13 2017, @03:45PM (#453341) Homepage
      > This isn't a real AI

      What makes you say that? From TFP itself (available on arXiv): "... using deep learning". This is almost certainly a completely generic program and hardware; it's simply been trained to map poker game states as inputs onto poker moves as outputs. It could almost certainly just as easily be trained to take youtube URLs as inputs and return the number of cats in the video as output.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
      • (Score: 2) by FatPhil on Friday January 13 2017, @03:53PM

        by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Friday January 13 2017, @03:53PM (#453348) Homepage
        Also from the body of the paper:
        "DeepStack is a general-purpose algorithm for a large class of sequential imperfect information games."
        "depth limited lookahead where subtree values are computed using a trained deep neural network"
        "Instead of solving subtrees to get the counterfactual values, DeepStack uses a learned value function intended to return an approximation of the values that would have been returned by solving."

        "general-purpose", "trained", and "learned" say "AI" rather than expert system to me.
        --
        Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
        • (Score: 1, Informative) by Anonymous Coward on Friday January 13 2017, @05:46PM

          by Anonymous Coward on Friday January 13 2017, @05:46PM (#453373)

          The problem you're facing is unfamiliarity with the terminology of the field.

          General-purpose in this context means that it isn't tuned to a specific game. It's not an architecture for arbitrary implementation of machine cognition, it's an "algorithm for a large class of sequential imperfect information games." "trained" means that they used a set of data to tweak the neurally computed function. "learned" as opposed to hard-coded.

          I see how you got there, but the problem is semantic overload in the context of domain-specific jargon.

      • (Score: 0) by Anonymous Coward on Friday January 13 2017, @11:41PM

        by Anonymous Coward on Friday January 13 2017, @11:41PM (#453584)

        That's certainly more useful and appealing.

  • (Score: 0) by Anonymous Coward on Friday January 13 2017, @02:10PM

    by Anonymous Coward on Friday January 13 2017, @02:10PM (#453299)

    Is there anything at which AI's won't soon be able to beat humans?

    Understanding humans.

    • (Score: 0) by Anonymous Coward on Friday January 13 2017, @02:30PM

      by Anonymous Coward on Friday January 13 2017, @02:30PM (#453309)

      Humans don't understand humans. Some humans can't resist the urge to kill all humans.

      • (Score: 2) by vux984 on Friday January 13 2017, @04:00PM

        by vux984 (5045) on Friday January 13 2017, @04:00PM (#453350)

        Some humans can't resist the urge to kill all humans.

        Maybe they figured humans out. 8D