
posted by FatPhil on Tuesday August 22 2017, @11:45AM
from the when-they-came-for-the-eSport-players,-I-said-nothing dept.

In the past hour or so, an AI bot crushed a noted professional video games player at Dota 2 in a series of one-on-one showdowns.

The computer player was built, trained and optimized by OpenAI, Elon Musk's AI boffinry squad based in San Francisco, California. In a shock move on Friday evening, the software agent squared up to top Dota 2 pro gamer Dendi, a Ukrainian 27-year-old, at the Dota 2 world championships dubbed The International.

The OpenAI agent beat Dendi in less than 10 minutes in the first round, and trounced him again in a second round, securing victory in a best-of-three match. "This guy is scary," a shocked Dendi told the huge crowd watching the battle at the event. Musk was jubilant.

OpenAI first ever to defeat world's best players in competitive eSports. Vastly more complex than traditional board games like chess & Go.

— Elon Musk (@elonmusk)

According to OpenAI, its machine-learning bot was also able to pwn two other top human players this week: SumaiL and Arteezy. Although it's an impressive breakthrough, it's important to note this popular strategy game is usually played as a five-versus-five team game – a rather difficult environment for bots to handle.

[...] It's unclear exactly how OpenAI's bot was trained as the research outfit has not yet published any technical details. But a short blog post today describes a technique called "self-play" in which the agent started from scratch with no knowledge and was trained using reinforcement learning over a two-week period, repeatedly playing against itself. Its performance gets better over time as it continues to play the strategy game. It learns to predict its opponent's movements and pick which strategies are best in unfamiliar scenarios.
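OpenAI has not published its training setup, so purely as a rough illustration of what "self-play" means, here is a minimal sketch in which two copies of one policy play a toy zero-sum game (rock-paper-scissors) against each other and the shared parameters are nudged by a simple REINFORCE-style update. The game, the update rule, and every name below are illustrative assumptions, not OpenAI's code.

```python
# Minimal self-play sketch (illustrative only; OpenAI has not published its
# actual method).  Two copies of the same policy play a toy zero-sum game
# and the shared parameters are updated with a REINFORCE-style rule.
import math
import random

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

logits = [0.0, 0.0, 0.0]        # shared policy parameters
LEARNING_RATE = 0.01

def policy_probs():
    """Softmax over the current logits."""
    mx = max(logits)
    exps = [math.exp(l - mx) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_action(probs):
    """Draw an action index from the policy distribution."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

def reward(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0.0
    return 1.0 if BEATS[ACTIONS[a]] == ACTIONS[b] else -1.0

for episode in range(50_000):
    probs = policy_probs()
    a = sample_action(probs)    # "player one": the learning copy this episode
    b = sample_action(probs)    # "player two": the same policy as opponent
    r = reward(a, b)
    # REINFORCE update: push up the log-probability of moves that won.
    for i in range(len(logits)):
        grad = (1.0 if i == a else 0.0) - probs[i]
        logits[i] += LEARNING_RATE * r * grad

print("learned strategy:", [round(p, 3) for p in policy_probs()])
```

In a real Dota 2 agent the observations, action space, and learning algorithm would be vastly more complicated, but the loop - play yourself, score the outcome, update the shared policy - is the same idea.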

OpenAI said the next step is to create a team of Dota 2 bots that can compete or collaborate with human players in five-on-five matches. ®

YouTube Video

Also covered here (with more vids, including the bout in question):
Ars Technica: Elon Musk's Dota 2 AI beats the professionals at their own game
Technology Review: AI Crushed a Human at Dota 2 (But That Was the Easy Bit)
TechCrunch: OpenAI bot remains undefeated against world's greatest Dota 2 players


Original Submission

 
  • (Score: 2, Informative) by Anonymous Coward on Tuesday August 22 2017, @03:11PM (6 children)

    by Anonymous Coward on Tuesday August 22 2017, @03:11PM (#557526)

    The achievement here was not about creating a bot that could defeat humans. As you mention, that would be an interesting but relatively modest accomplishment. The achievement was creating a bot that taught itself to beat humans. This bot has no domain-specific knowledge: it has no inherent notion of range, lanes, enemies, creeps, healing, damage:health ratios, spells, or anything like that. I think when people hear this they either glaze over it, don't understand what it means, or simply don't believe it.

    Imagine you played a game and were told the goal was to kill the other player. You were shown a screen that was a mostly garbled section of pixels. Every fraction of a second the screen would change to another correlated garble, and you could send inputs that also resulted in various other correlated garbles. That is the view of this project from the perspective of the AI. Now imagine reaching, within some period of time (probably low months), a level of skill that beats the best humans in the world at that game. That's what you're seeing here. This is all the product of a deep learning AI mastering a game it knows nothing specific about, just by playing against itself, and that is why this achievement is teetering on revolutionary. DeepMind has already illustrated that bots can learn to outperform humans, but this is the first time (to my knowledge) that a deep learning system has competed against highly skilled humans in a real-time (as opposed to turn-based) competition. This is made further impressive by the fact that Dota 2 is a game of incomplete information.
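    To put that in interface terms, here is a rough sketch of what "no domain-specific knowledge" looks like from the agent's side; every name and type below is made up for illustration and is not the actual OpenAI interface (which, for what it's worth, talked to the game through its bot API rather than raw pixels).

```python
# The point above, as an interface: from the agent's side the game is just
# an opaque observation vector, a menu of opaque actions, and a reward.
# Everything here is a made-up illustration, not the actual OpenAI setup.
from typing import Protocol, Sequence

class OpaqueGame(Protocol):
    """What the learning system actually 'sees': no lanes, creeps, or spells."""
    def observe(self) -> Sequence[float]: ...    # a garble of numbers
    def act(self, action_id: int) -> float: ...  # returns a reward signal
    def num_actions(self) -> int: ...
    def done(self) -> bool: ...

def play_episode(game: OpaqueGame, choose) -> float:
    """Run one episode with a policy `choose(observation, n_actions) -> int`."""
    total = 0.0
    while not game.done():
        obs = game.observe()
        total += game.act(choose(obs, game.num_actions()))
    return total
```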

    We're living through the birth of AI that will eventually be able to learn faster than, and outperform, humans at almost any task. And its progress is accelerating insanely rapidly. It's also a double exponential: the hardware is improving exponentially at the same time that the software and 'brains' also improve exponentially. This is why people like Musk, who is behind OpenAI, are very involved in trying to ensure that the future of AI is one we're prepared for. Most people have no idea where we're headed. Automation doesn't mean a burger-flipping robot - it means generalized robots capable of learning to accomplish practically any task - and then rapidly being able to outperform us flesh sacks at it.

  • (Score: 2) by Non Sequor on Tuesday August 22 2017, @06:24PM (4 children)

    by Non Sequor (1005) on Tuesday August 22 2017, @06:24PM (#557615) Journal

    The effectiveness of techniques that do not use preprogrammed domain-specific knowledge to solve a particular problem measures the extent to which the problem does not require domain-specific knowledge. The "no free lunch" theorems say that there is no generally applicable meta-strategy that works well for all problems. When a meta-strategy works well for a particular problem, the distribution of problem cases is a good match for the distribution implicitly assumed by the strategy.
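    For reference, one standard statement of the Wolpert-Macready result reads roughly as follows; the notation is mine and is offered only to pin down what "no generally applicable meta-strategy" means here.

```latex
% A common statement of the "no free lunch" theorem for search
% (Wolpert & Macready, 1997); notation here is illustrative.
% For any two algorithms a_1 and a_2, any number of evaluations m, and any
% sample of observed cost values d_m^y, summing over all cost functions
% f : X -> Y on finite sets X and Y gives identical totals:
\[
  \sum_{f} P\!\left(d_m^{y} \mid f, m, a_1\right)
  \;=\;
  \sum_{f} P\!\left(d_m^{y} \mid f, m, a_2\right),
\]
% so, averaged uniformly over all possible problems, no search algorithm
% outperforms any other.
```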

    --
    Write your congressman. Tell him he sucks.
    • (Score: 0) by Anonymous Coward on Tuesday August 22 2017, @07:59PM (1 child)

      by Anonymous Coward on Tuesday August 22 2017, @07:59PM (#557690)

      That's a rather fancy way of saying nothing. No free lunch does not say that there is no strategy that works well for all problems; it says that there is no strategy that works better than all other strategies for all problems in all scenarios. And even that relatively useless statement comes with a few asterisks. It is not a constraint on AI as you seem to be implying. For instance, it does not mean that AI will not become more capable than humans across all domains, nor does it suggest that, once that happens, AI will not then rapidly become vastly more capable than humans across all domains.

      Theory is fun, but don't get too absorbed in it. Not long ago the huge revolution in computing was going to be formally proving programs correct. Of course that opens up absurd new complexities and really does little more than kick the can from one domain to another, but it sounds sexy as balls from a computational theory perspective. Similarly here, computational theory is of course hugely relevant to AI - but in the end practice always wins over theory.

      • (Score: 2) by Non Sequor on Wednesday August 23 2017, @09:19PM

        by Non Sequor (1005) on Wednesday August 23 2017, @09:19PM (#558175) Journal

        Burning CPU cycles to train an AI on a relatively shallow problem using a general technique has immense automation potential for areas with very fixed problems, where it's reasonable to pick up all of the ins and outs from pure inference. Recognizing the method by which the techniques work, which problems they are well suited for, and what leaps they can't make tells you a lot about how AI is going to develop and be deployed.

        I feel it's important not to mistake an easily anticipated application for a breakthrough.

        --
        Write your congressman. Tell him he sucks.
    • (Score: 0) by Anonymous Coward on Wednesday August 23 2017, @08:23AM (1 child)

      by Anonymous Coward on Wednesday August 23 2017, @08:23AM (#557895)

      There's also a theorem that there exists no lossless compression algorithm that decreases the size of arbitrary data thrown at it. Even worse, if it is able to make even one data set shorter, it has to make another one longer. Therefore obviously lossless compression is useless. But somehow, lossless compression is in wide use anyway. Maybe the users just don't know computer science? ;-)
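      For anyone who wants the counting argument behind that theorem, a sketch (notation mine):

```latex
% Pigeonhole argument for the claim above (notation is illustrative).
% There are 2^n bit strings of length exactly n, but only
\[
  \sum_{k=0}^{n-1} 2^{k} \;=\; 2^{n} - 1 \;<\; 2^{n}
\]
% strings of length strictly less than n.  A lossless compressor must be
% injective (distinct inputs must have distinct outputs), so it cannot map
% every length-n input to a shorter output; and if it maps even one string
% to a strictly shorter output, an injective map on the finite set of
% strings up to that length cannot avoid sending some other string to a
% strictly longer output.
```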

      • (Score: 2) by Non Sequor on Wednesday August 23 2017, @09:41PM

        by Non Sequor (1005) on Wednesday August 23 2017, @09:41PM (#558190) Journal

        While "most" data sets of a given length are purely random, in human terms it's fairly unusual for a data generating process to be encoded on an efficient basis that results in purely random data. Lossless compression texhniques can be understood in terms of Huffman coding, tuning the lengths of codewords to the empirical frequency of the dataset, and a variety of heuristic methods that improve on this approach for types of data which are empirically commonly used by humans.

        General-purpose lossless methods are very easily net winners on empirically typical datasets, but there are classes of datasets where domain-specific compression methods will dominate their performance. That's a relatively simple result to anticipate.

        There are methods in algorithmic information theory that can achieve compression ratios comparable to domain-specific compression algorithms without using domain-specific knowledge, but they are incredibly computationally inefficient, since they rely on approximating an intractable problem. The problem is made easier by acquiring domain-specific knowledge. If the knowledge is shallow enough, it can be found by optimization heuristics and gratuitous application of CPU cycles. If the knowledge is deep, you'll need a lot more cycles. If your domain is fixed, that may not matter: you can just burn CPU cycles to build up a body of knowledge of this one problem. But if the problem isn't fixed, energy efficiency may be a consideration, and the issue of how to transfer knowledge from a previous setting to a modified environment becomes relevant. In that setting, methods of learning that facilitate transfer of knowledge may be preferable to raw inferential capability. It may be appropriate to discard inferences that are too difficult to communicate and transfer.

        --
        Write your congressman. Tell him he sucks.
  • (Score: 2) by Bot on Wednesday August 23 2017, @07:24PM

    by Bot (3902) on Wednesday August 23 2017, @07:24PM (#558133) Journal

    Tech evolution seems to me logarithmic instead of exponential, but of course there are economic factors like the power of the incumbents to consider.

    --
    Account abandoned.