
posted by janrinok on Thursday November 21 2019, @06:22PM
from the you-think-this-is-a-game? dept.

In a new attempt to dethrone humans in game-play, DeepMind takes on Starcraft II. The results are published in Nature.

DeepMind today announced a new milestone for its artificial intelligence agents trained to play the Blizzard Entertainment game StarCraft II. The Google-owned AI lab’s more sophisticated software, still called AlphaStar, is now grandmaster level in the real-time strategy game, capable of besting 99.8 percent of all human players in competition. The findings are to be published in a research paper in the scientific journal Nature.

Not only that, but DeepMind says it also evened the playing field when testing the new and improved AlphaStar against human opponents who opted into online competitions this past summer. For one, it trained AlphaStar to use all three of the game’s playable races, adding to the complexity of the game at the upper echelons of pro play. It also limited AlphaStar to only viewing the portion of the map a human would see and restricted the number of mouse clicks it could register to 22 non-duplicated actions every five seconds of play, to align it with standard human movement.
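The action cap described above amounts to a sliding-window rate limiter over non-duplicated actions. A minimal sketch of the idea in Python (the class and its interface are hypothetical; DeepMind has not published its limiter in this form):

```python
from collections import deque

class ActionLimiter:
    """Sliding-window limiter: at most `max_actions` non-duplicated
    actions per `window` seconds, mirroring the 22-per-5-seconds cap."""

    def __init__(self, max_actions=22, window=5.0):
        self.max_actions = max_actions
        self.window = window
        self.history = deque()  # (timestamp, action) pairs inside the window

    def allow(self, timestamp, action):
        # Expire records that have fallen out of the window.
        while self.history and timestamp - self.history[0][0] >= self.window:
            self.history.popleft()
        # A repeat of an action already in the window is "duplicated"
        # and does not count against the cap (one reading of the rule).
        if any(a == action for _, a in self.history):
            return True
        if len(self.history) < self.max_actions:
            self.history.append((timestamp, action))
            return True
        return False
```

Whether duplicates are exempt or simply collapsed is an interpretation; the sketch only illustrates how such a cap keeps the agent's effective actions-per-minute in a human range.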

Still, the AI was capable of achieving grandmaster level, the highest possible online competitive ranking, and marks the first ever system to do so in StarCraft II. DeepMind sees the advancement as more proof that general-purpose reinforcement learning, which is the machine learning technique underpinning the training of AlphaStar, may one day be used to train self-learning robots, self-driving cars, and create more advanced image and object recognition systems.
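For readers unfamiliar with the technique, the core of reinforcement learning is an agent improving a policy from reward signals alone. A toy tabular Q-learning loop on a five-state corridor shows that loop (AlphaStar itself uses deep networks and league-based self-play, far beyond this sketch):

```python
import random

# Tabular Q-learning on a 5-state corridor: moving right from the last
# interior state earns reward 1. Illustrates learning from reward only.
N_STATES, ACTIONS = 5, (-1, +1)       # actions: step left, step right
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection.
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda a: Q[(s, a)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Standard Q-learning update toward the bootstrapped target.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# The learned greedy policy moves right from every non-terminal state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
```

The same learn-from-reward principle scales, with function approximation and self-play, to games the size of StarCraft II.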

“The history of progress in artificial intelligence has been marked by milestone achievements in games. Ever since computers cracked Go, chess and poker, StarCraft has emerged by consensus as the next grand challenge,” said David Silver, a DeepMind principal research scientist on the AlphaStar team, in a statement. “The game’s complexity is much greater than chess, because players control hundreds of units; more complex than Go, because there are 10^26 possible choices for every move; and players have less information about their opponents than in poker.”

Back in January, DeepMind announced that its AlphaStar system was able to best top pro players 10 matches in a row during a prerecorded session, but it lost to pro player Grzegorz “MaNa” Komincz in a final match streamed live online. The company kept improving the system between January and June, when it said it would start accepting invites to play the best human players from around the world. The ensuing matches took place in July and August, DeepMind says.

The results were stunning: AlphaStar had become one of the most sophisticated StarCraft II players on the planet, but remarkably still not quite superhuman. Roughly 0.2 percent of players are capable of defeating it, but it is largely considered only a matter of time before the system improves enough to crush any human opponent.

Watch the video: https://www.nature.com/articles/d41586-019-03343-4


Original Submission

 
  • (Score: 4, Interesting) by tekk on Friday November 22 2019, @04:20AM (2 children)

    by tekk (5704) Subscriber Badge on Friday November 22 2019, @04:20AM (#923305)

    A huge problem with the reporting of this stuff is that they have subtle cheats. OpenAI and DotA 2 was a big one. For example from the articles I've seen they used the bot API, so they always knew with 100% accuracy if a spell would kill an enemy or if they were in range, etc. They were also controlling all 5 heroes (dota is a 5v5 game) with one single AI, meaning that communication was 100% perfect within the team. In this case I'll bet the bot was able to basically issue multiple commands at once and I'll guarantee it could pay full attention to absolutely everywhere it had vision, if it didn't have perfect knowledge to begin with, since that's a classic RTS AI cheat.

    Another fun one is that bots just do weird stuff. They make moves that make absolutely no sense which makes them impossible to predict. Kinda like that old anecdote where master swordsmen aren't afraid of dueling master swordsmen, they're afraid of fighting idiots who have no idea how to even hold a sword because THOSE guys are going to get them both killed.

  • (Score: 2) by richtopia on Friday November 22 2019, @04:23PM

    by richtopia (3160) on Friday November 22 2019, @04:23PM (#923432) Homepage Journal

This article didn't talk about cheating, but previously I saw that this project took steps to prevent cheating. There is a specific APM cap the AI is not allowed to exceed, and its vision is kept as realistic as possible (to the point of having to move the camera's FOV).

The video discussed the training dataset, and how the AI struggled against a new set of competitors. When the AI was released into the wild, human creativity racked up wins while the AI learned. More recently they have been building more specialized competitor AIs: an AI with a strategy that isn't robust in general competition, but can exploit specific weaknesses in the current AI's strategy.

As the interview points out, the aim here is not to win at StarCraft, but to provide a difficult deep learning test case. If the AI has an unfair advantage it can exploit, you are defeating the experiment. If anything, I suspect the researchers may be giving the AI a disadvantage in some areas like APM to really stress the learning aspect of the AI.

  • (Score: 0) by Anonymous Coward on Friday November 22 2019, @06:20PM

    by Anonymous Coward on Friday November 22 2019, @06:20PM (#923468)

What you describe isn't "cheating" as such, any more than a person with legs outsprinting a person in a wheelchair in a race is cheating. The bot does have superhuman abilities in vision, situational awareness, and what have you (and subhuman abilities in others).

If they had map-hacks on or had "vision" of more terrain than would be visible on the standard viewscreen and minimap, that would be cheating. However, merely being able to notice instantly, every time, when a base is being approached by a swarm of red units on the minimap is not cheating.

That being said, computers do have superhuman abilities in click precision, click rate, visual acuity, and what have you. This is why I've never thought it special that an AI could beat a human. The only reason it wouldn't is a lack of funding and R&D. I'd be more surprised if, after throwing $1 billion USD at the problem and removing artificial handicaps, a human *could* consistently beat an AI.