
posted by martyb on Friday August 11 2017, @06:22AM   Printer-friendly
from the Game-On! dept.

Tic-tac-toe, checkers, chess, Go, poker. Artificial intelligence rolled over each of these games like a relentless tide. Now Google's DeepMind is taking on the multiplayer space-war videogame StarCraft II. No one expects the robot to win anytime soon. But when it does, it will be a far greater achievement than DeepMind's conquest of Go—and not just because StarCraft is a professional e-sport watched by fans for millions of hours each month.

DeepMind and Blizzard Entertainment, the company behind StarCraft, just released the tools to let AI researchers create bots capable of competing in a galactic war against humans. The bots will see and do all the things human players can do, and nothing more. They will not enjoy an unfair advantage.
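The toolkit in question is DeepMind's open-source pysc2 package. As a rough sketch of what a scripted agent looks like (modeled on the examples that shipped with the initial release; module paths may have changed since), here is an agent that receives the full observation every step and simply passes:

    # minimal_agent.py -- a do-nothing pysc2 agent, for orientation only
    from pysc2.agents import base_agent
    from pysc2.lib import actions

    class NoOpAgent(base_agent.BaseAgent):
        """Sees the same feature layers a human player's screen encodes."""

        def step(self, obs):
            super(NoOpAgent, self).step(obs)
            # obs.observation holds the screen/minimap feature layers;
            # a real bot would inspect them here and choose among
            # obs.observation.available_actions instead of passing.
            return actions.FunctionCall(actions.FUNCTIONS.no_op.id, [])

If the package still works as its README describes, an agent like this can be run against the bundled maps with something like: python -m pysc2.bin.agent --map Simple64 --agent minimal_agent.NoOpAgent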

DeepMind and Blizzard are also opening a cache of data from 65,000 past StarCraft II games that will likely be vital to the development of these bots, and say the trove will grow by around half a million games each month. DeepMind applied machine-learning techniques to Go matchups to develop its champion-beating Go bot, AlphaGo. A new DeepMind paper includes early results from feeding StarCraft data to its learning software, and shows it is a long way from mastering the game. And Google is not the only big company getting more serious about StarCraft: late Monday, Facebook released its own collection of data from 65,000 human-on-human games of the original StarCraft to help bot builders.
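To make "feeding StarCraft data to its learning software" concrete at toy scale: the usual first baseline on a replay corpus is behavior cloning, plain supervised learning from an encoded game state to the action the human took in that state. Everything below is a hypothetical stand-in (random 16-float "states", a made-up 4-way action label) rather than the real replay format, but the shape of the approach is the same:

    # behavior_cloning_sketch.py -- imitation learning in miniature
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Pretend each replay frame was encoded as 16 floats plus the index
    # (0-3) of the action the human issued. Both are synthetic here; real
    # StarCraft observations and action spaces are vastly richer.
    rng = np.random.default_rng(0)
    states = rng.normal(size=(5000, 16))
    human_actions = (states[:, 0] > 0).astype(int) + 2 * (states[:, 1] > 0)

    # Behavior cloning: fit state -> action, then measure how often the
    # model picks the same action as the human on held-out frames.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(states[:4000], human_actions[:4000])
    print("held-out imitation accuracy:",
          clf.score(states[4000:], human_actions[4000:]))

As the next paragraph notes, a bot trained this way can only ever copy human play; it says nothing about finding machine-native strategies.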

[...] Beating StarCraft will require numerous breakthroughs. And simply pointing current machine-learning algorithms at the new tranches of past games to copy humans won't be enough. Computers will need to develop styles of play tuned to their own strengths, for example in multi-tasking, says Martin Rooijackers, creator of leading automated StarCraft player LetaBot. "The way that a bot plays StarCraft is different from how a human plays it," he says. After all, the Wright brothers didn't get machines to fly by copying birds.

[AI researcher David] Churchill guesses it will be five years before a StarCraft bot can beat a human. He also notes that many experts predicted a similar timeframe for Go—right before AlphaGo burst onto the scene.

Have any Soylentils here experimented with Deep Learning algorithms in a game context? If so how did it go and how did it compare to more traditional opponent strategies?

Source: https://www.wired.com/story/googles-ai-declares-galactic-war-on-starcraft-/


Original Submission

 
  • (Score: 3, Interesting) by dltaylor (4693) on Friday August 11 2017, @08:44AM (#552206)

    Back in the LAN party days, it was fun for 3-6 of us to disconnect the router from the Internet, drop all the firewalls, and have at Starcraft/Broodwar. Since I had a few "spare" computers, I really wanted to train real expert systems as opponents, with the same limitations of visibility as human players (no way to beat the twitch speed, I supposed), and able to play as, and against, various play styles, rather than the inevitable zergling rush. No problem designing the VGA, keyboard, and mouse interfaces, nor the basic game play, but I simply could not reproduce the "thought process" on the PCs of the time.

    Not normally envious of other programmers' opportunities, but a bit of that for the folks at Google who are finally getting to do it.
