DeepMind's AI agents conquer human pros at Starcraft II
AI agents developed by Google's DeepMind subsidiary have beaten human pros at Starcraft II — a first in the world of artificial intelligence. In a series of matches streamed on YouTube and Twitch, AI players beat the humans 10 games in a row. In the final match, pro player Grzegorz "MaNa" Komincz was able to snatch a single victory for humanity.
[...] Beating humans at video games might seem like a sideshow in AI development, but it's a significant research challenge. Games like Starcraft II are harder for computers to play than board games like chess or Go. In video games, AI agents can't watch the movement of every piece to calculate their next move, and they have to react in real time.
These factors didn't seem like much of an impediment to DeepMind's AI system, dubbed AlphaStar. First, it beat pro player Dario "TLO" Wünsch, before moving to take on MaNa. The games were originally played in December last year at DeepMind's London HQ, but a final match against MaNa was streamed live today, providing humans with their single victory.
Professional Starcraft commentators described AlphaStar's play as "phenomenal" and "superhuman." In Starcraft II, players start on different sides of the same map before building up a base, training an army, and invading the enemy's territory. AlphaStar was particularly good at what's called "micro," short for micromanagement, referring to the ability to control troops quickly and decisively on the battlefield.
[...] Experts have already begun to dissect the games and argue over whether AlphaStar had any unfair advantages. The AI agent was hobbled in some ways. For example, it was restricted from performing more clicks per minute than a human. But unlike human players, it was able to view the whole map at once, rather than navigating it manually.
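The clicks-per-minute cap mentioned above is essentially a rate limiter on the agent's actions. A minimal sketch of one way such a cap could work — a sliding-window limiter in Python, where the numbers and the mechanism are purely illustrative assumptions, not DeepMind's actual implementation:

```python
from collections import deque

class APMLimiter:
    """Toy sliding-window rate limiter: allow at most `max_actions`
    actions per `window` time units, roughly how an agent could be
    capped to human-scale actions per minute. Illustrative only."""

    def __init__(self, max_actions=300, window=60.0):
        self.max_actions = max_actions
        self.window = window
        self.timestamps = deque()  # times of actions inside the window

    def try_act(self, now):
        """Return True if an action at time `now` is allowed."""
        # Drop timestamps that have fallen out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now)
            return True
        return False

limiter = APMLimiter(max_actions=3, window=60.0)
allowed = [limiter.try_act(t) for t in (0.0, 1.0, 2.0, 3.0, 61.0)]
# The first three actions fit in the window, the fourth is blocked,
# and by t=61.0 the action at t=0.0 has expired, freeing a slot.
```

Note that this caps bursts the way a window does, not the way a human does: a bot can still spend its entire budget in a fraction of a second, which is part of why experts argue over whether such limits are really "fair."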
Previously: Google DeepMind to Take on Starcraft II
Google's AI Declares Galactic War on Starcraft
Related: DeepMind's AI Agents Exceed Human-Level Gameplay in Quake III
Move Over AlphaGo: AlphaZero Taught Itself to Play Three Different Games
Related Stories
Google's DeepMind division will attempt to make an AI that can play Starcraft II in real time without using the same unfair knowledge and capabilities (such as controlling units that are "off-screen") that Blizzard's own AI uses. Blizzard and DeepMind are working on a build of the game that will be "open and available to all researchers next year".
Reported at The Washington Post and The Verge.
Tic-tac-toe, checkers, chess, Go, poker. Artificial intelligence rolled over each of these games like a relentless tide. Now Google's DeepMind is taking on the multiplayer space-war videogame StarCraft II. No one expects the robot to win anytime soon. But when it does, it will be a far greater achievement than DeepMind's conquest of Go—and not just because StarCraft is a professional e-sport watched by fans for millions of hours each month.
DeepMind and Blizzard Entertainment, the company behind StarCraft, just released the tools to let AI researchers create bots capable of competing in a galactic war against humans. The bots will see and do all the things human players can do, and nothing more. They will not enjoy an unfair advantage.
DeepMind and Blizzard also are opening a cache of data from 65,000 past StarCraft II games that will likely be vital to the development of these bots, and say the trove will grow by around half a million games each month. DeepMind applied machine-learning techniques to Go matchups to develop its champion-beating Go bot, AlphaGo. A new DeepMind paper includes early results from feeding StarCraft data to its learning software, and shows it is a long way from mastering the game. And Google is not the only big company getting more serious about StarCraft. Late Monday, Facebook released its own collection of data from 65,000 human-on-human games of the original StarCraft to help bot builders.
[...] Beating StarCraft will require numerous breakthroughs. And simply pointing current machine-learning algorithms at the new tranches of past games to copy humans won't be enough. Computers will need to develop styles of play tuned to their own strengths, for example in multi-tasking, says Martin Rooijackers, creator of leading automated StarCraft player LetaBot. "The way that a bot plays StarCraft is different from how a human plays it," he says. After all, the Wright brothers didn't get machines to fly by copying birds.
Churchill guesses it will be five years before a StarCraft bot can beat a human. He also notes that many experts predicted a similar timeframe for Go—right before AlphaGo burst onto the scene.
Have any Soylentils here experimented with Deep Learning algorithms in a game context? If so how did it go and how did it compare to more traditional opponent strategies?
Source: https://www.wired.com/story/googles-ai-declares-galactic-war-on-starcraft-/
Submitted via IRC for BoyceMagooglyMonkey
AI agents continue to rack up wins in the video game world. Last week, OpenAI's bots were playing Dota 2; this week, it's Quake III, with a team of researchers from Google's DeepMind subsidiary successfully training agents that can beat humans at a game of capture the flag.
As we've seen with previous examples of AI playing video games, the challenge here is training an agent that can navigate a complex 3D environment with imperfect information. DeepMind's researchers used a method of AI training that's also becoming standard: reinforcement learning, which is basically training by trial and error at a huge scale.
Agents are given no instructions on how to play the game, but simply compete against themselves until they work out the strategies needed to win. Usually this means one version of the AI agent playing against an identical clone. DeepMind gave extra depth to this formula by training a whole cohort of 30 agents to introduce a "diversity" of play styles. How many games does it take to train an AI this way? Nearly half a million, each lasting five minutes.
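The self-play loop described above can be sketched with a toy example. Below is tabular Q-learning on a simple take-away game, where one shared agent improves purely by playing against itself — a deliberately tiny stand-in for DeepMind's large-scale deep RL, with the game, the hyperparameters, and the negamax-style update all chosen for illustration:

```python
import random

def train_selfplay(episodes=20000, heap=21, alpha=0.5, eps=0.3, seed=0):
    """Self-play Q-learning for a toy take-away game: players alternate
    removing 1-3 stones, and whoever takes the last stone wins. Both
    sides share and update one Q-table -- the agent is given no strategy
    and learns only by trial and error against itself."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(1, heap + 1) for a in (1, 2, 3) if a <= s}

    def best(s):
        # Greedy action for the player to move in state s.
        return max((a for a in (1, 2, 3) if a <= s), key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s = rng.randint(1, heap)
        while s > 0:
            legal = [x for x in (1, 2, 3) if x <= s]
            a = rng.choice(legal) if rng.random() < eps else best(s)
            s2 = s - a
            if s2 == 0:
                target = 1.0  # this move takes the last stone and wins
            else:
                # Zero-sum game: the opponent's best reply is our loss.
                target = -max(Q[(s2, x)] for x in (1, 2, 3) if x <= s2)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2  # the other player moves next, sharing the same Q
    return Q, best

Q, best = train_selfplay()
# The learned policy leaves the opponent a multiple of 4:
# from 5 stones take 1, from 6 take 2, from 7 take 3.
```

The same idea — the current position's value is the negation of the opponent's best continuation — scales up very poorly to games like Quake III, which is why DeepMind replaces the lookup table with deep networks and trains a diverse population of agents instead of a single clone pair.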
Source: https://www.theverge.com/2018/7/4/17533898/deepmind-ai-agent-video-game-quake-iii-capture-the-flag
Move over AlphaGo: AlphaZero taught itself to play three different games
Google's DeepMind—the group that brought you the champion game-playing AIs AlphaGo and AlphaGoZero—is back with a new, improved, and more-generalized version. Dubbed AlphaZero, this program taught itself to play three different board games (chess, Go, and shogi, a Japanese form of chess) in just three days, with no human intervention.
A paper describing the achievement was just published in Science. "Starting from totally random play, AlphaZero gradually learns what good play looks like and forms its own evaluations about the game," said Demis Hassabis, CEO and co-founder of DeepMind. "In that sense, it is free from the constraints of the way humans think about the game."
[...] As [chess grand master Garry] Kasparov points out in an accompanying editorial in Science, these days your average smartphone chess-playing app is far more powerful than Deep Blue. So AI researchers turned their attention in recent years to creating programs that can master the game of Go, a hugely popular board game in East Asia that dates back more than 2,500 years. It's a surprisingly complicated game, much more difficult than chess, despite involving only two players and a fairly simple set of ground rules. That makes it an ideal testing ground for AI.
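The gap between the two games can be made concrete with a back-of-envelope calculation using commonly cited averages — roughly 35 legal moves over roughly 80 plies for chess, and roughly 250 moves over roughly 150 plies for Go. These are rough textbook figures, not precise counts:

```python
import math

# Rough, commonly cited averages (assumptions, not exact figures):
# (average branching factor, typical game length in plies).
games = {"chess": (35, 80), "Go": (250, 150)}

# log10 of branching**depth: the number of digits in a crude
# estimate of the size of each game tree.
digits = {name: depth * math.log10(branching)
          for name, (branching, depth) in games.items()}

print(f"chess: ~10^{digits['chess']:.0f} games, Go: ~10^{digits['Go']:.0f} games")
# Go's tree (~10^360) dwarfs chess's (~10^124), which is why the
# brute-force style of search that worked for chess never cracked Go.
```

StarCraft II raises the stakes further still: on top of a huge action space, it adds hidden information and real-time play, which no game-tree estimate of this kind even captures.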
(Score: 1, Insightful) by Anonymous Coward on Saturday January 26 2019, @03:50AM (3 children)
To be clear, I think that a computer winning at Go is much more significant and impressive than winning at Starcraft 2, and that, push come to shove, a computer will easily be able to beat a human (using APM if nothing else). That being said, from the article:
So yeah... give a computer unfair advantages, and it wins. Shocking. It's just like how on Jeopardy, precision-perfect timing allowed the computer to win the buzz each time and win the game.
(Score: 2) by takyon on Saturday January 26 2019, @05:05AM
AlphaDeepWhatever's limitations may be coming into focus. This map thing - if we were to restrict it to the normal view of the map (at a screen resolution allowed by the game) as well as the mini-map, and force it to scroll or click as if it were using a mouse, then the problem space could become a lot larger. Suddenly it needs to actually remember what's going on, and it becomes less of a pure image recognition problem. It has to react to sound alerts or enemies appearing on the mini-map.
I don't think it's impossible to make it do everything like a human short of physically staring at a screen, tapping on a keyboard, and moving a mouse (which could be done with a robot if we want to go there). But it might (un)expectedly make the problem orders of magnitude harder.
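The point about the camera restriction can be illustrated with a toy grid map (entirely hypothetical — just to show how a scrollable window hides information that a whole-map observer gets for free):

```python
def camera_view(world, cam_row, cam_col, height, width):
    """Crop a 2D grid `world` to the rectangle whose top-left corner
    is (cam_row, cam_col) -- what a player scrolling a camera sees,
    versus the whole-map view AlphaStar reportedly had."""
    return [row[cam_col:cam_col + width]
            for row in world[cam_row:cam_row + height]]

# A tiny 4x6 "map": '.' = empty terrain, letters mark units.
world = [
    list("......"),
    list(".A...."),
    list("....B."),
    list("......"),
]

full_view = world                        # whole-map observer sees both A and B
window = camera_view(world, 0, 0, 2, 3)  # camera at top-left sees only A
# Unit 'B' at (2, 4) lies outside the 2x3 window: the windowed agent
# must either scroll (spending actions) or remember what it saw
# earlier (requiring memory) -- exactly the extra difficulty argued above.
```

In other words, the camera turns a fully observed image-recognition problem into a partially observed one, which is generally a much harder class of problem for learning agents.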
Advances in transistors or 3D architecture could make the hardware thousands or millions of times faster, allowing the problems to be brute forced like before. Or we could see Google move towards neuromorphic and "strong AI", i.e. a machine with apparent sapience. The latter would be developed in secret. And there would be a lot more interesting applications than "Starcraft 2 playing slave".
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 0) by Anonymous Coward on Saturday January 26 2019, @03:39PM
"So yeah... give a computer unfair advantages, and it wins."
This is the wrong way to think about things. Let me illustrate with another example.
Give a tractor an unfair advantage and it wins the digging contest.
(Score: 3, Interesting) by RandomFactor on Saturday January 26 2019, @04:46PM
Sure but that generally holds for humans also :-)
If you are in a competition of any sort, then the same constraints should apply to all. However, for real world applications, we certainly want AI to have all those cool advantages.
Put an AI in command of, say, ocean going warships [npr.org] and I want them to be able to pull data from every radar, lidar, microwave, sonar, camera, magnetic, pressure, recon by fire, subspace, and whatever-the-heck-else they have simultaneously and avoid issues like no human ever could (particularly at the dreary end of an extremely long day [soylentnews.org])
In Pravda ("Truth") there is no news (izvestia); in Izvestia ("News") there is no truth (pravda)
(Score: 0) by Anonymous Coward on Sunday January 27 2019, @10:40AM (2 children)
I've been beaten by the built-in AI of Starcraft several times. Looks like Google is making slow progress.
(Score: 3, Funny) by takyon on Sunday January 27 2019, @08:08PM (1 child)
I assume that AI has even more unfair advantages than AlphaCraft.
(Score: 0) by Anonymous Coward on Sunday January 27 2019, @08:59PM
That's true. I still can't figure out how to beat the Human AI of Warcraft II in map 7: The Fall of Stromgarde.
(Score: 2) by takyon on Sunday January 27 2019, @09:04PM
The AOE2 community is bigger today due to the new expansions, but the game could seriously use some bug and performance fixes. They could use the game to conduct their own PR-friendly AI research [microsoft.com].