Submitted via IRC for BoyceMagooglyMonkey
AI agents continue to rack up wins in the video game world. Last week, OpenAI's bots were playing Dota 2; this week, it's Quake III, with a team of researchers from Google's DeepMind subsidiary successfully training agents that can beat humans at a game of capture the flag.
As we've seen with previous examples of AI playing video games, the challenge here is training an agent that can navigate a complex 3D environment with imperfect information. DeepMind's researchers used a method of AI training that's also becoming standard: reinforcement learning, which is basically training by trial and error at a huge scale.
Agents are given no instructions on how to play the game, but simply compete against themselves until they work out the strategies needed to win. Usually this means one version of the AI agent playing against an identical clone. DeepMind gave extra depth to this formula by training a whole cohort of 30 agents to introduce a "diversity" of play styles. How many games does it take to train an AI this way? Nearly half a million, each lasting five minutes.
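The self-play loop described above can be sketched in miniature. This is a toy illustration only, assuming a trivial rock-paper-scissors "game" in place of capture the flag (DeepMind's actual agents use deep neural networks and a far richer environment); the function names and parameters are made up for the sketch.

```python
import random

# Toy self-play sketch: one agent plays against an identical copy of
# itself and learns purely by trial and error, with no instructions on
# how to play. A 3-action rock-paper-scissors game stands in for the
# real 3D capture-the-flag environment.
ACTIONS = [0, 1, 2]  # rock, paper, scissors

def reward(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def train_self_play(games=5000, epsilon=0.1, lr=0.1, seed=0):
    rng = random.Random(seed)
    values = [0.0, 0.0, 0.0]  # the agent's estimated payoff per action
    for _ in range(games):
        def pick():
            # Epsilon-greedy: mostly play the current best guess,
            # occasionally explore a random action.
            if rng.random() < epsilon:
                return rng.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: values[a])
        # Both sides are the SAME agent -- pure self-play.
        a, b = pick(), pick()
        # Trial and error: nudge each chosen action toward its payoff.
        values[a] += lr * (reward(a, b) - values[a])
        values[b] += lr * (reward(b, a) - values[b])
    return values

if __name__ == "__main__":
    print(train_self_play())
```

DeepMind's twist, per the article, is training a population of 30 such agents against each other rather than a single agent against its clone, which diversifies the strategies the cohort encounters.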
Source: https://www.theverge.com/2018/7/4/17533898/deepmind-ai-agent-video-game-quake-iii-capture-the-flag
(Score: 2, Insightful) by Kalas on Tuesday July 10 2018, @09:52PM
Please consider that in a game played entirely by AI players, there's no reason the devs can't just run the game 10 or 100 times faster than realtime.
Processing power is pretty much the only limitation on how fast these simulations can be run. You could get all those thousands of 5-minute (subjective time) games done in weeks or maybe days, given enough cores to throw at it.
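The comment's back-of-envelope math checks out. A quick sketch, where the 450,000-game figure comes from the article's "nearly half a million" and the 10x speedup and 10 parallel simulators are assumed numbers for illustration:

```python
# Wall-clock estimate for running many subjective-time games in parallel.
GAMES = 450_000          # "nearly half a million" games (from the article)
MINUTES_PER_GAME = 5     # subjective length of each game (from the article)
SPEEDUP = 10             # assumed faster-than-realtime simulation factor
WORKERS = 10             # assumed number of parallel simulator instances

subjective_minutes = GAMES * MINUTES_PER_GAME
wall_clock_minutes = subjective_minutes / (SPEEDUP * WORKERS)
wall_clock_days = wall_clock_minutes / (60 * 24)

print(wall_clock_days)  # 15.625 days
```

So even a modest 10x speedup across 10 simulators compresses 2.25 million subjective minutes into a couple of weeks, consistent with the "weeks or maybe days" estimate; more cores or a larger speedup shrinks it linearly.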