posted by martyb on Monday November 07 2016, @01:57AM   Printer-friendly
from the Destroy-all-Human-[Player]s! dept.

Google's DeepMind division will attempt to make an AI that can play Starcraft II in real time without using the same unfair knowledge and capabilities (such as controlling units that are "off-screen") that Blizzard's own AI uses. Blizzard and DeepMind are working on a build of the game that will be "open and available to all researchers next year".

Reported at The Washington Post and The Verge.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by julian (6003) Subscriber Badge on Monday November 07 2016, @02:53AM (#423370)

    It'll be interesting to see whether they can get it to scale to more complex games. Atari games have a joystick and at most one button for input, and movement is sometimes not even 2-dimensional but merely linear.

    My limited understanding of how they've gotten it to work so far is that the AI knows what's on the screen, knows the score (or the win condition), and has access to the game's usual controller inputs. So something happens on the screen and the AI reacts randomly at first. Some reactions correlate with increasing the score (or winning), and those are "learned" as appropriate moves to make when the screen is in a certain state. Do this enough and the obviously bad moves get weeded out while the good ones become increasingly refined, until perfect play is achieved.
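    The trial-and-error loop described above can be sketched as tabular Q-learning on a toy paddle game. This is only an illustration of the idea, not DeepMind's actual method (their agents use deep neural networks, not a lookup table); the state names, actions, and reward scheme here are all made up for the example.

```python
import random
from collections import defaultdict

ACTIONS = ["up", "down", "stay"]

def best_action(q, state):
    # Exploit: pick the action with the highest learned value.
    return max(ACTIONS, key=lambda a: q[(state, a)])

def train(episodes=5000, alpha=0.5, epsilon=0.1):
    q = defaultdict(float)  # Q[(state, action)] -> estimated value
    for _ in range(episodes):
        ball = random.choice(["high", "low"])  # what the agent "sees"
        # Explore randomly some of the time, exploit learned values otherwise.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = best_action(q, ball)
        # Reward only the move toward the ball; everything else scores 0.
        reward = 1.0 if (ball, action) in (("high", "up"), ("low", "down")) else 0.0
        # Reinforce actions that correlate with reward.
        q[(ball, action)] += alpha * (reward - q[(ball, action)])
    return q

q = train()
```

    After training, `best_action(q, "high")` returns "up" and `best_action(q, "low")` returns "down": the bad moves never accumulate value and get weeded out, exactly as the comment describes.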

    This works well for Atari games because there are few choices to make. In Pong you can either make the paddle go up, go down, or stay in its current location. If the ball is coming at you high, the only good move is to move the paddle upward; the bad moves are to move it downward or not move it at all. But in Starcraft or any other modern computer game there could be, at any moment, thousands of choices you could make, each branching to thousands more at every node. The search space is vast--probably vaster even than Go's--and the search has to be done in real time. Looking at the video demonstrating the API [youtube.com], it appears they've graphically reduced what the AI "sees" to be very similar to an Atari game.
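    Some back-of-envelope arithmetic makes the gap concrete. With branching factor b and look-ahead depth d, there are roughly b**d positions to consider; Go's average branching factor is around 250, while the per-step figures for Pong and Starcraft below are rough, illustrative guesses only.

```python
def positions(b, d):
    """Rough count of positions reachable with branching factor b, depth d."""
    return b ** d

pong_like = positions(3, 10)        # ~3 actions per step in Pong
go_like = positions(250, 10)        # Go's average branching factor ~250
starcraft_guess = positions(1000, 10)  # a conservative guess of 1000 choices/step

# Even the conservative Starcraft guess dwarfs the others.
assert pong_like < go_like < starcraft_guess
```

    And unlike Go, there is no luxury of minutes per move: the agent has to pick from that space continuously, in real time.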

    If any group will be able to do it, it'll be Google and DeepMind. It's definitely something to watch. Better AI in games is just a toy example of what this is good for. There's no reason the input couldn't come from cameras and sensors in the real world instead of a virtual one. It might enable real-world tasks to be treated like games and learned the same way the AI learns Starcraft.
