

posted by on Tuesday January 17 2017, @08:56PM
from the this-is-the-way-the-world-ends dept.

An integration of OpenAI's Universe AI platform with GTA 5 has been achieved and open sourced.

This video demonstrates the integration. The window at the top left is what the AI agent is seeing. The window at the bottom left outputs the agent's state. And the main window is just an eye-candy rendering with a detached camera. A surprisingly competent sample agent, trained using imitation learning over just 600,000 frames (about 21 hours) of the game's AI driving, is available. Here is a first-person view of the sample agent cruising around. There's some major potential here, and it's great to see open source software and AI meshing so well.
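
For a rough sense of what "imitation learning over frames" involves, here is a minimal behavioral-cloning sketch in PyTorch. The network shape, the frame format, and the idea of regressing directly onto the expert's steering/throttle are illustrative assumptions, not DeepDrive's actual code.

    # Minimal behavioral-cloning sketch (illustrative only, not the DeepDrive code).
    # Assumes a dataset of (frame, control) pairs recorded from the game's own AI
    # driver, where control = (steering, throttle).
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    class DrivingPolicy(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),       # size-agnostic: pool down to 64 features
                nn.Flatten(),
                nn.Linear(64, 256), nn.ReLU(),
                nn.Linear(256, 2),             # outputs: steering, throttle
            )

        def forward(self, frames):
            return self.net(frames)

    def train(frames, controls, epochs=10):
        # frames: (N, 3, H, W) float tensor; controls: (N, 2) tensor of expert actions
        data = DataLoader(TensorDataset(frames, controls), batch_size=64, shuffle=True)
        policy = DrivingPolicy()
        opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            for x, y in data:
                opt.zero_grad()
                loss = loss_fn(policy(x), y)   # match the expert's steering/throttle
                loss.backward()
                opt.step()
        return policy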

Videos created by DeepDrive.


Original Submission

 
  • (Score: 3, Interesting) by ledow (5567) on Tuesday January 17 2017, @11:25PM (#455147) Homepage

    Is it just me that's not particularly impressed?

    It drives in massive jerks, mis-detects the edges, and strays left all the time. It takes no evasive action for the car that overtakes it in the second video. It can't keep up with the heuristically-programmed traffic at all, and it drives over a crossing without stopping.

    There are literally no obstacles in its way, except for one traffic light, where it stops a mile back and then gets (honestly) a bit too close.

    To be honest, I don't see what's been achieved here over, say, even primitive edge-detection: finding a perspective-angled line by the side of the road and following it, and judging the car in front by the obscured portion of the image so you slow gently before hitting it.
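
    Roughly what I mean, as a throwaway OpenCV sketch (the thresholds and layout below are made up for illustration, obviously not anyone's real code):

        # Rough lane-following baseline (illustrative sketch only).
        # Edge-detect the frame, pick the longest roadside line with a Hough transform
        # and steer to keep it at a fixed offset; ease off the throttle in proportion
        # to how much of the region dead ahead is blocked (e.g. the back of a car).
        import cv2
        import numpy as np

        def steer_from_frame(frame):
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 50, 150)
            h, w = edges.shape
            roi = edges[h // 2:, :]            # only the lower half, where the road is
            lines = cv2.HoughLinesP(roi, 1, np.pi / 180, threshold=50,
                                    minLineLength=40, maxLineGap=10)
            steering = 0.0
            if lines is not None:
                # Follow the longest detected line, like hugging the painted road edge.
                x1, y1, x2, y2 = max(lines[:, 0],
                                     key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
                line_x = (x1 + x2) / 2.0
                target_x = 0.25 * w            # keep the edge a quarter in from the left
                steering = float(np.clip((line_x - target_x) / (w / 2.0), -1.0, 1.0))
            # Crude "car in front" check: fraction of edge pixels in a box dead ahead.
            ahead = edges[h // 2: 3 * h // 4, w // 3: 2 * w // 3]
            blocked = np.count_nonzero(ahead) / ahead.size
            throttle = float(np.clip(0.5 - 2.0 * blocked, 0.0, 0.5))
            return steering, throttle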

    And that's working from an image that is basically computer-generated and simplified so the gamer knows exactly where the road lies.

    I have to say - as someone who has ALWAYS wanted a game where for some portion you have to drive sensibly to avoid getting caught (ever since playing the original Driver, where you had to avoid attracting attention in some missions but could still drive over the pavement, cut lanes, drive on the wrong side of the road, etc.) - this feels 20 years old and still not good enough.

    I get that it's "trained" as such, but from the 8fps samples it's getting, I was really expecting something with 1/8th-of-a-second reactions (vastly outpacing a human driver); instead it's like watching your grandma drive a broken car. And the stream it's actually getting and reacting to seems incredibly low-res and poor quality for the job at hand, and the actions available to it incredibly simplistic and laggy.

    I honestly think you could run the PC version of Driver and do a better job interfacing with it, with cross-traffic and all-sorts.

    The best thing I've ever seen in this area - and I cannot find it anywhere any more - was someone's PhD project on genetic algorithms. It was an object with joints and "bones", in a simple 2D physics environment, lying on a surface of random elevation. It showed four candidate algorithms, one in each quarter of the screen, and each algorithm determined when to move a joint. In generation one, all it did was twitch its joints like a dying insect. By rewarding algorithms based on the distance covered, and "breeding" them with other successes, in a handful of generations it was slinking across the floor. In a hundred generations - doable in a handful of minutes on an early Pentium, with the physics engine written in Java - it was sprinting over the floor (still collapsing occasionally). In hundreds of generations, it was performing the weirdest kind of movement, which propelled it at stupendous speeds and let it recover when it stumbled. Just by twitching a virtual muscle connecting two virtual bones.
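
    In spirit it was something like this selection-and-breeding loop (a toy sketch with the physics stubbed out, since I can't find the original):

        # Toy sketch of the breed-and-select loop; the physics simulation is stubbed
        # out, and none of this is the original project's code.
        import random

        GENOME_LEN = 64          # e.g. a table of joint torques over one movement cycle
        POP_SIZE = 4             # the demo ran four candidates at a time

        def random_genome():
            return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

        def distance_covered(genome):
            # Placeholder fitness: the real thing ran the jointed skeleton in a 2D
            # physics world and measured how far it travelled before time ran out.
            return sum(genome)   # stand-in so the sketch runs

        def breed(a, b, mutation_rate=0.05):
            child = [random.choice(pair) for pair in zip(a, b)]    # uniform crossover
            return [g + random.gauss(0, 0.2) if random.random() < mutation_rate else g
                    for g in child]

        def evolve(generations=200):
            population = [random_genome() for _ in range(POP_SIZE)]
            for _ in range(generations):
                ranked = sorted(population, key=distance_covered, reverse=True)
                parents = ranked[:2]                               # keep the best movers
                population = parents + [breed(*parents) for _ in range(POP_SIZE - 2)]
            return max(population, key=distance_covered)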

    By comparison, the integration, the algorithm, the learning, the processing and the throughput of this just suck.

    A Pentium was breeding algorithms, four or more at a time, executing physics in four different sandbox worlds, in Java, through hundreds of generations in minutes, and achieving more control and success over the output than this thing can manage. And that must have been over 15 years ago, because I found it when I was at uni studying Computer Science, including AI.

    (P.S. If ANYONE can find that for me again - though I suspect it was a post-doc's work on a uni site that disappeared from the net when they moved on - please do shout. The emphasis was on the word "gait", and generally on breeding algorithms, via genetic selection criteria, that make a random agent move a skeleton.)

  • (Score: 2) by ledow (5567) on Tuesday January 17 2017, @11:32PM (#455153) Homepage

    Not dissimilar to:

    http://rednuht.org/genetic_walkers/ [rednuht.org]

    but that's not it.

  • (Score: 1) by charon (5660) on Wednesday January 18 2017, @03:20AM (#455222) Journal

    An algorithm to play QWOP [foddy.net]?
  • (Score: 0) by Anonymous Coward on Wednesday January 18 2017, @04:50AM (#455239)

    Their goal is not to program a vehicle to drive in GTA, but to develop general-purpose algorithms that 'program themselves', colloquially speaking. The day this platform is capable of creating a compelling and viable driving AI from 600,000 frames of driving is the day you could literally take video of yourself driving around in the real world in a variety of conditions and environments, apply the same learning system, and have a self-driving vehicle ready to go. Beyond that, the imitation-learning agent was not meant to be 'good.' Even the training set was deliberately minimal in volume. It was specifically a starting point for further research and development. The fact that there's even a notion of driving on the road (which the AI has no hard-coded concept of), or of not crashing into other things (again, something the AI has no built-in notion of), is incredible.

    Or take the thing you mention about the AI slowly creeping up to the other vehicle when at a stop and then ultimately getting too close. Once again, the AI has no hard-coded means of estimating distance to another object, or even of deciding whether that is something it should go around or wait behind. This is all 'emergent' from the training, and that is incredibly impressive.
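
    That's the whole point: at run time the trained agent is nothing more than frame in, steering and throttle out. Roughly this (a made-up sketch; grab_frame and send_controls stand in for the real game hooks, and this is not the actual DeepDrive agent):

        # Sketch of the control loop: the trained policy maps raw pixels straight to
        # steering/throttle. There is no object detector, distance estimator or lane
        # model anywhere in here - any such behaviour is emergent from the training.
        def drive(policy, grab_frame, send_controls):
            while True:
                frame = grab_frame()                 # raw screen pixels, ~8 fps in the demo
                steering, throttle = policy(frame)   # one forward pass, nothing hand-coded
                send_controls(steering, throttle)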