Submitted via IRC for SoyCow9228
Video game characters could soon be truer to life thanks to AI that can teach itself to fight and flip.
Video game developers often turn to motion capture when they want realistic character animations. Mocap isn't very flexible, though, as it's hard to adapt a canned animation to different body shapes, unusual terrain or an interruption from another character. Researchers might have a better solution: teach the characters to fend for themselves. They've developed a deep learning engine (DeepMimic) that has characters learning to imitate reference mocap animations or even hand-animated keyframes, effectively training them to become virtual stunt actors. The AI promises realistic motion with the kind of flexibility that's difficult even with methods that blend scripted animations together.
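The core idea behind this kind of imitation training is to reward the learning agent for staying close to the reference motion at each timestep. As a rough illustration only (not the actual DeepMimic code; the function name and the single pose-error term are simplifications I'm assuming for brevity), the reward can be an exponential of the negative squared error between the character's joint angles and the mocap clip's:

```python
import math

def pose_imitation_reward(joint_angles, ref_angles, scale=2.0):
    """Hypothetical simplified imitation reward.

    Returns a value in (0, 1]: 1.0 when the character exactly matches
    the reference mocap pose, decaying exponentially as the summed
    squared joint-angle error grows.
    """
    err = sum((q - q_ref) ** 2 for q, q_ref in zip(joint_angles, ref_angles))
    return math.exp(-scale * err)

# A perfect match earns the maximum reward of 1.0;
# a sloppier pose earns strictly less.
print(pose_imitation_reward([0.1, 0.5], [0.1, 0.5]))  # → 1.0
print(pose_imitation_reward([0.1, 0.5], [0.3, 0.9]))  # smaller, between 0 and 1
```

Because the reward depends only on how well the pose is tracked, not on replaying a fixed animation, the trained controller can keep chasing the reference even when the character is shoved, trips, or stands on terrain the clip never covered.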
Source: https://www.engadget.com/2018/04/11/ai-teaches-itself-stunts/
Soundless YouTube videos: 0, 1
(Score: 0) by Anonymous Coward on Saturday April 14 2018, @12:44PM (1 child)
If the videos are anything to go by, just throwing massive numbers of cubes at them as they go about their days will be entertaining enough.
(Score: 0) by Anonymous Coward on Saturday April 14 2018, @01:03PM
I could see VR moving in the direction of being a "driver" for one of these automatons.
With the VR user's input in the "driver seat," the game's aim would be to train your agent to become a super-agent,
and, after completing some single-player levels (uniquely), you could drop it into a multiplayer death-match and
see how your agent performs against others.
For example: the VR user prefers sniper rifles, so the automaton learns this and in death-match will favor a sniper rifle; or
the VR user likes to blow things up, so the automaton in death-match will hunt for grenades or rocket launchers.
Also, maybe while progressing through the single-player "learning" missions the automaton will predict more and more actions on its own,
taking the "training VR user" for a ride :]