SoylentNews is people

posted by martyb on Tuesday October 12, @04:14PM

These Virtual Obstacle Courses Help Real Robots Learn to Walk:

The virtual robot army was developed by researchers from ETH Zurich in Switzerland and chipmaker Nvidia. They used the wandering bots to train an algorithm that was then used to control the legs of a real-world robot.

In the simulation, the machines—called ANYmals—confront challenges like slopes, steps, and steep drops in a virtual landscape. Each time a robot learned to navigate a challenge, the researchers presented a harder one, nudging the control algorithm to be more sophisticated.

From a distance, the resulting scenes resemble an army of ants wriggling across a large area. During training, the robots were able to master walking up and down stairs easily enough; more complex obstacles took longer. Tackling slopes proved particularly difficult, although some of the virtual robots learned how to slide down them.
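The training scheme described above is a form of curriculum learning: each time the policy masters the current terrain, a harder one is presented. The article doesn't give the researchers' actual criteria, so here is a minimal sketch with made-up names (`train_step`, `curriculum`) and a toy "learning" update standing in for a real reinforcement-learning rollout:

```python
def train_step(policy, difficulty):
    """Hypothetical stand-in for one training rollout; returns a success rate.

    A real trainer would simulate the robot on terrain of this difficulty
    and measure how often it traverses the obstacle.
    """
    return min(1.0, policy["skill"] / (difficulty + 1e-9))

def curriculum(policy, start=0.1, step=0.1, threshold=0.8, rounds=20):
    """Raise terrain difficulty each time the policy masters the current one."""
    difficulty = start
    for _ in range(rounds):
        success = train_step(policy, difficulty)
        policy["skill"] += 0.05 * success      # toy learning update
        if success >= threshold:               # mastered: present a harder terrain
            difficulty += step
    return difficulty
```

The key design point is that difficulty only advances when the success threshold is met, so the policy is never asked to learn a hard terrain before an easier one.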

When the resulting algorithm was transferred to a real version of ANYmal, a four-legged robot roughly the size of a large dog with sensors on its head and a detachable robot arm, it was able to navigate stairs and blocks but suffered problems at higher speeds. Researchers blamed inaccuracies in how its sensors perceive the real world compared to the simulation.

Similar kinds of robot learning could help machines learn all sorts of useful things, from sorting packages to sewing clothes and harvesting crops. The project also reflects the importance of simulation and custom computer chips for future progress in applied artificial intelligence.

"At a high level, very fast simulation is a really great thing to have," says Pieter Abbeel, a professor at UC Berkeley and cofounder of Covariant, a company that is using AI and simulations to train robot arms to pick and sort objects for logistics firms. He says the Swiss and Nvidia researchers "got some nice speed-ups."

A 2m21s video is available on YouTube.

See also: Robots can now skateboard, thanks to researchers from Caltech

A research team at The California Institute of Technology has built a robot with hybrid walking and flying movement. The robot can carry out manoeuvres such as flying to avoid stairs and skateboarding.

Original Submission

  • (Score: 5, Interesting) by JoeMerchant on Tuesday October 12, @05:34PM (7 children)

    by JoeMerchant (3937) on Tuesday October 12, @05:34PM (#1186466)

    Researchers blamed inaccuracies in how its sensors perceive the real world compared to the simulation,

    Estimating pose (orientation, velocity, joint position, etc.) is a notoriously difficult thing to do.

    I'd guess they would have better luck training the virtual bots on "noised up" pose data rather than trying to improve the real world pose estimation performance.
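The "noised up" pose data idea above is straightforward to sketch. This is a minimal illustration (the function name and noise magnitudes are made up, not from the paper): additive Gaussian noise plus a small constant bias per channel, applied to the simulator's otherwise-exact pose before the policy sees it.

```python
import random

def noise_pose(pose, sigma_angle=0.02, sigma_vel=0.05, bias=0.01):
    """Corrupt a clean simulated pose the way a real sensor suite might:
    per-sample Gaussian noise plus a small constant bias."""
    return {
        "orientation": [a + bias + random.gauss(0.0, sigma_angle)
                        for a in pose["orientation"]],
        "velocity":    [v + random.gauss(0.0, sigma_vel)
                        for v in pose["velocity"]],
        "joints":      [q + random.gauss(0.0, sigma_angle)
                        for q in pose["joints"]],
    }
```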

John Galt is a selfish crybaby.
    • (Score: 3, Interesting) by FatPhil on Wednesday October 13, @04:38AM (2 children)

      Indeed. But are you sure they're actually training bots with this system anyway? The YouTube vid looked totally fake. No matter what terrain the virtubots were on, they always moved at a constant planar speed, despite being articulated in a way that is differently suited to upward-sloping and downward-sloping hazards. Check the "race" at the end - those bots are always in lock step despite being on completely different terrain. Also note that none of the swarm of bots stumble, despite the fact that the hard robots do seem capable of recovering from stumbles. How can they recover from underfoot support giving way when they've never been trained on it? An AI would be unlikely to evolve something so clever with no incentive at all; that's not how annealing and optimisation work - it's not optimisation if you develop features you were never asked for. (And spandrels won't be significantly more complex than the feature that was intended.)

      And if you were using simulation to evolve the AI, you'd evolve the mechanical aspects too - perhaps longer/shorter legs, or a wider stance or rake would be better - surely? All the virtubots were mechanically indistinguishable.
      I know I'm God, because every time I pray to him, I find I'm talking to myself.
      • (Score: 3, Interesting) by JoeMerchant on Wednesday October 13, @12:45PM (1 child)

        by JoeMerchant (3937) on Wednesday October 13, @12:45PM (#1186627)

        If this is like some of the stuff I've done (for real) - the simulation work is there, and the real-life robots are using the results of simulation training, but the graphic rendering isn't really part of the project - except for press releases like this.

        Actually, I had a "leg up" since I was using FlightGear as my virtual environment - it already had a fancy rendering engine, which helped snag the next-round funding after I demoed my software autopilot flying Cessnas through waypoint routes in simulated stormy weather. Without that rendering, it's questionable whether we would have gotten the funding to make the hardware for a real-life flying autopilot.

        Or, they could be at a much earlier vaporware stage, hoping to use some lame graphics to fund the development of the first round software sims...

        John Galt is a selfish crybaby.
    • (Score: 2) by Frosty Piss on Wednesday October 13, @05:26AM

      by Frosty Piss (4971) on Wednesday October 13, @05:26AM (#1186592)

      I'd guess they would have better luck training the virtual bots on "noised up" …

      That’s the problem, they’ve been using “downward dog”.

    • (Score: 2) by Immerman on Wednesday October 13, @06:14PM (2 children)

      by Immerman (3985) on Wednesday October 13, @06:14PM (#1186722)

      >Estimating pose (orientation, velocity, joint position, etc.) is a notoriously difficult thing to do.

      Really? As I recall from my college robotics days, at least joint position and velocity are fairly trivial to monitor if they matter. At the conceptually simplest, you install a positional indicator sticker on the moving part of the joint (something like a ring-shaped bar- or QR-code), and on the stationary part you mount a tracking camera. Sub-degree positional accuracy is then trivial to extract with image analysis. And of course there are several far simpler and cheaper options for real-world applications, some based on similar principles, some completely unrelated - for example: if you're using cables/hydraulics/pneumatics/etc. rather than in-joint electric motors, then you can use a small magnet and a cheap off-the-shelf magnetic compass sensor. With enough cleverness, or a big enough magnet, you can even use it alongside an electric motor.
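      The magnet-plus-compass-sensor trick above amounts to reading the two in-plane field components of a magnet that rotates with the joint and taking an arctangent. A minimal sketch (function name and calibration offset are illustrative, not from any particular sensor's datasheet):

```python
import math

def joint_angle_from_magnetometer(bx, by, offset=0.0):
    """Recover a joint angle from the two in-plane field components (bx, by)
    of a magnet rotating with the joint, as read by a compass-style sensor.

    `offset` is a per-device calibration constant; the result is wrapped
    into [0, 2*pi).
    """
    return (math.atan2(by, bx) - offset) % (2 * math.pi)
```

      In practice you'd also calibrate for hard-iron distortion (nearby ferrous parts or motor fields shifting the readings), but the core computation really is this small.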

      • (Score: 2) by JoeMerchant on Wednesday October 13, @07:03PM (1 child)

        by JoeMerchant (3937) on Wednesday October 13, @07:03PM (#1186735)

        The joints aren't as bad as the free body estimation taken from accelerometer / gyro / sometimes magnetometer data. They are, however, yet another thing you have to integrate into the system, and on an obstacle course like this you also have to estimate where you are relative to the edges and corners, as well as the angles of slopes and the coefficients of friction on them. Everything is an approximation, and when you throw all those approximations into an input matrix, the control algorithm can come out with something a bit different from the rather exact measurements that are the default in simulated environments.

        The biggest discontinuity in development for the UAV autopilot was moving from the FlightGear provided pose information to the crappy data that came from our "affordable" sensor suite. The pressure altimeter was the most vexing, but all of them were just a bit off, leading to a pose estimation that was also a bit off, then there were the real-world servos controlling real world aero surfaces that have real world flex in them - the closed loop control sorted it all out, but it wasn't as crisp at hitting the corners as the simulator was.
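        The standard cheap way to fuse that accelerometer / gyro data into a usable orientation estimate is a complementary filter: integrate the gyro (smooth but drifts) and continually pull the estimate toward the accelerometer's gravity-derived angle (noisy but drift-free). A one-axis sketch, with illustrative names and a typical blend factor - not the filter any particular autopilot uses:

```python
import math

def complementary_filter(pitch, gyro_rate, ax, az, dt, alpha=0.98):
    """One update step of a one-axis complementary filter.

    pitch:     previous pitch estimate (rad)
    gyro_rate: angular rate about the pitch axis (rad/s)
    ax, az:    accelerometer readings along body x and z
    dt:        timestep (s)
    """
    gyro_pitch = pitch + gyro_rate * dt    # smooth, but drifts over time
    accel_pitch = math.atan2(ax, az)       # gravity gives an absolute (noisy) pitch
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch
```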

        John Galt is a selfish crybaby.
        • (Score: 2) by Immerman on Wednesday October 13, @08:27PM

          by Immerman (3985) on Wednesday October 13, @08:27PM (#1186760)

          Oh yeah, lots of other areas for free-body sensor inaccuracies for sure.

          It seems to me that any sort of half-decent virtual training (or testing) system for real-world robot control systems should absolutely add realistic levels of characteristic sensor noise and bias to all inputs in order to train resilience into the system from the ground up. Not to mention "physics noise" (hardness, friction, etc) to approximate the "messiness" of the real world. Maybe even worse-than-realistic noise to (hopefully) improve resiliency. That should be relatively easy to add to a simulation with negligible performance impact.
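          The noise-and-bias injection described above is usually implemented as a thin wrapper between the simulator and the policy. A minimal sketch (the class name and noise model are illustrative; real systems would use each sensor's characteristic noise spectrum rather than plain Gaussians):

```python
import random

class NoisyObservationWrapper:
    """Wrap a simulator's clean observation vector with a per-episode bias
    (mimicking per-device calibration error) plus per-step Gaussian noise,
    so the policy never sees the simulator's exact state."""

    def __init__(self, sigma=0.05, bias_range=0.02):
        self.sigma = sigma
        self.bias_range = bias_range
        self.bias = None

    def reset(self, n_channels):
        # Draw a fresh bias at the start of each episode.
        self.bias = [random.uniform(-self.bias_range, self.bias_range)
                     for _ in range(n_channels)]

    def observe(self, clean_obs):
        if self.bias is None or len(self.bias) != len(clean_obs):
            self.reset(len(clean_obs))
        return [x + b + random.gauss(0.0, self.sigma)
                for x, b in zip(clean_obs, self.bias)]
```

          As noted, the performance impact is negligible: a handful of additions per observation channel, dwarfed by the physics step itself.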

          Probably one of the most processing-intensive discrepancies to capture effectively would be any sort of vision system, which I suspect from the phrasing is the problem here. Hopefully they're at least including lens distortion on the virtual inputs, that's relatively easy. But on a moving platform the camera/sonar/lidar/etc. images will also suffer from image distortion due to the camera capturing pixels sequentially, and thus being in a slightly different position for each one. Simulating that distortion would require enormous rendering time - either for rendering the world in multiple subframes for an approximation, or to perform per-pixel ray-tracing.

          Of course there's also the visual messiness of the real world to deal with - which from their example it looks like they don't even attempt to approximate, and that may be most of the problem. But I've heard of several virtual training systems that use modern 3D game engines for rendering for exactly that reason. Probably not *nearly* as fast as these folks', but I have my doubts as to just how valuable "very fast simulation" of a world that looks nothing like reality really is. What good is a well-trained AI that can easily handle clean, orderly, by-the-book environments, but not the many messy edge cases that fill the real world?

          Maybe you could use it for pre-training before advancing to a more realistic "finishing" simulation, and eventually the real world, but it seems to me there's a real risk that the pre-training would hit a local maximum that would be extremely difficult to train into something more generally capable.

  • (Score: 2) by Immerman on Wednesday October 13, @05:58PM

    by Immerman (3985) on Wednesday October 13, @05:58PM (#1186719)

    >Researchers blamed inaccuracies in how its sensors perceive the real world compared to the simulation,

    I think they've got that backwards. Decent researchers would have blamed inaccuracies in their simulated sensors compared to the real world operation. The real-world behavior is always 100% accurate. If they failed to consider the reality of how their sensors worked before over-simplifying them in the simulation, that's 100% on them.