
posted by martyb on Tuesday October 12 2021, @04:14PM

These Virtual Obstacle Courses Help Real Robots Learn to Walk:

The virtual robot army was developed by researchers from ETH Zurich in Switzerland and chipmaker Nvidia. They used the wandering bots to train an algorithm that was then used to control the legs of a real-world robot.

In the simulation, the machines—called ANYmals—confront challenges like slopes, steps, and steep drops in a virtual landscape. Each time a robot learned to navigate a challenge, the researchers presented a harder one, nudging the control algorithm to be more sophisticated.
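
A minimal sketch of that kind of difficulty curriculum, with a stand-in for the simulator and policy (names, numbers and thresholds are hypothetical, not the researchers' actual code):

    import random

    def run_policy_on_terrain(difficulty: float) -> bool:
        """Stand-in for one simulated episode; returns whether the robot reached
        the goal. A real setup would step a physics simulator and a learned
        policy here."""
        return random.random() > 0.5 * difficulty   # placeholder outcome

    difficulty = 0.1
    for iteration in range(1_000):
        results = [run_policy_on_terrain(difficulty) for _ in range(256)]
        success_rate = sum(results) / len(results)
        # Only promote to steeper slopes and taller steps once the current
        # level is mostly solved.
        if success_rate > 0.8 and difficulty < 1.0:
            difficulty = min(1.0, difficulty + 0.05)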

From a distance, the resulting scenes resemble an army of ants wriggling across a large area. During training, the robots were able to master walking up and down stairs easily enough; more complex obstacles took longer. Tackling slopes proved particularly difficult, although some of the virtual robots learned how to slide down them.

When the resulting algorithm was transferred to a real version of ANYmal, a four-legged robot roughly the size of a large dog with sensors on its head and a detachable robot arm, it was able to navigate stairs and blocks but suffered problems at higher speeds. Researchers blamed inaccuracies in how its sensors perceive the real world compared to the simulation.

Similar kinds of robot learning could help machines learn all sorts of useful things, from sorting packages to sewing clothes and harvesting crops. The project also reflects the importance of simulation and custom computer chips for future progress in applied artificial intelligence.

"At a high level, very fast simulation is a really great thing to have," says Pieter Abbeel, a professor at UC Berkeley and cofounder of Covariant, a company that is using AI and simulations to train robot arms to pick and sort objects for logistics firms. He says the Swiss and Nvidia researchers "got some nice speed-ups."

A 2m21s video is available on YouTube.

See also: Robots can now skateboard, thanks to researchers from Caltech

A research team at the California Institute of Technology has built a robot that combines walking and flying. The robot can carry out manoeuvres such as skateboarding and flying to clear stairs.


Original Submission

Related Stories

Watch Emu-Inspired Robot Legs That Use Less Energy to Run 11 comments

New Scientist covers robotic legs that use just two actuators and are apparently far more efficient than more complex designs.

"We're using just two actuators, one to move the leg back and forward, and one to lift it. Just the bare minimum required," says Badri-Spröwitz. "Usually in robotics, you're looking to improve efficiency by just 10 per cent or so, but we're seeing a 300 per cent increase."

The motors pull the tendons. Energy is stored in a spring during compression and released when each foot strikes the floor, to help drive the robot forward.
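
For a rough sense of scale, the elastic energy stored in a linear spring is E = (1/2)kx^2; the numbers below are purely illustrative, not from the paper:

    # Illustrative only: elastic energy a leg spring returns per stride, E = 1/2 * k * x^2.
    k = 5000.0   # spring stiffness in N/m (hypothetical)
    x = 0.02     # compression in metres (hypothetical)
    energy_joules = 0.5 * k * x ** 2
    print(energy_joules)   # 1.0 J per compression that the motors don't have to supply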

Taking many actuators, sensors and electronics out of the system makes the robot lighter and cheaper to manufacture. It can also stand upright using no power.

See also the paywalled main article, DOI: 10.1126/scirobotics.abg4055

Previously:
(2021) These Virtual Obstacle Courses Help Real Robots Learn to Walk
(2018) Festo's New Bionic Robots Include Rolling Spider, Flying Fox
(2014) Tiny Walking Robots Powered by Muscle Cells


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 5, Interesting) by JoeMerchant on Tuesday October 12 2021, @05:34PM (7 children)

    by JoeMerchant (3937) on Tuesday October 12 2021, @05:34PM (#1186466)

    Researchers blamed inaccuracies in how its sensors perceive the real world compared to the simulation.

    Estimating pose (orientation, velocity, joint position, etc.) is a notoriously difficult thing to do.

    I'd guess they would have better luck training the virtual bots on "noised up" pose data rather than trying to improve the real world pose estimation performance.
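
    A minimal sketch of that idea, corrupting the simulator's perfect pose readings before the policy ever sees them (noise scales and names are hypothetical):

        import numpy as np

        rng = np.random.default_rng(0)

        def noisy_pose(true_pose: np.ndarray, episode_bias: np.ndarray) -> np.ndarray:
            """Corrupt the simulator's exact pose with per-step jitter plus a fixed
            per-episode bias, so the policy can't over-fit to perfect sensing."""
            jitter = rng.normal(0.0, 0.02, size=true_pose.shape)   # hypothetical scale
            return true_pose + jitter + episode_bias

        # Draw one bias per episode (hypothetical scale) and reuse it for every step.
        episode_bias = rng.normal(0.0, 0.01, size=12)
        observation = noisy_pose(np.zeros(12), episode_bias)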

    --
    🌻🌻 [google.com]
    • (Score: 3, Interesting) by FatPhil on Wednesday October 13 2021, @04:38AM (2 children)

      by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Wednesday October 13 2021, @04:38AM (#1186579) Homepage
      Indeed. But are you sure they're actually training bots with this system anyway? The YouTube vid looked totally fake. No matter what terrain the virtubots were on, they always moved at a constant planar speed, despite being articulated in a way that is differently suited to upward-sloping and downward-sloping hazards. Check the "race" at the end - those bots are always in lockstep despite being on completely different terrain. Also note that none of the swarm of bots stumble, despite the fact that the hard robots do seem capable of recovering from stumbles. How can they recover from underfoot support giving way when they've never been trained on it? An AI would be unlikely to evolve something so clever with no incentive at all; that's not how annealing and optimisation work - it's not optimisation if you develop features that were never asked for. (And spandrels won't be significantly more complex than the feature that was intended.)

      And if you were using simulation to evolve the AI, you'd evolve the mechanical aspects too - perhaps longer/shorter legs, or a wider stance or rake would be better - surely? All the virtubots were mechanically indistinguishable.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
      • (Score: 3, Interesting) by JoeMerchant on Wednesday October 13 2021, @12:45PM (1 child)

        by JoeMerchant (3937) on Wednesday October 13 2021, @12:45PM (#1186627)

        If this is like some of the stuff I've done (for real done) - the simulation work is there, the real-life robots are using the results of simulation training, but the graphic rendering isn't really part of the project - except for press releases like this.

        Actually, I had a "leg up" since I was using FlightGear as my virtual environment - so it already had a fancy rendering engine to help snag the next-round funding after I demoed my software autopilot flying Cessnas along waypoint routes through simulated stormy weather. Without that rendering, it's questionable whether we would have gotten the funding to make the hardware for a real-life flying autopilot.

        Or, they could be at a much earlier vaporware stage, hoping to use some lame graphics to fund the development of the first round software sims...

        --
        🌻🌻 [google.com]
        • (Score: 2) by FatPhil on Thursday October 14 2021, @07:26AM

          by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Thursday October 14 2021, @07:26AM (#1186909) Homepage
          Yup, we're on the same page.
          --
          Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 2) by Frosty Piss on Wednesday October 13 2021, @05:26AM

      by Frosty Piss (4971) on Wednesday October 13 2021, @05:26AM (#1186592)

      I'd guess they would have better luck training the virtual bots on "noised up" …

      That’s the problem, they’ve been using “downward dog”.

    • (Score: 2) by Immerman on Wednesday October 13 2021, @06:14PM (2 children)

      by Immerman (3985) on Wednesday October 13 2021, @06:14PM (#1186722)

      >Estimating pose (orientation, velocity, joint position, etc.) is a notoriously difficult thing to do.

      Really? As I recall from my college robotics days, at least joint position and velocity are fairly trivial to monitor if they matter. At the conceptually simplest you install a positional indicator sticker on the moving part of the joint (something like a ring-shaped bar- or QR-code), and on the stationary part you mount a tracking camera. Sub-degree positional accuracy is then trivial to extract with image analysis. And of course there are several far simpler and cheaper options for real-world applications. Some based on similar principles, some completely unrelated - for example: if you're using cables/hydraulics/pneumatics/etc. rather than in-joint electric motors, then you can use a small magnet and a cheap off-the-shelf magnetic compass sensor. With enough cleverness, or a big enough magnet, you can even use it alongside an electric motor.
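
      A rough sketch of the magnet-plus-compass-sensor option, where the joint angle falls straight out of the two in-plane field components (the calibration offset and readings are hypothetical):

          import math

          def joint_angle_degrees(field_x: float, field_y: float, zero_offset_deg: float = 0.0) -> float:
              """Recover a joint angle from a diametrically magnetised magnet read by a
              two-axis magnetic field sensor fixed to the stationary link."""
              angle = math.degrees(math.atan2(field_y, field_x))
              return (angle - zero_offset_deg) % 360.0

          print(joint_angle_degrees(0.3, 0.3))   # 45.0 degrees for equal in-plane components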

      • (Score: 2) by JoeMerchant on Wednesday October 13 2021, @07:03PM (1 child)

        by JoeMerchant (3937) on Wednesday October 13 2021, @07:03PM (#1186735)

        The joints aren't as bad as the free-body estimation taken from accelerometer / gyro / sometimes magnetometer data. They are, however, yet another thing you have to integrate into the system, and on an obstacle course like this you also have to estimate where you are relative to the edges and corners, as well as the angles of slopes and the coefficients of friction on them. Everything is an approximation, and when you throw all those approximations into an input matrix, the control algorithm can come out with something a bit different from what it would produce with the rather exact measurements that are the default in simulated environments.

        The biggest discontinuity in development for the UAV autopilot was moving from the FlightGear-provided pose information to the crappy data that came from our "affordable" sensor suite. The pressure altimeter was the most vexing, but all of the sensors were just a bit off, leading to a pose estimation that was also a bit off. Then there were the real-world servos controlling real-world aero surfaces with real-world flex in them - the closed-loop control sorted it all out, but it wasn't as crisp at hitting the corners as the simulator was.
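
        For what it's worth, the cheap classic fix for fusing that accelerometer / gyro data is a complementary filter; a minimal 2-D sketch, with an illustrative gain and simplified axis convention:

            import math

            def complementary_pitch(prev_pitch: float, gyro_rate: float,
                                    accel_x: float, accel_z: float,
                                    dt: float, alpha: float = 0.98) -> float:
                """Blend gyro integration (fast but drifting) with the accelerometer's
                gravity-derived pitch (noisy but drift-free); alpha is an illustrative gain."""
                gyro_estimate = prev_pitch + gyro_rate * dt        # radians
                accel_estimate = math.atan2(accel_x, accel_z)      # pitch implied by gravity
                return alpha * gyro_estimate + (1.0 - alpha) * accel_estimate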

        --
        🌻🌻 [google.com]
        • (Score: 2) by Immerman on Wednesday October 13 2021, @08:27PM

          by Immerman (3985) on Wednesday October 13 2021, @08:27PM (#1186760)

          Oh yeah, lots of other areas for free-body sensor inaccuracies for sure.

          It seems to me that any sort of half-decent virtual training (or testing) system for real-world robot control systems should absolutely add realistic levels of characteristic sensor noise and bias to all inputs in order to train resilience into the system from the ground up. Not to mention "physics noise" (hardness, friction, etc) to approximate the "messiness" of the real world. Maybe even worse-than-realistic noise to (hopefully) improve resiliency. That should be relatively easy to add to a simulation with negligible performance impact.
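
          A minimal sketch of that sort of domain randomisation, resampling physics parameters each episode so the policy sees a spread of messiness (ranges hypothetical, not from the paper):

              import random

              def randomize_physics() -> dict:
                  """Draw per-episode physics parameters from wide ranges instead of one
                  clean nominal value; the ranges here are illustrative only."""
                  return {
                      "friction":         random.uniform(0.4, 1.2),
                      "ground_stiffness": random.uniform(0.5, 1.5),   # relative to nominal
                      "motor_strength":   random.uniform(0.8, 1.1),   # relative to nominal
                      "sensor_bias":      random.gauss(0.0, 0.02),
                  }

              params = randomize_physics()   # applied to the simulator before each episode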

          Probably one of the most processing-intensive discrepancies to capture effectively would be any sort of vision system, which I suspect from the phrasing is the problem here. Hopefully they're at least including lens distortion on the virtual inputs, that's relatively easy. But on a moving platform the camera/sonar/lidar/etc. images will also suffer from image distortion due to the camera capturing pixels sequentially, and thus being in a slightly different position for each one. Simulating that distortion would require enormous rendering time - either for rendering the world in multiple subframes for an approximation, or to perform per-pixel ray-tracing.
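
          The lens-distortion part at least is cheap to fake; the usual radial model just scales each normalised pixel coordinate by a polynomial in its squared radius (coefficients illustrative):

              def distort(x: float, y: float, k1: float = -0.15, k2: float = 0.02) -> tuple[float, float]:
                  """Apply simple radial (barrel/pincushion) distortion to a normalised
                  image coordinate; k1 and k2 are illustrative lens coefficients."""
                  r2 = x * x + y * y
                  scale = 1.0 + k1 * r2 + k2 * r2 * r2
                  return x * scale, y * scale

              print(distort(0.5, 0.5))   # pulled toward the image centre for negative k1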

          Of course there's also the visual messiness of the real world to deal with - which from their example it looks like they don't even attempt to approximate, and that may be most of the problem. But I've heard of several virtual training systems that use modern 3D game engines for rendering for exactly that reason. Probably not *nearly* as fast as these folks', but I have my doubts as to just how valuable "very fast simulation" of a world that looks nothing like reality really is. What good is a well-trained AI that can easily handle clean, orderly, by-the-book environments, but not the many messy edge cases that fill the real world?

          Maybe you could use it for pre-training before advancing to a more realistic "finishing" simulation, and eventually the real world, but it seems to me there's a real risk that the pre-training would hit a local maximum that would be extremely difficult to train into something more generally capable.

  • (Score: 2) by Immerman on Wednesday October 13 2021, @05:58PM

    by Immerman (3985) on Wednesday October 13 2021, @05:58PM (#1186719)

    >Researchers blamed inaccuracies in how its sensors perceive the real world compared to the simulation.

    I think they've got that backwards. Decent researchers would have blamed inaccuracies in their simulated sensors compared to the real world operation. The real-world behavior is always 100% accurate. If they failed to consider the reality of how their sensors worked before over-simplifying them in the simulation, that's 100% on them.
