posted by takyon on Wednesday August 07 2019, @10:55PM   Printer-friendly
from the liedurr dept.

Lots of companies are working to develop self-driving cars. And almost all of them use lidar, a type of sensor that uses lasers to build a three-dimensional map of the world around the car. But Tesla CEO Elon Musk argues that these companies are making a big mistake. "They're all going to dump lidar," Elon Musk said at an April event showcasing Tesla's self-driving technology. "Anyone relying on lidar is doomed."

"Lidar is really a shortcut," added Tesla AI guru Andrej Karpathy. "It sidesteps the fundamental problems of visual recognition that is necessary for autonomy. It gives a false sense of progress, and is ultimately a crutch."

In recent weeks I asked a number of experts about these claims. And I encountered a lot of skepticism. "In a sense all of these sensors are crutches," argued Greg McGuire, a researcher at MCity, the University of Michigan's testing ground for autonomous vehicles. "That's what we build, as engineers, as a society—we build crutches."

Self-driving cars are going to need to be extremely safe and reliable to be accepted by society, McGuire said. And a key principle for high reliability is redundancy. Any single sensor will fail eventually. Using several different types of sensors makes it less likely that a single sensor's failure will lead to disaster.

"Once you get out into the real world, and get beyond ideal conditions, there's so much variability," argues industry analyst (and former automotive engineer) Sam Abuelsamid. "It's theoretically possible that you can do it with cameras alone, but to really have the confidence that the system is seeing what it thinks it's seeing, it's better to have other orthogonal sensing modes"—sensing modes like lidar.

Previously: Robo-Taxis and 'the Best Chip in the World'

Related: Affordable LIDAR Chips for Self-Driving Vehicles
Why Experts Believe Cheaper, Better Lidar is Right Around the Corner
Stanford Researchers Develop Non-Line-of-Sight LIDAR Imaging Procedure
Self Driving Cars May Get a New (non LiDAR) Way to See
Nikon Will Help Build Velodyne's Lidar Sensors for Future Self-Driving Cars


Original Submission

 
  • (Score: 1) by khallow (3766) Subscriber Badge on Thursday August 08 2019, @09:43AM (#877385) Journal (2 children)

    It's a SENSOR.

    To be fair, it's a combined energy-projection and sensing system: an active system, as opposed to a passive system that doesn't require such energy projection (except under the same low-light conditions where a human driver would need illumination as well).

    As to more sensors complementing each other, let us not forget the large, synergistic downside: more failure modes. All this redundancy sounds great, but what happens when it creates more ways for a car to become undrivable? I think that ultimately is the catch. The more "redundant" systems you add to a car, the more "this car can't drive" failure modes you create. And keep in mind that people are notorious for driving cars with serious problems. If they can drive with broken sensors (and thus reduced redundancy), they will.

    Presently, sensors that merely supplement a human driver aren't essential: every last one of them can be disabled without creating a car that can't drive. But with an autonomous car, those additional sensors create additional liability. In particular, whatever else you can say about lidar, it has two components (an emitter and a receiver) that both need to work for it to function, rather than the single component of a passive video system.

    A vehicle with a small number of critical sensors (that is, sensors whose individual failure makes the car undrivable for either safety or liability reasons) is going to have better operational reliability than one with many critical sensors, even if those sensors are each somewhat more reliable. If I double the number of critical sensors (assuming equal failure rates), I need to roughly halve each sensor's failure probability in order to maintain the same overall reliability.

    I think that's the calculus behind Tesla's decision.
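
To put numbers on the series-reliability point in the comment above: a minimal sketch, assuming independent failures and equal, purely hypothetical per-sensor failure probabilities (none of these figures describe any real vehicle).

    # Series-reliability sketch: if every critical sensor must work for the car
    # to be drivable, adding sensors multiplies the ways to be grounded.
    # Failure probabilities are hypothetical and assumed independent.
    def prob_undrivable(p_fail, n_critical):
        return 1.0 - (1.0 - p_fail) ** n_critical

    print(f"{prob_undrivable(0.01, 3):.4f}")   # 0.0297 -- three critical sensors
    print(f"{prob_undrivable(0.01, 6):.4f}")   # 0.0585 -- double the sensor count
    print(f"{prob_undrivable(0.005, 6):.4f}")  # 0.0296 -- halving each failure rate restores roughly the 3-sensor level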

  • (Score: 0) by Anonymous Coward on Thursday August 08 2019, @12:31PM (#877414) (1 child)

    fail on ignorance--

    Tesla currently uses both cameras and radar (passive and active in your usage). Here's a Tesla forum post on the radar, https://teslamotorsclub.com/tmc/threads/where-is-the-front-radar-on-hw2-cars-located.87415/ [teslamotorsclub.com]

    If you look down a few posts, it appears that the cruise control in some models of Tesla depends on the radar working...and the radar doesn't work if covered with snow/ice.

    • (Score: 1) by khallow (3766) Subscriber Badge on Friday August 09 2019, @12:41AM (#877694) Journal

      Tesla currently uses both cameras and radar (passive and active in your usage).

      Lidar still means two more systems that have to work on top of what they already have. And as I noted, you'd then need five systems to work, not three.

      If you look down a few posts, it appears that the cruise control in some models of Tesla depends on the radar working...and the radar doesn't work if covered with snow/ice.

      In other words, a failure mode; I mentioned that those happen. It's not like a lidar-equipped car would be racing along in those conditions either.