
posted by takyon on Wednesday August 07 2019, @10:55PM
from the liedurr dept.

Lots of companies are working to develop self-driving cars. And almost all of them use lidar, a type of sensor that uses lasers to build a three-dimensional map of the world around the car. But Tesla CEO Elon Musk argues that these companies are making a big mistake. "They're all going to dump lidar," Elon Musk said at an April event showcasing Tesla's self-driving technology. "Anyone relying on lidar is doomed."

"Lidar is really a shortcut," added Tesla AI guru Andrej Karpathy. "It sidesteps the fundamental problems of visual recognition that is necessary for autonomy. It gives a false sense of progress, and is ultimately a crutch."

In recent weeks I asked a number of experts about these claims. And I encountered a lot of skepticism. "In a sense all of these sensors are crutches," argued Greg McGuire, a researcher at MCity, the University of Michigan's testing ground for autonomous vehicles. "That's what we build, as engineers, as a society—we build crutches."

Self-driving cars are going to need to be extremely safe and reliable to be accepted by society, McGuire said. And a key principle for high reliability is redundancy. Any single sensor will fail eventually. Using several different types of sensors makes it less likely that a single sensor's failure will lead to disaster.
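
McGuire's redundancy point can be made concrete with a little arithmetic. The sketch below uses made-up, illustrative failure probabilities and assumes the sensors fail independently (real-world failures often correlate, e.g. heavy fog degrading camera and lidar together), so treat it as the best case for redundancy:

```python
# Illustrative only: per-trip failure probabilities are invented, and the
# independence assumption is optimistic (fog can blind camera and lidar
# at the same time), so this is an upper bound on redundancy's benefit.

def prob_all_fail(failure_probs):
    """Probability that every sensor fails at once, assuming independence."""
    p = 1.0
    for fp in failure_probs:
        p *= fp
    return p

camera, radar, lidar = 0.01, 0.01, 0.01  # hypothetical failure rates

print(prob_all_fail([camera]))                # 0.01  -> 1 in 100
print(prob_all_fail([camera, radar]))         # 1e-04 -> 1 in 10,000
print(prob_all_fail([camera, radar, lidar]))  # 1e-06 -> 1 in 1,000,000
```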

"Once you get out into the real world, and get beyond ideal conditions, there's so much variability," argues industry analyst (and former automotive engineer) Sam Abuelsamid. "It's theoretically possible that you can do it with cameras alone, but to really have the confidence that the system is seeing what it thinks it's seeing, it's better to have other orthogonal sensing modes"—sensing modes like lidar.

Previously: Robo-Taxis and 'the Best Chip in the World'

Related: Affordable LIDAR Chips for Self-Driving Vehicles
Why Experts Believe Cheaper, Better Lidar is Right Around the Corner
Stanford Researchers Develop Non-Line-of-Sight LIDAR Imaging Procedure
Self Driving Cars May Get a New (non LiDAR) Way to See
Nikon Will Help Build Velodyne's Lidar Sensors for Future Self-Driving Cars


Original Submission

 
  • (Score: 0) by Anonymous Coward on Thursday August 08 2019, @01:02AM (#877283) (3 children)

    This got me thinking about failure modes of self-driving cars from the perspective of other drivers. As the article points out, sensors do fail, which is why redundancy is important. But consider the worst edge case possible, in which multiple systems fail, including the driver: I would assume some sort of fail-safe mode would take control to bring the vehicle to a safe stop. In such an instance, self-driving cars would seem to need some type of visual alert beyond the conventional four-way hazard lights, so that surrounding drivers can yield as quickly as possible.

    I've seen instances in my daily commute where someone turned on their hazards but still had difficulty moving to the shoulder because other commuters did not yield. I've also seen cars driving down the freeway with their hazards on and traffic flowing around them as if everything were perfectly normal (I'm guilty of this myself). Perhaps we've become a bit numb to hazard lights, or they've been used so casually and so frequently as to diminish their importance. Hence my question: should self-driving cars have a new type of visual warning system, one that cannot be triggered manually by the driver?

  • (Score: 0) by Anonymous Coward on Thursday August 08 2019, @12:43PM (#877419)

    Looking further down the road (sorry), if we ever get to the place where all cars are self-driving (which I doubt), zombie cars will need to let the functioning cars know that they have a problem, perhaps via the V2V (vehicle-to-vehicle) short-range secure communication channel that keeps getting mentioned...

    There are a lot of problems yet to be solved to make this version of the future work.
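
    For what it's worth, a "zombie car" distress beacon over such a V2V channel could be as simple as the broadcast record sketched below. The message fields are invented for illustration; real V2V stacks define their own message formats, and a real design would need signing so the channel can't be spoofed:

```python
import json
import time

def make_distress_beacon(vehicle_id: str, lat: float, lon: float,
                         fault: str) -> bytes:
    """Build a hypothetical 'zombie car' broadcast for nearby vehicles.

    Fields are invented for illustration; a real system would also need
    cryptographic signing so hackers can't spoof fake breakdowns.
    """
    msg = {
        "type": "VEHICLE_DISABLED",
        "vehicle_id": vehicle_id,
        "position": {"lat": lat, "lon": lon},
        "fault": fault,  # e.g. "sensor_failure", "pulling_over"
        "timestamp": time.time(),
    }
    return json.dumps(msg).encode("utf-8")

beacon = make_distress_beacon("veh-1234", 42.28, -83.74, "sensor_failure")
print(beacon)
```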

  • (Score: 2) by legont (4179) on Thursday August 08 2019, @07:20PM (#877612) (1 child)

    My biggest question is how the car is going to triage between the owner/occupant's life and a "small object" running across the street. Is it going to err on my side or the child's? Assuming it somehow decides, would that be disclosed, and would it be adjustable by owners? What about adjustable by hackers?

    --
    "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
    • (Score: 0) by Anonymous Coward on Friday August 09 2019, @03:51AM (#877763)

      One analysis of the Uber pedestrian fatality that I read noted that the developers had turned off at least some aspects of the auto-braking system(s). It was crying wolf much too often, slamming on the brakes for nothing and potentially causing a lot of following cars to rear-end the self-driving cars.

      My interpretation: rather than improve the auto-braking algorithms to reduce the false alarms, the developers (or their managers??) turned them off so they could focus on other aspects of self-driving. We are a long way from baking ethics into self-driving systems if they can't determine that it's OK to hit a plastic bag (or tumbleweed) blowing across the road.
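
      The cry-wolf trade-off described above is easy to see in a toy decision rule: raising the braking threshold suppresses false alarms on harmless objects, but push it too far and real hazards get ignored. A minimal sketch with made-up threat scores:

```python
# Toy example of the false-alarm trade-off in an auto-braking decision.
# Object labels and threat scores are invented for illustration only.

detections = [
    ("plastic bag", 0.30),
    ("tumbleweed",  0.35),
    ("pedestrian",  0.90),
    ("shadow",      0.20),
]

def brake_decisions(threshold: float):
    """Return (object, brake?) pairs for a given braking threshold."""
    return [(obj, score >= threshold) for obj, score in detections]

# A low threshold brakes for everything (cry-wolf behavior, rear-end risk):
print(brake_decisions(0.25))
# A higher threshold ignores the bag and tumbleweed, but set it too high
# and a poorly scored pedestrian would be ignored too:
print(brake_decisions(0.50))
```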