
SoylentNews is people

posted by martyb on Thursday March 08 2018, @07:15PM   Printer-friendly
from the no-more-blind-corners? dept.

Stanford researchers develop technique to see objects hidden around corners

A driverless car is making its way through a winding neighborhood street, about to make a sharp turn onto a road where a child's ball has just rolled. Although no person in the car can see that ball, the car stops to avoid it. This is because the car is outfitted with extremely sensitive laser technology that reflects off nearby objects to see around corners.

This scenario is one of many that researchers at Stanford University are imagining for a system that can produce images of objects hidden from view. They are focused on applications for autonomous vehicles, some of which already have similar laser-based systems for detecting objects around the car, but other uses could include seeing through foliage from aerial vehicles or giving rescue teams the ability to find people blocked from view by walls and rubble.

Confocal non-line-of-sight imaging based on the light-cone transform (DOI: 10.1038/nature25489)

Whereas light detection and ranging (LIDAR) systems use such measurements to recover the shape of visible objects from direct reflections, NLOS [non-line-of-sight] imaging reconstructs the shape and albedo of hidden objects from multiply scattered light. Despite recent advances, NLOS imaging has remained impractical owing to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light. Here we show that a confocal scanning procedure can address these challenges by facilitating the derivation of the light-cone transform to solve the NLOS reconstruction problem. This method requires much smaller computational and memory resources than previous reconstruction methods do and images hidden objects at unprecedented resolution. Confocal scanning also provides a sizeable increase in signal and range when imaging retroreflective objects. We quantify the resolution bounds of NLOS imaging, demonstrate its potential for real-time tracking and derive efficient algorithms that incorporate image priors and a physically accurate noise model. Additionally, we describe successful outdoor experiments of NLOS imaging under indirect sunlight.
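The paper's light-cone transform is a closed-form inversion, but the underlying confocal time-of-flight geometry can be illustrated with a much cruder toy: each wall scan point records a round-trip time, and every recorded time constrains the hidden object to a sphere around that scan point. A minimal sketch, assuming a single hidden point scatterer, unit light speed, and naive backprojection rather than the paper's algorithm:

```python
import numpy as np

C = 1.0      # speed of light, arbitrary units (assumption)
T_BINS = 64  # number of time-of-flight histogram bins
T_MAX = 4.0  # maximum recorded round-trip time

def simulate(scan_xy, hidden_pt):
    """Confocal measurement: laser and detector share each wall point,
    so the path is wall -> hidden point -> same wall point (2 * d)."""
    hist = np.zeros((len(scan_xy), T_BINS))
    for i, (x, y) in enumerate(scan_xy):
        d = np.linalg.norm(hidden_pt - np.array([x, y, 0.0]))
        b = int(2.0 * d / C / T_MAX * T_BINS)
        if b < T_BINS:
            hist[i, b] = 1.0
    return hist

def backproject(hist, scan_xy, grid):
    """Score each candidate voxel by summing the histogram bins that
    its round-trip time selects at every scan point."""
    vol = np.zeros(len(grid))
    for j, v in enumerate(grid):
        for i, (x, y) in enumerate(scan_xy):
            d = np.linalg.norm(v - np.array([x, y, 0.0]))
            b = int(2.0 * d / C / T_MAX * T_BINS)
            if b < T_BINS:
                vol[j] += hist[i, b]
    return vol

# Scan a 5x5 grid on the wall (z = 0); hide a scatterer at z = 1.
scan = [(x, y) for x in np.linspace(-1, 1, 5) for y in np.linspace(-1, 1, 5)]
hidden = np.array([0.2, -0.2, 1.0])
grid = [np.array([x, y, 1.0]) for x in np.linspace(-1, 1, 11)
                              for y in np.linspace(-1, 1, 11)]
vol = backproject(simulate(scan, hidden), scan, grid)
print(grid[int(np.argmax(vol))])  # peak lands at the hidden scatterer
```

The brute-force double loop here is exactly the "prohibitive memory and processing" cost the abstract mentions; the confocal geometry is what lets the paper replace it with a single deconvolution.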


Original Submission

Related Stories

Contrary To Musk's Claims, Lidar Has Some Advantages In Self Driving Technology 48 comments

Lots of companies are working to develop self-driving cars. And almost all of them use lidar, a type of sensor that uses lasers to build a three-dimensional map of the world around the car. But Tesla CEO Elon Musk argues that these companies are making a big mistake. "They're all going to dump lidar," Elon Musk said at an April event showcasing Tesla's self-driving technology. "Anyone relying on lidar is doomed."

"Lidar is really a shortcut," added Tesla AI guru Andrej Karpathy. "It sidesteps the fundamental problems of visual recognition that is necessary for autonomy. It gives a false sense of progress, and is ultimately a crutch."

In recent weeks I asked a number of experts about these claims. And I encountered a lot of skepticism. "In a sense all of these sensors are crutches," argued Greg McGuire, a researcher at MCity, the University of Michigan's testing ground for autonomous vehicles. "That's what we build, as engineers, as a society—we build crutches."

Self-driving cars are going to need to be extremely safe and reliable to be accepted by society, McGuire said. And a key principle for high reliability is redundancy. Any single sensor will fail eventually. Using several different types of sensors makes it less likely that a single sensor's failure will lead to disaster.

"Once you get out into the real world, and get beyond ideal conditions, there's so much variability," argues industry analyst (and former automotive engineer) Sam Abuelsamid. "It's theoretically possible that you can do it with cameras alone, but to really have the confidence that the system is seeing what it thinks it's seeing, it's better to have other orthogonal sensing modes"—sensing modes like lidar.

Previously: Robo-Taxis and 'the Best Chip in the World'

Related: Affordable LIDAR Chips for Self-Driving Vehicles
Why Experts Believe Cheaper, Better Lidar is Right Around the Corner
Stanford Researchers Develop Non-Line-of-Sight LIDAR Imaging Procedure
Self Driving Cars May Get a New (non LiDAR) Way to See
Nikon Will Help Build Velodyne's Lidar Sensors for Future Self-Driving Cars


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 5, Insightful) by DannyB on Thursday March 08 2018, @07:25PM (6 children)

    by DannyB (5839) Subscriber Badge on Thursday March 08 2018, @07:25PM (#649641) Journal

    Self driving cars can see around corners. Unlike puny humans.

    Ah, that sounds so much better than saying that killer robots can see around corners. Or fighting vehicles in an urban environment. Or the unmarked van parked out front that can't quite see all of your back yard.

    --
    The Centauri traded Earth jump gate technology in exchange for our superior hair mousse formulas.
    • (Score: 4, Informative) by takyon on Thursday March 08 2018, @07:32PM (2 children)

      by takyon (881) <{takyon} {at} {soylentnews.org}> on Thursday March 08 2018, @07:32PM (#649646) Journal

      An autonomous car kind of is a killer robot.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by bob_super on Thursday March 08 2018, @08:30PM

      by bob_super (1357) on Thursday March 08 2018, @08:30PM (#649693)

      That's not a problem, silly person.
      Soon enough, only bad guys will hide in the street, while our courageous men pick them off from drones, bots and armored vehicles. The inferiority of human vision is an asset for the Good Guys (TM).
      As far as being naked in your backyard goes, we already have full sat and drone coverage. You may want to get that mole checked.

    • (Score: 2) by realDonaldTrump on Thursday March 08 2018, @08:47PM

      by realDonaldTrump (6614) on Thursday March 08 2018, @08:47PM (#649705) Journal

      The Germans had a VERY CROOKED gun. So they could shoot around corners.

    • (Score: 0) by Anonymous Coward on Friday March 09 2018, @03:57AM

      by Anonymous Coward on Friday March 09 2018, @03:57AM (#649847)

      Humans also can't see backwards, yet cars now come with back-facing cameras to be viewed by humans.

      The same goes for IR, radar, etc.

  • (Score: 1) by cocaine overdose on Thursday March 08 2018, @07:52PM (2 children)

    I was going to bitch about the safety of LIDAR, but the experiments are paywalled and there's no access to the laser specs. I did find that the few other NLOS experiments require Class 3B lasers, which is a no-no boo-boo that could pave the way to taking all non-automated cars off the road once the number of accidents and injuries starts piling up after such a tiny detail was withheld. Another unfortunate detail is that most automotive LIDAR sensors use Class 1 lasers (like Waylmao), and EdisonTM doesn't even use LIDAR. Foiled once again, by the God-fearing Velodine and their infinite JS loops.

    • (Score: 1, Informative) by Anonymous Coward on Thursday March 08 2018, @08:22PM (1 child)

      by Anonymous Coward on Thursday March 08 2018, @08:22PM (#649688)

      They mention this in the paper:

      First, to reduce acquisition time, a more powerful laser is needed. For eye-safe operation, this laser may need to operate in the short-wave infrared regime.
      ...[from the supplement]...
      The light source (ALPHALAS PICOPOWER-LD-670-50) consists of a 670 nm wavelength pulsed laser diode with a reported pulse width of 30.6 ps at a 10 MHz repetition rate and 0.11 mW average power.

      https://www.nature.com/articles/nature25489 [nature.com]

  • (Score: 1, Interesting) by Anonymous Coward on Thursday March 08 2018, @08:03PM

    by Anonymous Coward on Thursday March 08 2018, @08:03PM (#649673)

    I wonder if this is the same use of "confocal" as in the microscope invented by Marvin Minsky in 1955, before his major work in AI?
    https://web.media.mit.edu/~minsky/papers/ConfocalMemoir.html [mit.edu]
    Seems likely; here's a small excerpt from his memoir:

    An ideal microscope would examine each point of the specimen and measure the amount of light scattered or absorbed by that point. But if we try to make many such measurements at the same time then every focal image point will be clouded by aberrant rays of scattered light deflected by points of the specimen that are not the point you're looking at. Most of those extra rays would be gone if we could illuminate only one specimen point at a time. There is no way to eliminate every possible such ray, because of multiple scattering, but it is easy to remove all rays not initially aimed at the focal point; just use a second microscope (instead of a condenser lens) to image a pinhole aperture on a single point of the specimen.
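The confocal idea in that excerpt can be shown numerically: the detected signal from a scatterer is the product of the illumination profile and the pinhole detection profile, so off-focus points are doubly suppressed. A toy 1-D sketch, assuming Gaussian point-spread functions (the shapes are illustrative, not from the memoir):

```python
import numpy as np

def psf(x, focus, sigma=1.0):
    """Gaussian stand-in for an optical point-spread function."""
    return np.exp(-((x - focus) ** 2) / (2.0 * sigma ** 2))

x = np.linspace(-5.0, 5.0, 201)       # scatterer positions along the axis
focus = 0.0
widefield = psf(x, focus)             # illumination alone
confocal = psf(x, focus) ** 2         # illumination times pinhole detection

# An off-focus scatterer at x = 2 is suppressed far more strongly
# when a pinhole images only the illuminated point.
i = int(np.argmin(np.abs(x - 2.0)))
print(widefield[i], confocal[i])      # exp(-2) vs exp(-4)
```

The squared fall-off is why confocal scanning also boosts signal-to-background in the NLOS setup: illumination and detection are co-located at each scan point.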

  • (Score: 3, Insightful) by Arik on Thursday March 08 2018, @08:39PM (2 children)

    by Arik (4543) on Thursday March 08 2018, @08:39PM (#649700) Journal
    Shining lasers around as you drive through the city sure sounds like a good way to damage some eyeballs. Even if the laser is weak enough that it doesn't damage them noticeably at first, long-term effects should still be an issue.
    --
    If laughter is the best medicine, who are the best doctors?
    • (Score: 0) by Anonymous Coward on Thursday March 08 2018, @09:51PM

      by Anonymous Coward on Thursday March 08 2018, @09:51PM (#649731)

      I sure hope the goggles are able to do something...

    • (Score: 2) by hemocyanin on Friday March 09 2018, @12:26AM

      by hemocyanin (186) on Friday March 09 2018, @12:26AM (#649788) Journal

      IANAD (I am not a doctor), but I wonder if it will work that way. In my rudimentary way of thinking, I can place my hand on a piece of metal that's 100 F for as long as I want and never suffer any sort of burn, but if I do the same with one that's at 500 F, I will almost instantly get a severe burn. So it seems there is some threshold effect where I could be exposed to massive amounts of energy at some particular level and experience no ill effect, but once a threshold is crossed, the effect is immediate. I suppose there is a middle ground -- if I grab a piece of metal at 120 F and don't let go, how long will it take to get a burn? If I grab that piece of metal for one second and let go for 10 -- can I do that essentially forever without getting burned?

      So blah blah blah -- what do we have here? Lasers that are safe no matter how long you stare at them? Ones that are instantly damaging? Or ones that will cause damage with sufficient continuous exposure?
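The grab-and-release question above can be made concrete with a toy Newton-cooling model: heat accumulates linearly during contact and decays exponentially during the gaps. Every constant here is illustrative, not tissue data:

```python
import math

def peak_temperature(on_s, off_s, cycles,
                     heat_rate=20.0, cool_tau=5.0, ambient=37.0):
    """Peak temperature (deg C) over repeated touch/release cycles:
    linear heating while in contact, exponential cooling in between."""
    t = ambient
    peak = t
    for _ in range(cycles):
        t += heat_rate * on_s                                      # heating
        peak = max(peak, t)
        t = ambient + (t - ambient) * math.exp(-off_s / cool_tau)  # cooling
    return peak

# Continuous 10 s contact just keeps climbing, but 1 s on / 10 s off
# settles at a bounded peak no matter how many cycles you run.
print(peak_temperature(10.0, 0.0, 1))             # 237.0
print(round(peak_temperature(1.0, 10.0, 1000), 1))  # 60.1
```

In this toy model the duty-cycled exposure has a finite steady-state peak while continuous exposure does not, which matches the intuition that a damage threshold plus recovery time changes the picture; real laser-safety limits (maximum permissible exposure) are specified in exactly these terms of power, wavelength, and exposure duration.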

  • (Score: 0) by Anonymous Coward on Friday March 09 2018, @06:04AM

    by Anonymous Coward on Friday March 09 2018, @06:04AM (#649868)

    Has anyone heard anything about how well all these radar and LIDAR systems operate with each other? Will a street full of them cause interference with each other?

    These systems work outside of human visual range, but other animals have different ranges. What is all our 'light' pollution doing to everything else?

  • (Score: 2) by shortscreen on Friday March 09 2018, @07:23AM

    by shortscreen (2252) on Friday March 09 2018, @07:23AM (#649880) Journal

    Sample a 2D array of points and then calculate the attributes of the 3D scene, instead of the other way around.

    If they can get this working in realtime, would the same hardware be suitable for realtime raytracing?
