
posted by cmn32480 on Thursday December 28 2017, @04:01PM
from the eye-see dept.

A new time-of-flight imaging system could improve computer vision for self-driving vehicles:

In a new paper appearing in IEEE Access, members of the Camera Culture group present a new approach to time-of-flight imaging that increases its depth resolution 1,000-fold. That's the type of resolution that could make self-driving cars practical.

The new approach could also enable accurate distance measurements through fog, which has proven to be a major obstacle to the development of self-driving cars.

At a range of 2 meters, existing time-of-flight systems have a depth resolution of about a centimeter. That's good enough for the assisted-parking and collision-detection systems on today's cars.

But as Achuta Kadambi, a joint PhD student in electrical engineering and computer science and media arts and sciences and first author on the paper, explains, "As you increase the range, your resolution goes down exponentially. Let's say you have a long-range scenario, and you want your car to detect an object further away so it can make a fast update decision. You may have started at 1 centimeter, but now you're back down to [a resolution of] a foot or even 5 feet. And if you make a mistake, it could lead to loss of life."

At distances of 2 meters, the MIT researchers' system, by contrast, has a depth resolution of 3 micrometers. Kadambi also conducted tests in which he sent a light signal through 500 meters of optical fiber with regularly spaced filters along its length, to simulate the power falloff incurred over longer distances, before feeding it to his system. Those tests suggest that at a range of 500 meters, the MIT system should still achieve a depth resolution of only a centimeter.
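
For a sense of scale: a time-of-flight camera measures how long light takes to reach the target and return, so depth is the speed of light times the round-trip time, divided by two, and any timing uncertainty maps directly onto depth uncertainty. A back-of-the-envelope sketch in Python (the timing figures are illustrative, not from the paper):

    C = 3e8  # speed of light, m/s

    def depth_resolution(timing_resolution_s):
        # Light covers the distance twice (out and back), so
        # depth = C * round_trip_time / 2, and a timing uncertainty
        # dt becomes a depth uncertainty of C * dt / 2.
        return C * timing_resolution_s / 2

    print(depth_resolution(67e-12))  # ~67 ps -> ~0.01 m, today's centimeter-class systems
    print(depth_resolution(20e-15))  # ~20 fs -> ~3e-6 m, the 3-micrometer figure above

Hitting 3 micrometers by timing pulses directly would take femtosecond-class electronics, which is presumably why the authors turn to heterodyning instead.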

Cascaded LIDAR using Beat Notes

Rethinking Machine Vision Time of Flight with GHz Heterodyning (open, DOI: 10.1109/ACCESS.2017.2775138) (DX)

LIDAR at MIT Media Lab (2m48s video)

Related: MIT Researchers Improve Kinect 3D Imaging Resolution by 1,000 Times Using Polarization
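
The "beat notes" in the "Cascaded LIDAR using Beat Notes" link above refer to heterodyning: mix a returning GHz-modulated signal against a local oscillator offset by a small amount, and the product contains a low-frequency beat whose phase still carries the GHz carrier's phase delay, so slow electronics can read out a fast signal. A toy numerical sketch of that idea in Python (the frequencies, sample rate, and correlation readout are illustrative assumptions, not the paper's actual optical setup):

    import numpy as np

    C = 3e8        # speed of light, m/s
    F_MOD = 1e9    # 1 GHz modulation (illustrative)
    DELTA_F = 1e6  # local-oscillator offset -> 1 MHz beat note
    FS = 10e9      # simulation sample rate, 10 GS/s (illustrative)
    T = np.arange(0, 1e-5, 1 / FS)  # 10 us capture = 10 beat cycles

    def recover_distance(true_distance_m):
        # The return signal is the GHz tone delayed by the round trip.
        tau = 2 * true_distance_m / C
        returned = np.cos(2 * np.pi * F_MOD * (T - tau))
        # Multiplying by an offset local oscillator yields a sum tone
        # near 2*F_MOD plus a beat at DELTA_F whose phase equals the
        # carrier's phase delay, 2*pi*F_MOD*tau.
        lo = np.cos(2 * np.pi * (F_MOD + DELTA_F) * T)
        mixed = returned * lo
        # Correlate against the beat frequency to extract that phase.
        phase = np.angle(np.sum(mixed * np.exp(-2j * np.pi * DELTA_F * T)))
        tau_est = (phase % (2 * np.pi)) / (2 * np.pi * F_MOD)
        return tau_est * C / 2  # unambiguous only within C/(2*F_MOD) = 15 cm

    print(recover_distance(0.05))  # ~0.05 m, modulo the 15 cm ambiguity

The 15 cm phase-wrapping ambiguity is presumably what the "cascaded" part addresses: coarser beat frequencies pin down which fine-phase interval a measurement falls in.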


Original Submission

 
  • (Score: 3, Insightful) by JoeMerchant (3937) on Thursday December 28 2017, @05:21PM (#615166) (1 child)

    If they're demonstrating this in fiber-optic simulations of the real world, great.

    Call me when they've at least taken it out the back door of the lab and demonstrated range-measurement accuracy "in the field."

  • (Score: 2) by Runaway1956 (2926) Subscriber Badge on Thursday December 28 2017, @06:25PM (#615197) Journal

    I'm thinking along those lines myself. The first thing that occurred to me was that they are using visible light. It's well known that fog eats visible light. Well, scatters it so badly that it might as well be eaten. Then they introduced the idea of high-frequency light? Is it still visible light, or not? TFA isn't clear here - are they talking about ultraviolet light? Well, we've long known that ultraviolet penetrates cloud cover. Or are they just using light at the upper boundary of the visible range - those colors that only women with special eyes can see?

    But, whatever it is, I'll want the same call when they demonstrate that it works out in the real world. Shooting a ray of light through a fiber-optic cable and trying to simulate real-world conditions just doesn't cut it for me. Mount that bad boy on the roof, hood, or bumper of a car, and demonstrate what it can do.

    Meanwhile, there's no need to do away with infrared, lidar, or any other sensing technologies. Repetitive redundancy could make me trust these self-driving cars a little more. Of course, I'll still want some kind of override in any car that I might consider driving. I don't trust the programmers to plan for every possibility!