
posted by cmn32480 on Thursday December 28 2017, @04:01PM
from the eye-see dept.

A new time-of-flight imaging system could improve computer vision for self-driving vehicles:

In a new paper appearing in IEEE Access, members of the Camera Culture group present a new approach to time-of-flight imaging that increases its depth resolution 1,000-fold. That's the type of resolution that could make self-driving cars practical.

The new approach could also enable accurate distance measurements through fog, which has proven to be a major obstacle to the development of self-driving cars.

At a range of 2 meters, existing time-of-flight systems have a depth resolution of about a centimeter. That's good enough for the assisted-parking and collision-detection systems on today's cars.

But as Achuta Kadambi, a joint PhD student in electrical engineering and computer science and media arts and sciences and first author on the paper, explains, "As you increase the range, your resolution goes down exponentially. Let's say you have a long-range scenario, and you want your car to detect an object further away so it can make a fast update decision. You may have started at 1 centimeter, but now you're back down to [a resolution of] a foot or even 5 feet. And if you make a mistake, it could lead to loss of life."

At distances of 2 meters, the MIT researchers' system, by contrast, has a depth resolution of 3 micrometers. Kadambi also conducted tests in which he sent a light signal through 500 meters of optical fiber with regularly spaced filters along its length, to simulate the power falloff incurred over longer distances, before feeding it to his system. Those tests suggest that at a range of 500 meters, the MIT system should still achieve a depth resolution of only a centimeter.
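
As a back-of-the-envelope check (this arithmetic is illustrative, not from the paper): since the light travels out and back, a depth resolution of Δd corresponds to resolving a round-trip time of 2Δd/c, which is roughly 67 picoseconds for 1 centimeter but only about 20 femtoseconds for 3 micrometers. A minimal Python sketch of that conversion:

    # Rough conversion between depth resolution and the round-trip timing
    # resolution it implies (illustrative arithmetic, not the paper's method).
    C = 299_792_458.0  # speed of light, m/s

    def roundtrip_time(depth_resolution_m):
        """Round-trip timing resolution needed for a given depth resolution."""
        return 2.0 * depth_resolution_m / C

    print(roundtrip_time(0.01))   # ~6.7e-11 s, about 67 ps for 1 cm
    print(roundtrip_time(3e-6))   # ~2.0e-14 s, about 20 fs for 3 micrometers

Resolving tens of femtoseconds directly is impractical for pulse-timing electronics, which is presumably why the paper measures the phase of gigahertz beat notes (heterodyning) rather than timing returns directly.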

Cascaded LIDAR using Beat Notes

Rethinking Machine Vision Time of Flight with GHz Heterodyning (open, DOI: 10.1109/ACCESS.2017.2775138) (DX)

LIDAR at MIT Media Lab (2m48s video)

Related: MIT Researchers Improve Kinect 3D Imaging Resolution by 1,000 Times Using Polarization


Original Submission

 
  • (Score: 4, Informative) by urza9814 (3954) on Thursday December 28 2017, @06:18PM (#615192) Journal

    Reading the summary, it sounds like they figured out a way to make the cameras for self-driving cars more accurate at longer ranges. However, I can't reconcile that with Time of Flight [wikipedia.org], which sounds like a measurement of time rather than of distance or precision.

    The article made me think of 'time of flight range sensors' like the following:
    https://www.sparkfun.com/products/12785 [sparkfun.com]

    Now, I think that one only gives a single reading -- it sends out a signal (IR or ultrasonic in the ones I've seen), measures how long it takes for the signal to bounce back, and reports the distance corresponding to that time. In this case it seems like they're using something more like the Xbox Kinect, which gives a 2D image but uses the same general principle. So it's a 2D camera that senses distance based on the time it takes a signal to reflect, rather than sensing light/color intensity. The beam/sensor probably covers some angle, so the farther away you get, the less accurate it becomes, as described in TFS.
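
    For illustration (my own sketch, not from the article or the SparkFun docs), the basic calculation a pulsed time-of-flight sensor does is just half the round-trip time multiplied by the propagation speed:

        # Toy pulsed time-of-flight calculation (hypothetical example, not the
        # MIT system): convert a measured round-trip time into a distance.
        C_LIGHT = 299_792_458.0   # m/s for optical/IR sensors
        C_SOUND = 343.0           # m/s, roughly, for ultrasonic sensors in air

        def distance_from_roundtrip(t_seconds, speed=C_LIGHT):
            """Distance to the target given the measured round-trip time."""
            return speed * t_seconds / 2.0

        print(distance_from_roundtrip(13.3e-9))           # ~2.0 m via light
        print(distance_from_roundtrip(11.7e-3, C_SOUND))  # ~2.0 m via ultrasound

    Real sensors add calibration and ambient rejection on top of that, and a Kinect-style sensor effectively does this per pixel to build a depth image.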

    Here's a page describing the Kinect's system in a bit more detail if you're interested:
    https://blogs.technet.microsoft.com/microsoft_blog/2013/10/02/collaboration-expertise-produce-enhanced-sensing-in-xbox-one/ [microsoft.com]

    What I'm curious about, though, is how you prevent different cars using the same tech from interfering with each other... it seems like you'd need to essentially 'encrypt' the signal and filter out any incoming signals that weren't encoded by your own sensor. And if you're using IR, for example, what happens when the car encounters a material that absorbs IR?
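
    One way that kind of tagging is sometimes done (my own illustration, not something from the article) is to modulate the outgoing signal with a pseudo-random code and correlate the received signal against your own code, so returns carrying someone else's code mostly average out:

        # Toy code-correlation interference rejection (hypothetical example).
        import numpy as np

        rng = np.random.default_rng(0)
        my_code = rng.choice([-1.0, 1.0], size=256)      # our pseudo-random sequence
        other_code = rng.choice([-1.0, 1.0], size=256)   # another emitter's sequence

        received = np.zeros(1024)
        received[40:40 + 256] += my_code                 # our echo, 40 samples late
        received[100:100 + 256] += 0.8 * other_code      # interfering echo
        received += 0.1 * rng.standard_normal(received.size)  # noise

        # Correlating against our own code gives a sharp peak at our echo's delay;
        # the other emitter's code contributes only low-level background.
        corr = np.correlate(received, my_code, mode="valid")
        print(int(np.argmax(corr)))                      # expected: 40

    Each sensor using its own code would play the role of the 'encryption' described above; what real LIDAR vendors actually do will vary.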
