posted by martyb on Wednesday September 20 2017, @06:14AM   Printer-friendly
from the greased-lightning dept.

Phys.org and other sites report on a new type of camera that is extremely fast: it monitors the slope of intensity change at individual pixels, and at the same time requires much less bandwidth than a conventional video camera:
    https://phys.org/news/2017-02-ultrafast-camera-self-driving-vehicles-drones.html
It is said to be useful for any kind of real-time application, self-driving cars in particular.

From the company's site, http://www.hillhouse-tech.com/:

Each pixel in our sensor can individually monitor the slope of change in light intensity and report an event if a threshold is reached. Row and column arbitration circuits process the pixel events and, when they receive multiple requests simultaneously, make sure only one is granted access to the output port at a time, in a fair order. The response time to a pixel event is at the nanosecond scale. As such, the sensor can be tuned to capture moving objects faster than a certain speed threshold. The speed of the sensor is not limited by any traditional concept such as exposure time or frame rate. It can detect fast motion that is traditionally captured by expensive high-speed cameras running at tens of thousands of frames per second, while producing 1000x less data.
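
The company does not publish code, but the event-generation principle it describes (per-pixel change detection against a threshold, rather than full frames) can be modelled in a few lines of Python. In this illustrative sketch the threshold value, the log-intensity formulation, and the toy frames are all assumptions, not the CeleX design:

    import numpy as np

    def events_from_frames(prev, curr, threshold=0.15):
        """Emit (row, col, polarity) events where the per-pixel change in
        log intensity exceeds a threshold. A software model of the idea,
        not the CeleX hardware; the threshold value is made up."""
        # Log intensity makes the threshold a relative change, as
        # event-driven sensors typically do.
        d = np.log1p(curr.astype(float)) - np.log1p(prev.astype(float))
        rows, cols = np.nonzero(np.abs(d) > threshold)
        polarity = np.sign(d[rows, cols]).astype(int)  # +1 brighter, -1 darker
        return list(zip(rows.tolist(), cols.tolist(), polarity.tolist()))

    # Toy frames: a bright spot moves one pixel to the right.
    prev = np.zeros((4, 4)); prev[1, 1] = 200.0
    curr = np.zeros((4, 4)); curr[1, 2] = 200.0
    print(events_from_frames(prev, curr))
    # [(1, 1, -1), (1, 2, 1)] -- only the two changed pixels report,
    # which is the intuition behind the claimed ~1000x data reduction.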

Sounds sort of like an eye (human or animal), which has a lot of hardware (wetware?) processing directly behind the retina and sends only a relatively low data rate to the brain.


Original Submission

 
  • (Score: 1) by Shinobi (6707) on Wednesday September 20 2017, @05:31PM (#570727) (2 children)

    My idea for it was sort of a two-sensor approach: this CeleX sensor to capture motion data and the like, and the lightfield camera to capture a lightmap, then merge the two afterwards.

  • (Score: 2) by jcross (4009) on Wednesday September 20 2017, @06:52PM (#570776) (1 child)

    Current lightfield cameras are just an array of tiny fish-eye lenses over a standard camera sensor, so I don't see why you couldn't take the same approach and monitor every point in the lightfield for changes in intensity. It seems easier than making sure the same light gets to two sensors. And as long as they're fabricating specialized sensors anyway, they could do away with the wasted pixels in a lightfield array (the ones in between the circles of the little lenses). That space could even be used for signal pre-processing circuitry or something.
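
    As a rough sketch of that idea (monitoring each lenslet of a lightfield array as one "point" and flagging intensity changes), something like the following Python could serve. The lenslet pitch, tile layout, and relative threshold here are all hypothetical, not any real camera's design:

        import numpy as np

        def lenslet_changes(prev, curr, pitch=8, threshold=0.1):
            """Hypothetical sketch: treat each pitch x pitch lenslet tile
            of a lightfield sensor image as one 'point' and flag tiles
            whose mean intensity changed by more than a relative
            threshold. Pitch, layout, and threshold are assumptions."""
            gh, gw = prev.shape[0] // pitch, prev.shape[1] // pitch

            def tile_means(img):
                # Average the pixels under each lenslet into one value.
                crop = img[:gh * pitch, :gw * pitch].astype(float)
                return crop.reshape(gh, pitch, gw, pitch).mean(axis=(1, 3))

            m0, m1 = tile_means(prev), tile_means(curr)
            rel = np.abs(m1 - m0) / (m0 + 1e-6)
            return np.argwhere(rel > threshold)  # (lenslet_row, lenslet_col) pairs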

    • (Score: 1) by Shinobi (6707) on Wednesday September 20 2017, @09:52PM (#570864)

      My thought was to make use of the high compression rate and effective frame rate of the CeleX sensor, and use the lightfield data to add colour afterwards, at a lower frame rate.
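
      A minimal sketch of that fusion idea in Python: pair each high-rate monochrome motion frame with the most recent low-rate colour frame. The frame rates and the nearest-frame pairing rule are illustrative assumptions, not a published pipeline:

          import numpy as np

          def colourise_motion(motion_frames, colour_frames,
                               motion_hz=1000, colour_hz=30):
              """Sketch of the proposed fusion: pair each high-rate
              monochrome motion frame with the most recent low-rate
              colour (lightfield) frame. Rates and nearest-frame pairing
              are illustrative assumptions."""
              out = []
              for i, motion in enumerate(motion_frames):
                  t = i / motion_hz                     # time of this motion frame
                  j = min(int(t * colour_hz), len(colour_frames) - 1)
                  # Paint the latest colour wherever motion was detected.
                  mask = motion > 0                     # (H, W) boolean
                  out.append(np.where(mask[..., None], colour_frames[j], 0))
              return out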