Phys.org and other sites report on a new type of camera that is extremely fast: it monitors the slope of intensity change at individual pixels, and at the same time requires much less bandwidth than a conventional video camera.
https://phys.org/news/2017-02-ultrafast-camera-self-driving-vehicles-drones.html
It is said to be useful for any type of real-time application, in particular self-driving cars.
From the company site, http://www.hillhouse-tech.com/
Each pixel in our sensor can individually monitor the slope of change in light intensity and report an event if a threshold is reached. Row and column arbitration circuits process the pixel events and make sure only one is granted access to the output port at a time, in a fairly ordered manner, when they receive multiple requests simultaneously. The response time to a pixel event is at the nanosecond scale. As such, the sensor can be tuned to capture moving objects faster than a certain speed threshold. The speed of the sensor is not limited by any traditional concept such as exposure time, frame rate, etc. It can detect fast motion that is traditionally captured by expensive, high-speed cameras running at tens of thousands of frames per second, and at the same time produces 1000x less data.
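The per-pixel mechanism described above can be simulated in a few lines. This is a minimal sketch, not CeleX's actual circuit or API: it assumes a log-intensity change model with a fixed threshold (both common in event-camera literature, but assumptions here), and the function name and event tuple format are made up for illustration.

```python
import numpy as np

def events_from_frames(frames, threshold=0.15):
    """Simulate event-camera output from a stack of intensity frames.

    Each pixel fires an event when its log-intensity has changed by more
    than `threshold` since that pixel's last event, with polarity +1 for
    brightening and -1 for dimming. Returns (t, y, x, polarity) tuples.
    """
    log_frames = np.log(frames.astype(float) + 1e-6)
    ref = log_frames[0].copy()          # per-pixel reference level
    events = []
    for t in range(1, len(log_frames)):
        diff = log_frames[t] - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, y, x, 1 if diff[y, x] > 0 else -1))
            ref[y, x] = log_frames[t, y, x]   # reset reference on event
    return events

# A static scene produces no events; only the changing pixel reports.
frames = np.full((3, 4, 4), 100.0)
frames[2, 1, 1] = 200.0                 # brighten one pixel in frame 2
evs = events_from_frames(frames)        # single event at (2, 1, 1, +1)
```

Note how the bandwidth saving falls out naturally: the 46 unchanged pixel-frames above generate zero output, which is the "1000x less data" effect for mostly-static scenes.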
Sounds sort of like an eye (human or animal), which has a lot of hardware (wetware?) processing directly behind the retina and only sends a relatively slow data rate to the brain.
(Score: 2) by jcross on Wednesday September 20 2017, @02:17PM (3 children)
Well yeah, you'd expect the first iteration of a new camera technology to sacrifice resolution and stuff, and machine vision is a great starter market because the images only need to be functional, unlike the consumer market where they need to be beautiful. Combining it with lightfield imaging is an interesting idea: although that also sacrifices resolution like crazy, you could have a monocular camera with lightning-fast 3D imaging and infinite depth of field. Lots of image processing required, but if it were built into the sensor... I'm having a hard time thinking of good low-res applications for it beyond "it would be super cool", but maybe for giving fly-like powers to drones? If the resolution could be made better, I'm sure taking 3D pictures and videos with phones using only one lens and no focus mechanism could be pretty popular too.
(Score: 1) by Shinobi on Wednesday September 20 2017, @05:31PM (2 children)
My idea for it was sort of a two-sensor approach: this CeleX sensor to catch motion data etc., and the lightfield to catch a lightmap, and then merge them afterwards.
(Score: 2) by jcross on Wednesday September 20 2017, @06:52PM (1 child)
Current lightfield cameras are just an array of tiny fish-eye lenses over a standard camera sensor, so I don't see why you couldn't take the same approach and monitor every point in the lightfield for changes in intensity. It seems easier than making sure the same light gets to two sensors. And as long as they're fabricating specialized sensors anyway, they could do away with the wasted pixels in a lightfield array (the ones in between the circles of the little lenses). That space could even be used for signal pre-processing circuitry or something.
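The "wasted pixels" point can be put in rough numbers. Assuming the microlens images are circles packed on a square grid (an assumption for illustration; real lightfield sensors vary in packing and lens shape), each circle of diameter d inscribed in a d-by-d cell covers π/4 of it:

```python
import math

# Fraction of sensor area covered by square-packed circular
# microlens images, and the leftover area between the circles.
fill = math.pi / 4          # ~0.785 of each grid cell is imaged
wasted = 1 - fill           # ~0.215 of the sensor sees no light
```

So on that simple geometry, roughly a fifth of the sensor area goes unused, which is a fair chunk of real estate to reclaim for pre-processing circuitry.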
(Score: 1) by Shinobi on Wednesday September 20 2017, @09:52PM
My thought was to make use of the high compression rate and effective frame rate of the CeleX sensor, and use the lightfield data to add colour afterwards, at a lower frame rate.