
posted by martyb on Wednesday September 20 2017, @06:14AM   Printer-friendly
from the greased-lightning dept.

Phys.org and other sites report on a new type of camera that is extremely fast: it watches the slope of intensity change at individual pixels and, at the same time, requires much less bandwidth than a conventional video camera:
    https://phys.org/news/2017-02-ultrafast-camera-self-driving-vehicles-drones.html
Said to be useful for any type of real time use, in particular self-driving cars.

From the company site, http://www.hillhouse-tech.com/

Each pixel in our sensor can individually monitor the slope of change in light intensity and report an event if a threshold is reached. Row and column arbitration circuits process the pixel events and, when multiple requests arrive simultaneously, ensure that only one at a time is granted access to the output port, in a fairly ordered manner. The response time to a pixel event is at the nanosecond scale. As such, the sensor can be tuned to capture moving objects faster than a certain speed threshold. The speed of the sensor is not limited by any traditional concept such as exposure time or frame rate. It can detect fast motion that traditionally required expensive high-speed cameras running at tens of thousands of frames per second, while producing 1000x less data.
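The per-pixel behaviour described in the quote can be sketched in a few lines (a toy illustration of the general event-camera idea, not CeleX's actual design; the threshold value here is made up):

```python
# Toy model of an event-camera pixel: instead of reporting intensity every
# frame, the pixel emits an event only when the change since the last
# reported event crosses a threshold.

def pixel_events(samples, threshold=0.2):
    """Yield (time, polarity) events for one pixel's intensity samples."""
    events = []
    ref = samples[0]  # intensity at the last emitted event
    for t, intensity in enumerate(samples[1:], start=1):
        delta = intensity - ref
        if abs(delta) >= threshold:
            events.append((t, 1 if delta > 0 else -1))
            ref = intensity  # reset the reference after reporting
    return events

# A static scene produces no events at all; a single fast change produces
# a single event -- which is where the claimed ~1000x data reduction comes from.
static = [0.5] * 100
moving = [0.5] * 50 + [0.9] * 50
print(len(pixel_events(static)))  # 0
print(len(pixel_events(moving)))  # 1
```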

Sounds sort of like an eye (human or animal), which has a lot of hardware (wetware?) processing directly behind the retina and sends data to the brain at a relatively slow rate.


Original Submission

  • (Score: 1, Offtopic) by c0lo on Wednesday September 20 2017, @08:01AM (5 children)

    by c0lo (156) Subscriber Badge on Wednesday September 20 2017, @08:01AM (#570555) Journal

    What has happened with Dick N...?
    TFA is missing the relevant opinions already!

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 1, Offtopic) by c0lo on Wednesday September 20 2017, @12:12PM (4 children)

      by c0lo (156) Subscriber Badge on Wednesday September 20 2017, @12:12PM (#570590) Journal

      No, seriously guys.
      What are you thinking when you post stories so "interesting" that, 2 hours after publishing it, there's no comment?

      How long do you think S/N can survive if the level of the stories is "Yeah, hmmm, interesting. Sorta. Maybe. Meh... can't be bothered to think about it enough to post a comment"?
      I mean, come on! Not even the submitter can find 3 minutes to explain why s/he found it worthy of submission? What is S/N, a place where stories are aborted after 6 months of gestation in the queue? Doesn't the "parent" feel any responsibility for the "child" abandoned on the steps of the S/N "cathedral"?

      Look, take IBM Simulates Beryllium Hydride Molecule Using a Quantum Computer [soylentnews.org] - 3 hours after publishing, the only comment there is a "Dick N..." troll post.
      The one after it - Science Magazine Interview With European Southern Observatory Chief [soylentnews.org] - an hour plus, and no comments.

      • (Score: 2, Informative) by Anonymous Coward on Wednesday September 20 2017, @12:29PM (3 children)

        by Anonymous Coward on Wednesday September 20 2017, @12:29PM (#570594)

        Do you have a crystal ball that can predict the interest in a particular story?

        I submit 3-5 articles/week (very rough long term average), and most are not political. Most of them are published. Perhaps 3 in 10 generate 40+ comments(?), and sometimes the comments head off on a tangent unrelated to the submission. How do you decide what will generate discussion?

        I submitted this one because the idea of fast processing directly coupled to the image sensor was new to me (except from reading about biological systems). While there are only a couple of comments below, they are interesting. Maybe when the USA West Coast wakes up there will be some more interest?

        • (Score: 1, Offtopic) by c0lo on Wednesday September 20 2017, @01:01PM (2 children)

          by c0lo (156) Subscriber Badge on Wednesday September 20 2017, @01:01PM (#570602) Journal

          Do you have a crystal ball that can predict the interest in a particular story?

          No, I don't.
          But (and please take it as constructive criticism) the summary is uninviting - at least for me. I don't know enough about the subject for the summary to ring any bells, and I see very little by way of "trust me, there is something to it. Here, have some threads you may want to follow."

          Examples of places where such insertions may make TFS more attractive/inviting:

          Said to be useful for any type of real time use, in particular self-driving cars.

          Sounds like "Yeah, there are some rumours it may have practical application in self-serving cars, the technology du jour where the money is typically spent".
          When, in fact, the phys.org article goes to some length in explaining why this use of the technology is attractive (one of the reasons: "Unlike typical optical cameras, which can be blinded by bright light and unable to make out details in the dark, NTU's new smart camera can record the slightest movements and objects in real time.")

          ---

          From the company site,

          [etc]

          Ah, so this is useful for taking photos of bullets? Or what?
          Is it something really cool or, given that it's on "the company site", just vapourware? (the latter is most common these days)

          (when, in fact, the phys.org article attributes the invention to Nanyang Technological University, Singapore - thus it's credible)

          ---

          Sounds sort of like an eye (human or animal), which has a lot of hardware (wetware?) processing directly behind the retina and only sends a relatively slow data rate to the brain.

          Is this a hypothesis of yours or can you back it with some citations?

          • (Score: 2, Insightful) by Anonymous Coward on Wednesday September 20 2017, @01:35PM

            by Anonymous Coward on Wednesday September 20 2017, @01:35PM (#570608)

            > ...please take it as constructive criticism

            Did you wake up on the wrong side of bed today?

            In your post you present significant information that could contribute to and improve the discussion, beyond what the original submitter/editors put in tfs. But instead of a normal reply you chose to go "meta" and turn your links and comments into a whine... Actually, it's worse than a whine, since you are complaining about volunteers.

          • (Score: 0, Interesting) by Anonymous Coward on Wednesday September 20 2017, @02:21PM

            by Anonymous Coward on Wednesday September 20 2017, @02:21PM (#570615)
            Here's some constructive criticism for you: shut up if you have nothing interesting to say or add about the story.

            The story is somewhat interesting, but it seems most Soylentils had nothing interesting to add to it, and fortunately most didn't decide to post off-topic crap.

            If you want troll fests and zillions of crap comments by people who don't know shit please go to Slashdot.
  • (Score: 3, Informative) by Bot on Wednesday September 20 2017, @08:17AM (5 children)

    by Bot (3902) on Wednesday September 20 2017, @08:17AM (#570556) Journal

    This kind of data, an ordered stream of changes, is inherently optimized compared to raw capture, and it might simplify the encoding of the information too, instead of making the cam look at the whole picture each frame.

    So
    > no exposure probs
    > no framerate probs
    > easier encoding
    > small uncompressed stream

    Potentially (the key word here) a game changer that redefines the terms photography and videography.
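    The "small uncompressed stream" point can be put in rough numbers (all of them made up for illustration; real sensor figures will differ):

```python
# Back-of-the-envelope comparison: a frame camera ships every pixel every
# frame; an event camera ships only the pixels that changed, each as a
# small (x, y, timestamp, polarity) record. All numbers are hypothetical.

width, height = 640, 480
frame_rate = 10_000          # fps a high-speed camera would need
bytes_per_pixel = 1

frame_bandwidth = width * height * bytes_per_pixel * frame_rate  # bytes/s

active_pixels = width * height // 1000   # assume 0.1% of pixels see motion
bytes_per_event = 8                      # x, y, timestamp, polarity packed
event_bandwidth = active_pixels * bytes_per_event * frame_rate   # bytes/s

print(f"frame-based: {frame_bandwidth / 1e9:.1f} GB/s")
print(f"event-based: {event_bandwidth / 1e9:.2f} GB/s")
print(f"ratio: {frame_bandwidth / event_bandwidth:.0f}x")
```

    The ratio scales with how sparse the motion is, which is why a mostly static scene gets the headline-grade reduction.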

    --
    Account abandoned.
    • (Score: 3, Insightful) by Shinobi on Wednesday September 20 2017, @08:39AM (4 children)

      by Shinobi (6707) on Wednesday September 20 2017, @08:39AM (#570561)

      This is more aimed at machine vision, given the modes etc. Note that none of the demos shows colour photography/video for the CeleX. Resolution also seems lower than what video and photography people want.

      However, if you could somehow combine this with lightfield cameras that'd be lovely...

      • (Score: 2) by jcross on Wednesday September 20 2017, @02:17PM (3 children)

        by jcross (4009) on Wednesday September 20 2017, @02:17PM (#570613)

        Well yeah, you'd expect the first iteration of a new camera technology to sacrifice resolution and stuff, and machine vision is a great starter market because the images only need to be functional, unlike the consumer market where they need to be beautiful.

        That's an interesting idea, combining it with lightfield imaging: although that also sacrifices resolution like crazy, you could have a monocular camera with lightning-fast 3D imaging and infinite depth of field. Lots of image processing required, but if it were built into the sensor...

        I'm having a hard time thinking of good low-res applications for it beyond "it would be super cool", but maybe for giving fly-like powers to drones? If the resolution could be made better, I'm sure taking 3D pictures and videos with phones using only one lens and no focus mechanism could be pretty popular too.

        • (Score: 1) by Shinobi on Wednesday September 20 2017, @05:31PM (2 children)

          by Shinobi (6707) on Wednesday September 20 2017, @05:31PM (#570727)

          My idea for it was sort of a two-sensor approach. This CeleX sensor to catch motion data etc, and the lightfield to catch a lightmap, and then merge them afterwards.

          • (Score: 2) by jcross on Wednesday September 20 2017, @06:52PM (1 child)

            by jcross (4009) on Wednesday September 20 2017, @06:52PM (#570776)

            Current lightfield cameras are just an array of tiny fish-eye lenses over a standard camera sensor, so I don't see why you couldn't take the same approach and monitor every point in the lightfield for changes in intensity. It seems easier than making sure the same light gets to two sensors. And as long as they're fabricating specialized sensors anyway, they could do away with the wasted pixels in a lightfield array (the ones in between the circles of the little lenses). That space could even be used for signal pre-processing circuitry or something.

            • (Score: 1) by Shinobi on Wednesday September 20 2017, @09:52PM

              by Shinobi (6707) on Wednesday September 20 2017, @09:52PM (#570864)

              My thought was to make use of the high compression rate and effective frame rate of the CeleX sensor, and use the lightfield data to add colour afterwards, at a lower frame rate.

  • (Score: 2) by crafoo on Wednesday September 20 2017, @04:24PM (1 child)

    by crafoo (6639) on Wednesday September 20 2017, @04:24PM (#570679)

    This is pretty cool, and part of a nice trend of moving computing away from a few central, general-purpose processors and into specialized devices closer to the sensors and actuators of a system. This in particular seems like it could be expanded to trigger events back to the CPU (the general decision-maker): is something fast moving from left to right across my vision?
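    A sketch of what such a trigger might look like (the event format, function name, and threshold are all hypothetical, not anything from the article):

```python
# Hypothetical on-sensor routine: the sensor streams (timestamp, x, y)
# events, and a tiny check raises a single flag to the CPU when something
# sweeps rightward across the field of view fast enough.

def left_to_right(events, min_speed=100.0):
    """Return True if the event cloud drifts rightward faster than min_speed px/s."""
    if len(events) < 2:
        return False
    (t0, x0, _), (t1, x1, _) = events[0], events[-1]
    if t1 == t0:
        return False
    speed = (x1 - x0) / (t1 - t0)  # px per second; positive means rightward
    return speed > min_speed

# An object crossing 500 px in 10 ms: events march from x=0 to x=500.
sweep = [(i * 0.0001, i * 5, 240) for i in range(101)]
print(left_to_right(sweep))  # True
```

    The CPU then only ever sees a boolean, not a pixel stream, which is the point of pushing the processing out to the sensor.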

    • (Score: 0) by Anonymous Coward on Thursday September 21 2017, @03:27AM

      by Anonymous Coward on Thursday September 21 2017, @03:27AM (#570968)

      > Is something fast moving from left to right across my vision?

      From memory, I think this is one of the main ideas in Lettvin's famous paper, 1959 "What the Frog's Eye Tells the Frog's Brain", Proceedings of the IRE, Vol. 47, No. 11, November; (with Maturana, McCulloch, and Pitts) -- citation taken from https://en.wikipedia.org/wiki/Jerome_Lettvin#Published_papers [wikipedia.org]

      tl;dr summary: Frog eyes have internal processing that is very good at detecting motion--presumably for catching flies.
