posted by janrinok on Tuesday April 07, @06:18AM   Printer-friendly

Contrary to long-standing beliefs, motion from eye movements helps the brain perceive depth—a finding that could enhance virtual reality:

When you go for a walk, how does your brain know the difference between a parked car and a moving car? This seemingly simple distinction is challenging because eye movements, such as the ones we make when watching a car pass by, make even stationary objects move across the retina—motion that has long been thought of as visual "noise" the brain must subtract out.

Now, researchers at the University of Rochester have discovered that instead of being meaningless interference, the visual motion of an image caused by eye movements helps us understand the world. The specific patterns of visual motion created by eye movements are useful to the brain for figuring out how objects move and where they are located in 3D space.

"The conventional idea has been that the brain needs to somehow discount, or subtract off, the image motion that is produced by eye movements, as this motion has been thought to be a nuisance," says Greg DeAngelis, [...] "But we found that the visual motion produced by our eye movements is not just a nuisance variable to be subtracted off; rather, our brains analyze these global patterns of image motion and use this to infer how our eyes have moved relative to the world."

[...] "We show that the brain considers many pieces of information to understand the 3D structure of the world through vision, including the patterns of image motion caused by eye movements," says DeAngelis. "Contrary to conventional ideas, the brain doesn't ignore or suppress image motion produced by eye movement. Instead, it uses this image motion to understand a scene and accurately estimate an object's motion and depth."
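The depth dependence DeAngelis describes can be sketched with the textbook pinhole-camera flow decomposition (an illustration of the general principle, not the paper's actual model): rotation of the eye or camera produces image motion that is identical at every depth, while translation produces motion that scales with 1/depth — which is exactly why the pattern of image motion carries 3D information.

```python
def flow(x, y, Z, T, omega, f=1.0):
    """Instantaneous optic flow at image point (x, y) for a pinhole camera
    with focal length f, viewing a point at depth Z, while the camera
    translates with velocity T and rotates with angular velocity omega.
    Standard decomposition: the translational part scales with 1/Z
    (depth-dependent), the rotational part does not."""
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    # translational (parallax) component -- carries depth information
    u_t = (-f * Tx + x * Tz) / Z
    v_t = (-f * Ty + y * Tz) / Z
    # rotational component -- identical for all depths
    u_r = (x * y / f) * wx - (f + x**2 / f) * wy + y * wz
    v_r = (f + y**2 / f) * wx - (x * y / f) * wy - x * wz
    return u_t + u_r, v_t + v_r

# A pure eye rotation gives the same flow for near and far points;
# a translation gives different flow depending on depth.
rot_near = flow(0.1, 0.2, Z=1.0, T=(0, 0, 0), omega=(0, 0.1, 0))
rot_far = flow(0.1, 0.2, Z=100.0, T=(0, 0, 0), omega=(0, 0.1, 0))
trans_near, _ = flow(0.1, 0.2, Z=1.0, T=(0.1, 0, 0), omega=(0, 0, 0))
trans_far, _ = flow(0.1, 0.2, Z=10.0, T=(0.1, 0, 0), omega=(0, 0, 0))
```

Here `rot_near` equals `rot_far`, while `trans_near` and `trans_far` differ, so the global flow pattern lets an observer separate self-motion from scene structure.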

This research has important implications for understanding visual perception, which informs how the brain interprets everyday activities like reading and recognizing faces. But it could also provide insight and new applications for visual technologies, such as virtual reality headsets.

"VR headsets don't factor in how the eyes are moving relative to the scene when they compute the images to show to each eye. There may be a stark mismatch between the image motion that is shown to the observer in VR and what the brain is expecting to receive based on the eye movements that the observer is making," says DeAngelis. This could be what causes some people to experience motion sickness while using a VR headset.

Journal Reference: Xu, ZX., Pang, J., Anzai, A. et al. Flexible computation of object motion and depth based on viewing geometry inferred from optic flow. Nat Commun 17, 1092 (2026). https://doi.org/10.1038/s41467-025-67857-4


Original Submission

  • (Score: 4, Informative) by JoeMerchant on Tuesday April 07, @01:48PM (5 children)

    by JoeMerchant (3937) on Tuesday April 07, @01:48PM (#1439172)

    Our brains analyze the input they are given. They evolve to maximize chances of successful reproduction based on the input as it is made available.

    Some "optical stabilization" is built into the system via the eye muscles - tracking targets, rotating for a level horizon, etc. - but it's all information, and processing the available information to best advantage is what billions of years of evolution have driven it to do.

    --
    🌻🌻🌻🌻 [google.com]
    • (Score: 2) by mcgrew on Wednesday April 08, @01:08AM (1 child)

      by mcgrew (701) <publish@mcgrewbooks.com> on Wednesday April 08, @01:08AM (#1439219) Homepage Journal

      A lot of things have to do with it. Your brain does a lot of impressive math without you even knowing it's doing math; part of 3D vision is how much muscle pressure is needed to focus. You no more need to think about it than you need to think about scratching your ass.

      I guess the kids who did the study never had a View Master. [mattel.com]

      --
      Are the Republicans really in favor of genocide, or are they just cowards terrified of terrorist twit Trump?
      • (Score: 3, Interesting) by JoeMerchant on Wednesday April 08, @01:15AM

        by JoeMerchant (3937) on Wednesday April 08, @01:15AM (#1439221)

        > part of 3D vision is how much muscle pressure is needed to focus.

        Yup, then my cataract / lens replacement surgery went in and cut all the muscle connections to the lens - so that's gone now.

        I just had an eye appointment, vision tested out at 20/15 - and I had to explain to the doc that it still sucks compared to what I had before... Yeah, if I can settle in on central focus and get all the floaters out of the way then I can read a 20/15 chart line, but when I was signing in at the front desk I couldn't visually process a whole 8.5" x 11" sheet on a clipboard without consciously scanning the whole thing. OEM vision would have seen the bottom of the page without having to think about it and do the bobble-head maneuver.

        --
        🌻🌻🌻🌻 [google.com]
    • (Score: 0) by Anonymous Coward on Wednesday April 08, @01:38AM (2 children)

      by Anonymous Coward on Wednesday April 08, @01:38AM (#1439227)

      Yeah. IMO the mind uses multiple images to construct 3D. Whether the multiple images come from eye movements or other movement is not the important point.

      See also: https://en.wikipedia.org/wiki/Kinetic_depth_effect [wikipedia.org]

      Heck, many minds don't even need optical images to construct the 3D. We can even use our ears or other senses (touch). If we hear various echoes in a room (from snapping fingers, tongue clicks[1], etc.), we can get a 3D "image" of the room - not a high-res one, of course, but...

      [1] https://en.wikipedia.org/wiki/Human_echolocation [wikipedia.org]

      • (Score: 2) by JoeMerchant on Wednesday April 08, @02:43AM (1 child)

        by JoeMerchant (3937) on Wednesday April 08, @02:43AM (#1439230)

        > IMO the mind uses multiple images to construct 3D

        The mind uses whatever it's given.

        One technique for determining 3D depth works in moving visual fields: nearer things sweep through larger angular displacements than farther things. Math geeks use this in image processing to separate near objects from far ones as seen from moving cameras.

        Also useful when you have a plurality of unladen African swallows flying in a uniform direction with varying distances from the observer, or similar situations. https://www.youtube.com/watch?v=uio1J2PKzLI [youtube.com]
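The parallax relation is simple enough to write down; here's a toy sketch (the constants `baseline` and `focal_px` are made-up illustrative values, not from any particular system): a sideways camera shift of `baseline` makes a point at depth Z move by roughly `focal_px * baseline / Z` pixels, so the bigger the shift, the nearer the object.

```python
def disparity(depth, baseline=0.1, focal_px=800.0):
    """Pixel shift of a point at `depth` metres after the camera moves
    `baseline` metres sideways (fronto-parallel approximation)."""
    return focal_px * baseline / depth

def depth_from_disparity(shift_px, baseline=0.1, focal_px=800.0):
    """Invert the relation: a bigger measured shift means a nearer object."""
    return focal_px * baseline / shift_px

near_shift = disparity(2.0)    # object 2 m away
far_shift = disparity(20.0)    # object 20 m away -- moves much less
```

Image-processing pipelines that separate near from far objects from a moving camera are, at heart, measuring these shifts and inverting them.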

        --
        🌻🌻🌻🌻 [google.com]
        • (Score: 0) by Anonymous Coward on Wednesday April 08, @03:11AM

          by Anonymous Coward on Wednesday April 08, @03:11AM (#1439231)

          >... math geeks use this in image processing to separate near from far objects as seen from moving cameras.

          Sounds like a good opening for spoofing these "vision" systems. Just make a bunch of inflatables that look like normal things (cars, people on bikes, etc) that are scaled either up or down from normal sizes.

  • (Score: 1) by khallow on Tuesday April 07, @05:04PM (3 children)

    by khallow (3766) Subscriber Badge on Tuesday April 07, @05:04PM (#1439191) Journal

    "The conventional idea has been that the brain needs to somehow discount, or subtract off, the image motion that is produced by eye movements, as this motion has been thought to be a nuisance," says Greg DeAngelis, [...] "But we found that the visual motion produced by our eye movements is not just a nuisance variable to be subtracted off; rather, our brains analyze these global patterns of image motion and use this to infer how our eyes have moved relative to the world."

    However, if the motion you are looking for is minute - like a bird in a distant tree or a black figure in a night landscape, it's easier to see motion by staring at one spot until something moves.

    • (Score: 2) by JoeMerchant on Wednesday April 08, @01:10AM

      by JoeMerchant (3937) on Wednesday April 08, @01:10AM (#1439220)

      > if the motion you are looking for is minute - like a bird in a distant tree or a black figure in a night landscape, it's easier to see motion by staring at one spot until something moves.

      There are a lot of "differential circuits" in and immediately behind the retina, specifically tuned to fire for changes - so yeah, if you can be still and get a constant image on the detection plate, you'll catch motion a lot more easily from that state than if you're scanning all over.

      Although that scanning motion also produces "stream" data which can be compared against known streams - like listening to a melody and hearing a wrong note, you can pick out the outlier from a dynamic time-series dataset as well - but that's not as sensitive as your "tiny differences" case.

      What's hard to wrap your head around is how compound eyes, like the ones flies have, process their visual fields. They have a lot of sensitivity to changes, and they can do optical navigation - but... differently.
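The "fire for changes" behavior of those differential circuits is essentially frame differencing - a minimal sketch (toy data, not a model of the retina): with a steady gaze the static background cancels out, and only the pixel where something moved survives the difference.

```python
def changed_pixels(prev, curr, threshold=10):
    """Return (row, col) coordinates where two grayscale frames
    (lists of lists of intensities) differ by more than `threshold`."""
    return [(r, c)
            for r, row in enumerate(curr)
            for c, v in enumerate(row)
            if abs(v - prev[r][c]) > threshold]

still = [[50] * 4 for _ in range(4)]     # constant image on the "retina"
moved = [row[:] for row in still]
moved[2][1] = 200                        # one "object" moves into this pixel

changed_pixels(still, moved)   # -> [(2, 1)]
```

This is why holding still beats scanning for catching tiny motion: the difference signal is zero everywhere except where something actually changed.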

      --
      🌻🌻🌻🌻 [google.com]
    • (Score: 1, Interesting) by Anonymous Coward on Wednesday April 08, @03:16AM (1 child)

      by Anonymous Coward on Wednesday April 08, @03:16AM (#1439232)

      > it's easier to see motion by staring at one spot until something moves.

      Hmmm, I thought I read somewhere (long ago) that the small, high-frequency eye motions act something like a moving screen saver on an old CRT, keeping the retinal elements from saturating (and then losing sensitivity)?

      • (Score: 1) by anubi on Wednesday April 08, @07:09AM

        by anubi (2828) on Wednesday April 08, @07:09AM (#1439242) Journal

        I got the idea it was much like the dithering we did on digitizers to increase the apparent resolution. We got several more bits by adding noise and doing statistical analysis.
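That dither trick is easy to demonstrate with a toy example (a made-up 1-bit "quantizer", not any particular ADC): the quantizer alone can't represent 0.3, but adding noise before quantizing and averaging many samples recovers the sub-LSB value statistically.

```python
import random

random.seed(0)

def quantize(v):
    """A 1-bit 'digitizer': everything is either 0.0 or 1.0."""
    return 1.0 if v >= 0.5 else 0.0

signal = 0.3

# Without dither the quantizer always reads 0.0 -- the value is lost.
without_dither = quantize(signal)

# With uniform dither in [-0.5, 0.5], the fraction of samples that
# quantize to 1.0 equals the signal level, so averaging recovers it.
with_dither = sum(quantize(signal + random.uniform(-0.5, 0.5))
                  for _ in range(100_000)) / 100_000   # approaches 0.3
```

Each doubling of the sample count buys roughly half a bit of extra effective resolution, which is where those "several more bits" came from.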

        --
        "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]