posted by on Friday May 19 2017, @10:55PM
from the oculus-barf-bag-accessory dept.

Focal surface displays mimic the way our eyes naturally focus on objects of varying depths. Rather than trying to add more and more focus areas to get the same degree of depth, this new approach changes the way light enters the display, using spatial light modulators (SLMs) to bend the headset's focus around 3D objects, increasing depth and maximizing the amount of space represented simultaneously.

All of this adds up to improved image sharpness and a more natural viewing experience in VR.
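A minimal sketch of the core idea, assuming nothing about the paper's actual algorithm: convert a renderer's per-pixel depth map into the per-pixel optical power (in diopters) that an SLM-driven element would need to present each pixel at roughly its scene depth. All names and constants here are illustrative, not from the paper.

    import numpy as np

    def depth_to_focal_power(depth_m: np.ndarray) -> np.ndarray:
        """Per-pixel optical power (diopters) to place each pixel at its depth."""
        # Optical power is the reciprocal of focal distance: P = 1 / d.
        return 1.0 / np.clip(depth_m, 0.1, 100.0)   # clamp to 10 cm .. 100 m

    # Example: a 4x4 depth map with a near object in one corner.
    depth = np.full((4, 4), 5.0)   # background at 5 m
    depth[:2, :2] = 0.5            # near object at 0.5 m
    print(depth_to_focal_power(depth))   # 2.0 diopters near, 0.2 diopters far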

"Quite frankly, one of the reasons this project ran as long as it did is that we did a bunch of things wrong the first time around," jokes Research Scientist Fix. "Manipulating focus isn't quite the same as modulating intensity or other more usual tasks in computational displays, and it took us a while to get to the correct mathematical formulation that finally brought everything together. Our overall motivation was to do things the 'right' way—solid engineering combined with the math and algorithms to back it up. We weren't going to be happy with something that only worked on paper or a hacked together prototype that didn't have any rigorous explanation of why it worked."

The paper (PDF).

-- submitted from IRC


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 2) by jmorris (4844) on Saturday May 20 2017, @12:20AM (#512454) (3 children)

    Looking at the chart in the PDF, I see two likely prospects and some dark horses.

    Their Focal Surface Display needs new cutting-edge tech to pan out, and it has to get small enough for a head mount and cheap enough, quickly enough. None of that is impossible, but a stall anywhere in that chain means it probably loses the race against Moore's Law to varifocal.

    Meanwhile, varifocal needs three things. First, the eye tracker that knows where you are focusing has to be rock solid, though most approaches require that part anyway. Second, it needs electrically driven focus, i.e. existing autofocus tech. Third, it needs faster GPU performance. In other words, Moore's Law and software are sufficient to provide everything needed, and any solution that can be implemented almost entirely in software, perhaps with faster chips, is generally the way to bet. Plus you don't have to keep replacing the headsets, just push new drivers. (A sketch of that loop follows this comment.)

    The dark horses are light field and hologram; one of those could suddenly make a breakthrough, but it is hard to even estimate the odds, so neither is a good bet.
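    A minimal sketch of the varifocal loop described above, under loudly stated assumptions: tracker, lens, and renderer are hypothetical stand-ins for an eye-tracker API, a tunable-lens driver, and the GPU renderer (none is a real product interface), and the vergence-based depth estimate is only a small-angle approximation.

        import time

        def vergence_depth(left_angle, right_angle, ipd_m=0.063):
            """Estimate fixation depth (m) from how far the two eyes converge."""
            angle = abs(left_angle - right_angle)   # convergence angle (radians)
            return ipd_m / max(angle, 1e-6)         # small-angle approximation

        def varifocal_loop(tracker, lens, renderer):
            # tracker, lens, and renderer are hypothetical interfaces (assumptions).
            while True:
                left, right = tracker.gaze_angles()   # rock-solid eye tracking
                depth = vergence_depth(left, right)   # where the user is focusing
                lens.set_power(1.0 / depth)           # drive autofocus optics (diopters)
                renderer.render(focus_depth=depth)    # blur out-of-focus content in software
                time.sleep(1.0 / 90.0)                # one frame at 90 Hz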

  • (Score: 0) by Anonymous Coward on Saturday May 20 2017, @03:39AM (#512517)

    > Meanwhile, varifocal needs three things. First, the eye tracker that knows where you are focusing has to be rock solid, though most approaches require that part anyway. Second, it needs electrically driven focus, i.e. existing autofocus tech. Third, it needs faster GPU performance. In other words, Moore's Law and software are sufficient to provide everything needed.

    There's only a minor defect, easy to surgically fix: one just needs a refractory skull and a heat-resistant brain to keep functioning at the 120°C the GPU reaches under the load imposed by the software.

    Besides, Moore's Law hasn't passed the Senate yet!

  • (Score: 2) by Immerman (3985) on Saturday May 20 2017, @04:51PM (#512647) (1 child)

    > First, the eye tracker that knows where you are focusing has to be rock solid
    That's probably far more difficult than you would expect, considering that the eye is constantly darting around at high speed. And it would have to reliably determine whether, for example, you're focusing on the face of the person in front of you or on the distant building visible just past their ear. Get it wrong and you'll plague the user with nausea and blurriness.

    Meanwhile, from the video it sounds like this technique generates an approximate depth map across the rendered image and inserts some sort of adjustable optical material between the image and the user that varies the focal distance across the image, so that whatever you're looking at sits on (approximately) the right focal plane. I don't know that the focal distance would be much more accurate for my "past the ear" example, but at least it wouldn't be completely blurred out.
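    A rough sketch of the decomposition described above, with the caveat that the paper fits smooth focal surfaces with a proper optimization, while this toy version merely clusters per-pixel optical power into a few groups; scikit-learn's KMeans is an assumed dependency, not anything the paper uses.

        import numpy as np
        from sklearn.cluster import KMeans

        def focal_surfaces(depth_m: np.ndarray, n_surfaces: int = 3):
            """Assign every pixel of a depth map to one of a few focal surfaces."""
            power = 1.0 / np.clip(depth_m, 0.1, 100.0)   # work in diopters
            km = KMeans(n_clusters=n_surfaces, n_init=10).fit(power.reshape(-1, 1))
            labels = km.labels_.reshape(depth_m.shape)   # surface index per pixel
            planes = km.cluster_centers_.ravel()         # optical power of each surface
            return labels, planes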

    • (Score: 2) by jmorris (4844) on Saturday May 20 2017, @06:40PM (#512667)

      Doesn't need to know where you are looking, only at what depth. They could illuminate the retina with a dim IR pattern and then image it; the autofocus setting needed to see that pattern sharply gives them the information they need. For that matter, the retina itself is probably a good enough focus target. Remember that lenses work in both directions: if the one in your eye is changing focus to look at different things, a sensor looking into the eye can follow that. And the current focal surface display is only trying for three levels (close, near, far), so if you can detect better than that, you can emulate far more in software than you can layer on as extra planes in hardware, because of the quickly escalating refresh-rate problem.
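      A sketch of the contrast-autofocus idea above: sweep a sensor's focus while imaging the retina and take the sharpest setting as the eye's current accommodation. capture_retina is a hypothetical IR-camera callback, and the gradient-variance focus metric is a standard choice, not anything from the paper.

          import numpy as np

          def sharpness(img: np.ndarray) -> float:
              """Variance of the image gradient, a standard focus metric."""
              gy, gx = np.gradient(img.astype(float))
              return float(np.var(gx) + np.var(gy))

          def eye_focus_diopters(capture_retina, sweep=np.linspace(0.0, 4.0, 41)):
              """Return the focus setting (diopters) where the retinal image is sharpest."""
              scores = [sharpness(capture_retina(power)) for power in sweep]
              return float(sweep[int(np.argmax(scores))])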

      In the far future, of course, light fields and holograms will rule the day. They are obviously superior in every way except our current ability to implement them cheaply, lightly, and at high resolution. Whether that's a matter of little details or a decades-long tease is hard to say. Remember how we were going to have terabyte-capacity holographic optical media? Several attempts almost shipped something before going bankrupt... still impractical twenty years on, though. The promo materials are so old that storing a terabyte sounded really awesome!