
posted by on Friday May 19 2017, @10:55PM
from the oculus-barf-bag-accessory dept.

Focal surface displays mimic the way our eyes naturally focus on objects of varying depths. Rather than trying to add more and more focus areas to get the same degree of depth, this new approach changes the way light enters the display, using spatial light modulators (SLMs) to bend the headset's focus around 3D objects—increasing depth and maximizing the amount of space represented simultaneously.

All of this adds up to improved image sharpness and a more natural viewing experience in VR.
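
As a rough illustration of the principle (a sketch, not the paper's actual solver): a phase-only SLM can behave like a lens whose focal length varies across its aperture, and the focal length each region needs can be read off a scene depth map via the thin-lens equation. All dimensions and the per-pixel shortcut below are illustrative assumptions.

```python
import numpy as np

def slm_phase_from_depth(depth_m, aperture_m=0.02, image_dist_m=0.05,
                         wavelength_m=550e-9):
    """Toy focal-surface pattern: pick an effective focal length per pixel
    from the scene depth map, then evaluate the quadratic thin-lens phase
    phi(r) = -pi * r^2 / (lambda * f) at that pixel."""
    h, w = depth_m.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = (xs / w - 0.5) * aperture_m      # physical SLM coordinates (meters)
    y = (ys / h - 0.5) * aperture_m
    f = 1.0 / (1.0 / depth_m + 1.0 / image_dist_m)   # thin-lens equation
    phase = -np.pi * (x**2 + y**2) / (wavelength_m * f)
    return np.mod(phase, 2.0 * np.pi)    # phase-only SLMs wrap modulo 2*pi

# Near object (0.5 m) against a far background (5 m):
depth = np.full((480, 640), 5.0)
depth[160:320, 200:440] = 0.5
pattern = slm_phase_from_depth(depth)
```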

"Quite frankly, one of the reasons this project ran as long as it did is that we did a bunch of things wrong the first time around," jokes Research Scientist Fix. "Manipulating focus isn't quite the same as modulating intensity or other more usual tasks in computational displays, and it took us a while to get to the correct mathematical formulation that finally brought everything together. Our overall motivation was to do things the 'right' way—solid engineering combined with the math and algorithms to back it up. We weren't going to be happy with something that only worked on paper or a hacked together prototype that didn't have any rigorous explanation of why it worked."

The paper (PDF).

-- submitted from IRC


Original Submission

 
  • (Score: 2) by takyon on Friday May 19 2017, @10:56PM (6 children)

    by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Friday May 19 2017, @10:56PM (#512428) Journal

    It sounds like implementing a focal surface display will eliminate a major source of eye strain. Soon, any headset that does not have an adequate method of retinal blur correction will be garbage.

    focal surface displays: resolution: high; FOV: narrow; eye box: narrow; DOF: wide; eye tracking required: yes; adaptive optics: yes; content-dependent optimization: yes; image quality: high; retinal blur: near-correct

    [...] Field of view, eye box dimensions, and image quality depend on implementation choices: listed values correspond to the performance of prototypes in the cited publications, being indicative of current display technology limitations.

    [...] Today’s VR HMDs exhibit FOVs around 100 degrees with resolutions better than 5 cycles per degree (cpd). Emerging designs must ultimately support such specifications and beyond.

    Don't accept anything less than 180 degrees, preferably 210 [roadtovr.com] (comparison [roadtovr.com]). This needn't be computationally taxing, since less detail could be rendered at the edges. And if a "focal surface display" already includes eye tracking in order to approximate retinal blur, the eye tracking could also be used to render less detail at spots where the eyes are not pointed (this "foveated rendering" [theverge.com] article includes an image right at the top that demonstrates the concept very well).
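
    Roughly, the idea in (completely made-up) numbers: shading detail stays at full quality inside the ~5 degree fovea and falls off with angular distance from wherever the tracker says you're looking.

```python
import math

def shading_fraction(eccentricity_deg, fovea_deg=5.0, falloff_deg=30.0):
    """Fraction of full shading detail for a pixel at a given angular
    distance from the tracked gaze point. The thresholds are illustrative,
    not taken from any shipping foveated renderer."""
    if eccentricity_deg <= fovea_deg:
        return 1.0                        # full detail inside the fovea
    # Smooth roll-off toward a cheap floor for the far periphery.
    return max(0.1, math.exp(-(eccentricity_deg - fovea_deg) / falloff_deg))

# A pixel 40 degrees off-gaze needs only ~30% of full shading effort.
print(shading_fraction(40.0))
```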

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by julian on Friday May 19 2017, @11:01PM

      by julian (6003) Subscriber Badge on Friday May 19 2017, @11:01PM (#512429)

      I'm hoping this makes VR usable for me. Right now a few minutes makes me feel like I've held my eyes crossed for hours. It causes a weird burning soreness.

      At least I don't get the nausea that some people report. I bet that'll be harder to fix.

    • (Score: 0) by Anonymous Coward on Friday May 19 2017, @11:06PM (4 children)

      by Anonymous Coward on Friday May 19 2017, @11:06PM (#512432)

      On the other side of the coin, as an inventor, you don't want early adopters. When you invent something in a stroke of insight, your first instinct might be to shout from the rooftops: look at this awesome thing I just invented! But if you can't explain how or why it works, or you can't even replicate the invention reliably, your early adopters will hate you. If it takes you 2-3 years to develop the theory behind your invention, everyone will hate your guts. You will become known as that idiot loser who can't do anything right.

      • (Score: 2) by takyon on Friday May 19 2017, @11:41PM

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Friday May 19 2017, @11:41PM (#512437) Journal

        That reminded me to check the EmDrive news.

        There's no proven EmDrive yet.
        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by Immerman on Saturday May 20 2017, @12:12AM (2 children)

        by Immerman (3985) on Saturday May 20 2017, @12:12AM (#512446)

        You're conflating two things: functionality and understanding.

        If you can build something that functions reliably, plenty of people will be happy to buy it. You don't have to be able to explain how it works - even if you could, they probably wouldn't understand (How many people actually understand any of the physics or information theory that makes a smartphone work? Most barely understand enough applied software skills to use the thing.)

        There are times when a solid theoretical understanding is necessary to make things function reliably - say, the notoriously unreliable cold fusion results - but it's actually a lot less common than you'd think, especially if you're building upon existing technology or biology, in which case you can accomplish an incredible amount dealing with a bunch of "black boxes" that just need to be assembled or modified.

        Of course, without theory it can be difficult to impossible to get a patent, or to improve on your invention substantially, but that has nothing to do with selling the things.

        • (Score: 0) by Anonymous Coward on Saturday May 20 2017, @12:45AM (1 child)

          by Anonymous Coward on Saturday May 20 2017, @12:45AM (#512459)

          You forgot the worst possible scenario: the invention doesn't really do anything. It only looks like it works sometimes. When you finally produce a theory to explain why it works, you end up proving it can't work. When I was in high school, I did a science fair project which was useless according to a library book that I should have read. When I was in graduate school, I was utterly convinced of the validity of my research until I personally disproved the entire premise of my own final thesis. Those were just two of my many mistakes, and life has been one long exercise in retracting my own ideas. Only now am I accomplishing great things! .... or so I think. Years of bitter experience tell me everything I'm doing now is probably wrong.

          • (Score: 0) by Anonymous Coward on Saturday May 20 2017, @01:44AM

            by Anonymous Coward on Saturday May 20 2017, @01:44AM (#512474)

            Often Wrong Soong, is that you, old man?

  • (Score: 2) by kaszz on Saturday May 20 2017, @12:11AM (2 children)

    by kaszz (4211) on Saturday May 20 2017, @12:11AM (#512445) Journal

    Anyone with a maths TD;LR how they accomplished this?
    And how computationally intensive is it, i.e. flops per image?

    • (Score: 2, Insightful) by Anonymous Coward on Saturday May 20 2017, @12:17AM

      by Anonymous Coward on Saturday May 20 2017, @12:17AM (#512449)

      Anyone with a maths TD;LR how they accomplished this?

      TD;LR: Too Dumb; Let's Read?

    • (Score: 4, Funny) by bob_super on Saturday May 20 2017, @12:19AM

      by bob_super (1357) on Saturday May 20 2017, @12:19AM (#512452)

      It depends on the direction of motion and whether you selected "A cups" or "DD cups".
      Hair rendering being very taxing, people with slower machines should select "shaved".

  • (Score: 2) by jmorris on Saturday May 20 2017, @12:20AM (3 children)

    by jmorris (4844) on Saturday May 20 2017, @12:20AM (#512454)

    Looking at the chart in the PDF I see two likely prospects and some dark horses.

    Their Focal Surface Display needs new cutting-edge tech to pan out: it has to get small enough for a head mount and cheap enough, quickly enough. None of that is impossible, but a stall anywhere in that chain means it probably loses the race against Moore's Law to varifocal.

    Meanwhile, varifocal needs three things. First, the eye tracker that tells it where you are focusing has to be rock solid - but most approaches require that part anyway. Second, it needs existing electrically driven focus, i.e. autofocus tech. Third, it needs faster GPU performance. In other words, Moore's Law and software are sufficient to provide everything needed. Any solution that can be implemented almost entirely in software, plus perhaps faster chips, is generally the way to bet. Plus you don't have to keep replacing the headsets, just push new drivers.
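
    As a sketch of the control loop that implies (every name below is hypothetical, not any vendor's API), the software side really is just "read the depth under the gaze point, drive the lens there", plus some smoothing so saccade noise doesn't jerk the lens around:

```python
class VarifocalDriver:
    """Hypothetical varifocal loop: each frame, read the scene depth under
    the tracked gaze point and ease a motorized (autofocus-style) lens
    toward that distance."""
    def __init__(self, set_lens_focus_m, alpha=0.2):
        self.set_lens_focus_m = set_lens_focus_m  # actuator callback (assumed)
        self.alpha = alpha                        # per-frame low-pass factor
        self.focus_m = 2.0                        # start at a comfortable 2 m

    def step(self, gaze_xy, depth_buffer):
        x, y = gaze_xy                            # pixel coords from eye tracker
        target = min(max(depth_buffer[y][x], 0.25), 100.0)  # clamp to lens range
        self.focus_m += self.alpha * (target - self.focus_m)
        self.set_lens_focus_m(self.focus_m)
```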

    The dark horses are lightfield and hologram; one of those could suddenly make a breakthrough, but it is hard to even estimate the odds, so not a good bet.

    • (Score: 0) by Anonymous Coward on Saturday May 20 2017, @03:39AM

      by Anonymous Coward on Saturday May 20 2017, @03:39AM (#512517)

      Meanwhile, varifocal needs three things. First, the eye tracker that tells it where you are focusing has to be rock solid - but most approaches require that part anyway. Second, it needs existing electrically driven focus, i.e. autofocus tech. Third, it needs faster GPU performance. In other words, Moore's Law and software are sufficient to provide everything needed.

      There's only a minor defect, easy to surgically fix - one just needs a refractory skull and a heat-resistant brain to keep functioning at the 120°C the GPU reaches under the load that software imposes.

      Besides, Moore's Law hasn't yet passed the Senate!

    • (Score: 2) by Immerman on Saturday May 20 2017, @04:51PM (1 child)

      by Immerman (3985) on Saturday May 20 2017, @04:51PM (#512647)

      > the eye tracker that tells it where you are focusing has to be rock solid
      That's probably far more difficult than you would expect, considering that the eye is constantly darting around at high speed. And it would have to reliably determine, for example, whether you're focusing on the face of the person in front of you or the distant building visible just past their ear. Get it wrong and you'll plague the user with nausea and blurriness.

      Meanwhile, from the video it sounds like this technique generates an approximate depth map across the rendered image and inserts some sort of adjustable optical material between the image and the user that varies the focal distance across the image, so that whatever you're looking at is on (approximately) the right focal plane. I don't know that the focal distance would be much more accurate for my "past the ear" example, but at least it wouldn't be completely blurred out.
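
      Something like this toy version, I'd guess; the paper presumably solves a proper optimization to fit smooth focal surfaces, so a Gaussian blur in diopters is only a stand-in for "bend the focal distance smoothly around the depth map":

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def focal_surface(depth_map_m, sigma_px=25):
    """Toy focal-surface approximation: convert depth to diopters (where
    focus error is roughly symmetric), smooth until the result is something
    an optical element could plausibly realize, and convert back."""
    diopters = 1.0 / depth_map_m
    return 1.0 / gaussian_filter(diopters, sigma=sigma_px)

# Face at 1 m in the center, building at 50 m past the ear: the surface
# transitions smoothly between the two instead of jumping.
depth = np.full((480, 640), 50.0)
depth[100:380, 220:420] = 1.0
surface = focal_surface(depth)
```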

      • (Score: 2) by jmorris on Saturday May 20 2017, @06:40PM

        by jmorris (4844) on Saturday May 20 2017, @06:40PM (#512667)

        Doesn't need to know where you are looking, only at what depth. They could illuminate the retina with a dim pattern in IR and then image it; the autofocus setting needed to see it sharply gives them the information they need. For that matter, the retina itself is probably enough of a focus target. Remember that lenses work in both directions: if the one in your eye is changing focus to look at different things, a sensor looking into the eye can follow that. And the current focal surface display is only trying for three levels - close, near, far - so if you can detect better than that, you can emulate far more levels in software than you can layer on as extra planes in hardware, because of the quickly escalating refresh-rate problem.
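
        The software-emulation half is just linear depth blending: split each pixel's intensity between the two planes that bracket its depth, weighted in diopters. The plane distances here are my guesses, not the paper's:

```python
def plane_weights(depth_m, planes_m=(0.3, 1.0, 10.0)):
    """Linear depth blending across a few fixed focal planes (close, near,
    far): each pixel is split between the two planes bracketing its depth,
    with weights computed in diopters (1/distance)."""
    d = 1.0 / depth_m
    dpt = [1.0 / p for p in planes_m]      # plane depths in diopters, descending
    weights = [0.0] * len(planes_m)
    if d >= dpt[0]:                        # nearer than the closest plane
        weights[0] = 1.0
    elif d <= dpt[-1]:                     # beyond the farthest plane
        weights[-1] = 1.0
    else:
        for i in range(len(dpt) - 1):
            if dpt[i] >= d >= dpt[i + 1]:
                t = (dpt[i] - d) / (dpt[i] - dpt[i + 1])
                weights[i], weights[i + 1] = 1.0 - t, t
                break
    return weights

# A pixel at 0.5 m lands between the 0.3 m and 1.0 m planes:
print(plane_weights(0.5))   # ~[0.43, 0.57, 0.0]
```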

        In the far future, of course, lightfields and holograms will rule the day. They are obviously superior in every way except our current ability to implement them cheap, light, and high resolution. Whether that's a little detail or a decades-long tease is hard to say. Remember how we were going to have terabyte-capacity holographic optical media? Several attempts have almost shipped something before going bankrupt... still impractical twenty years on, though. The promo materials are so old that storing a terabyte sounded really awesome!

  • (Score: 0) by Anonymous Coward on Saturday May 20 2017, @02:25AM (1 child)

    by Anonymous Coward on Saturday May 20 2017, @02:25AM (#512485)

    Tried it, and the damn (autolaunched) xpdf crashed. Then tried again with wget and got a 403.

    I smell that FB doesn't like downloaders, only browsers they can spy on. And once you're on the blacklist, forget it.

    • (Score: 2) by kaszz on Saturday May 20 2017, @11:09AM

      by kaszz (4211) on Saturday May 20 2017, @11:09AM (#512579) Journal

      I downloaded it. The catch is that the PDF shows up before it's completely downloaded, so you have to wait until it's fully downloaded and interpreted - which takes maybe 5 minutes - before you start the "save as" operation. The output file should be 90 553 109 bytes; it's verified to work with xpdf, though that needs 28 MB of normal memory and 21 MB resident. And yeah, it wasn't straightforward to save.
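
      If the 403 really is User-Agent filtering (pure guesswork on my part), something like this might fetch it and check the byte count from above; the URL is a placeholder for the story's actual link:

```python
import os
import urllib.request

PDF_URL = "https://example.com/paper.pdf"   # placeholder: use the story's link
EXPECTED = 90_553_109                       # byte count reported above

# wget reportedly gets a 403, so send a browser-like User-Agent.
req = urllib.request.Request(PDF_URL, headers={"User-Agent": "Mozilla/5.0"})
with urllib.request.urlopen(req) as resp, open("paper.pdf", "wb") as out:
    out.write(resp.read())

size = os.path.getsize("paper.pdf")
print("OK" if size == EXPECTED else f"truncated? {size} of {EXPECTED} bytes")
```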

  • (Score: 0) by Anonymous Coward on Saturday May 20 2017, @08:45AM (1 child)

    by Anonymous Coward on Saturday May 20 2017, @08:45AM (#512552)

    as long as this is a Failbook company.

    • (Score: 2) by kaszz on Saturday May 20 2017, @11:17AM

      by kaszz (4211) on Saturday May 20 2017, @11:17AM (#512581) Journal

      Or learn to implement it yourself, which enables you to tell farcebook to f-ck -ff ;)
