
posted by cmn32480 on Thursday August 13 2015, @10:33AM
from the don't-let-the-smoke-out-of-the-chips dept.

Tom's Hardware conducted an interview with Palmer Luckey, the founder of Oculus VR. The defining takeaway? Virtual reality needs as much graphics horsepower as can be thrown at it:

Tom's Hardware: If there was one challenge in VR that you had to overcome that you really wish wasn't an issue, which would it be?

Palmer Luckey: Probably the lack of unlimited GPU horsepower. It is one of the issues in VR that cannot be solved at this time. We can make our hardware as good as we want, our optics as sharp as we can, but at the end of the day we are reliant on how many flops the GPU can push and how high a framerate it can sustain. Right now, to get 90 frames per second [the minimum target framerate for Oculus VR] and very low latencies we need heaps of power, and we need to bump the quality of the graphics way down.

If we had unlimited GPU horsepower in everybody's computer, that would make our lives very much easier. Of course, that's not something we can control, and it's a problem that will be solved in due time.

TH: Isn't it okay to deal with the limited power we have today, because we're still in the stepping stones of VR technology?

PL: It's not just about the graphics being simple. You can have lots of objects in the virtual environment, and it can still cripple the experience. Yes, we are able to make immersive VR games with simpler graphics on this limited power, but the reality is that our ability to create what we are imagining is limited by the GPU horsepower available.

[...] The goal in the long run is not only to sell to people who buy game consoles, but also to people who buy mobile phones. You need to expand so that you can connect hundreds of millions of people to VR. It may not necessarily exist in the form of a phone dropping into a headset, but it will be mobile technologies -- mobile CPUs, mobile graphics cards, etc.

In the future, VR headsets are going to have all the render hardware on board, no longer being hardwired to a PC. A self-contained set of glasses is a whole other level of mainstream.


An article about AMD's VR hype/marketing at Gamescom 2015 lays out the "problem" of achieving "absolute immersion" in virtual reality:

Using [pixels per degree (PPD)], AMD calculated the resolution required as part of the recipe for truly immersive virtual reality. There are two parts of vision to consider: the part of human vision that we can see in 3D, and beyond that our peripheral vision. AMD's calculations take into account only the 3D segment; for good measure, you could expand them further to include peripheral vision. Horizontally, humans have a 120-degree range of 3D sight, with peripheral vision extending roughly 40 degrees further on each side, totaling 200 degrees of vision. Vertically, we are able to perceive up to 135 degrees in 3D.

With those numbers, and the resolution of the fovea (the most sensitive part of the eye), AMD calculated the required resolution. The fovea sees at about 60 PPD, which combined with 120 degrees of horizontal vision and 135 degrees of vertical vision, and multiplying that by two (because of two eyes) tallies up to a total of 116 megapixels. Yes, you read that right: 116 megapixels. The closest resolution by today's numbers is 16K, or around [132] megapixels.
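AMD's figure is easy to reproduce from the numbers quoted above (a back-of-envelope sketch; the 60 PPD and field-of-view values are AMD's, not measurements):

```python
# Reconstructing AMD's "116 megapixels" estimate from the quoted figures.
ppd = 60                 # pixels per degree resolved by the fovea
h_deg, v_deg = 120, 135  # 3D field of view, horizontal x vertical

per_eye = (ppd * h_deg) * (ppd * v_deg)  # 7200 x 8100 pixels per eye
total = 2 * per_eye                      # two eyes

print(total)             # 116,640,000 -> the quoted "116 megapixels"
print(15360 * 8640)      # 132,710,400 -> 16K, the closest standard resolution
```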

While 90 Hz (albeit with reduced frame stuttering and minimal latency) is considered a starting point for VR, AMD ultimately wants to reach 200 Hz. Compare that to commercially available 2560×1440 @ 144 Hz monitors or HDMI 2.0 recently adding the ability to transport 3840×2160 @ 60 Hz. The 2016 consumer version of Oculus Rift will use two 1080×1200 panels, for a resolution of 2160×1200 refreshed at 90 Hz. That's over 233 million pixels per second. 116 megapixels times 200 Hz is 23.2 billion pixels per second. It's interesting (but no surprise) that AMD's endgame target for VR would require almost exactly one hundred times the graphics performance of the GPU powering the Rift, which recommends an NVIDIA GTX 970 or AMD Radeon R9 290.
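The hundred-fold gap can be checked directly from the figures above (assuming the quoted panel sizes and refresh rates):

```python
# Pixel throughput: Rift CV1 panels versus AMD's long-term target.
rift_px_per_s = 2160 * 1200 * 90        # two 1080x1200 panels at 90 Hz
target_px_per_s = 116_000_000 * 200     # 116 MP at 200 Hz

print(rift_px_per_s)                    # 233,280,000 (the "over 233 million")
print(target_px_per_s)                  # 23,200,000,000
print(target_px_per_s / rift_px_per_s)  # ~99.5, i.e. almost exactly 100x
```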

In conclusion, today's consumer VR might deliver an experience that feels novel and worth $300+ to people, and thanks to higher framerates and innovations like virtual noses, it might not make them queasy. But if you have the patience to wait out 15 years or so of early adopters paying for stone/bronze-age VR, you can achieve "absolute immersion," also known as enlightenment.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: -1, Redundant) by Anonymous Coward on Thursday August 13 2015, @11:26AM (#222225)

    Eye resolution is not uniform. Couldn't you just follow the focus point, put more pixels there and interpolate the living shit out of everything outside that area?

  • (Score: 0) by Anonymous Coward on Thursday August 13 2015, @11:31AM (#222226)

    Yes! Let's make Oculus VR work exactly like idTech 5! MegaInterpolate the living shit out of it! No one will complain that rendering quality has gone to shit these days! Truly we live in a bad future!!!

  • (Score: 2) by ledow (5567) on Thursday August 13 2015, @12:07PM (#222239) Homepage

    So rather than a single flat high res screen, you want a tiny high-res screen surrounded seamlessly by low-res screens that's capable of following some of the fastest movement a human being makes, right in front of the most sensitive instrument the human body has, in such a way that it can't detect the movement visibly down to the resolution of your tiny high-res screen?

    You just made a difficult problem even closer to impossible.

    • (Score: 0) by Anonymous Coward on Thursday August 13 2015, @12:53PM (#222256)

      No, "interpolate the living shit out" means there are physical pixels there. The point is that in the near future (~20 years), actually computing all 116 million pixels at 200 fps will consume more power than you can feasibly fit into the ridiculously lightweight glasses people actually want to wear, so brute force might not be the smartest way to go.

      I highly doubt you have the required level of knowledge to throw that idea out without even attempting it. Most people don't have that even for much simpler issues, like standard code optimization; hence the mantra: profile, profile, profile.

    • (Score: 1) by islisis (2901) on Thursday August 13 2015, @02:14PM (#222305) Homepage

      http://www.roadtovr.com/fove-eye-tracking-vr-headset-hands-on-ces-2015/
      https://en.wikipedia.org/wiki/Foveated_imaging

      Eye tracking ought to be the next big target in input devices

    • (Score: 2) by acid andy (1683) on Thursday August 13 2015, @05:01PM (#222397) Homepage Journal

      And right there you've just ended any hope of people sharing half-decent screenshots or videos of their VR experience. The game review industry would be dead. I suppose for a screenshot a hotkey could momentarily force a full-quality render of the whole scene, but consider that a lot of gaming screenshot applications aren't natively supported by the game.

      Also, I can only see your idea working once the resolution in front of the eye's point of focus is much, much higher than what's currently available. On the current VR headsets you can easily make out individual pixels. When it's that noticeably blocky at the eye's point of focus, good luck reducing the res even further around it. I can see how it could work in principle with the kind of tech Palmer wants in the TFA.

      Also while I admire the ambition of striving for 90 fps, there are plenty of people loving VR at waaaay lower frame rates today. Once you get your VR legs it's not so bad.

      --
      Master of the science of the art of the science of art.
      • (Score: 2) by takyon (881) on Thursday August 13 2015, @06:10PM (#222434) Journal

        Actually I like the anon's idea. There are two things to consider: screenshots and live streaming. Screenshots won't do the VR experience justice, and if you do need a screenshot, you can temporarily drop a hundred frames in order to capture 1 image. For the live streaming/twitch/youtube crowd, the framerate can be set lower, or the Twitch user can buy a better GPU with STREAMTUBE CROWDGOLD.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 2) by Immerman (3985) on Thursday August 13 2015, @04:09PM (#222367)

    A (potentially) excellent idea. Render a region around the focal point at maximum detail, then render the wide field of view at much, much lower resolution and perform crude upscaling, potentially even on the display controller itself. Our eyes only have high enough resolution for reading in an angular spot about 1-2 degrees across (the fovea). And I have no doubt that gaze tracking could be an awesome auxiliary input as well; it seems some of the fringe VR headsets are already using it to good effect.

    The problem though is that the fovea moves *very* quickly between resting spots (saccades). Up to 900 degrees/second in humans, with a typical resting time in the range of only 20-200ms (20-30 when reading, though under certain conditions it can be half that). If you've got a 10ms lag time in your eye-scanning -> final render pipeline that's likely going to be VERY obvious. And probably horribly nauseating as the world constantly "pops" from a blurry color field into a clear image.

    On the other hand, if we can get lag times down below, I don't know, maybe a single millisecond? Then we could potentially radically reduce the rendering overhead, maybe even to the point that it's almost within reach of current technology. But I suspect that kind of lag reduction is going to be brutal to achieve.
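    To put this comment's lag numbers together: a rough sketch, using the 900 deg/s peak saccade speed above and assuming a ~2-degree fovea:

```python
# Gaze drift during the eye-tracking -> render lag, at peak saccade speed.
saccade_speed = 900   # degrees/second, peak human saccade velocity (per above)
fovea_deg = 2         # approximate width of the high-acuity region (assumed)

for lag_ms in (10, 1):
    drift = saccade_speed * lag_ms / 1000  # degrees moved before the re-render
    print(lag_ms, drift, drift > fovea_deg)
# 10 ms lag -> 9.0 degrees of drift: the gaze has long left the sharp patch
#  1 ms lag -> 0.9 degrees: comparable to the fovea itself
```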

    If nothing else though, it does offer a tantalizing goal for the future - at some point, as resolution continues to improve and lag continues to fall, we'll hit a point where the necessary pixel-pushing horsepower suddenly falls by probably something around a thousandfold. And high-end VR will go from requiring a hefty high-performance rendering station to something that can be embedded in the headset itself in a single generation. And I suspect THEN we'll see VR really take off among the masses.

    • (Score: 0) by Anonymous Coward on Thursday August 13 2015, @04:51PM (#222391)

      Now _this_ is the answer I was looking for.

      My quick thoughts are:

      - If you go all the way with that idea, you might be able to obtain ridiculously low latency, because the needed pixel count is quite low indeed. The catch is that it absolutely requires such latency in order to work at all. You might even need a display tech that can physically move the sharp spot around, because 16K@0.5ms sounds like a pretty tough requirement no matter how low-level the implementation is.

      - The opposite end is where you render plenty of extra area at high resolution and the rest at a "sufficient" level. If you moved immediately from one extreme of the view to the other you'd see some blur, but smaller motions wouldn't cause it. This still needs maybe 1/10 of the pixels of full rendering.

      Skipping focus-following, there's another hack: render to a spheremap and use custom hardware very near the display to grid-warp it on the fly at very high fps, detaching it from the "actual" render rate. Maybe not that sensible at 60 fps, but it could be something at higher rates.

      • (Score: 3, Interesting) by Immerman (3985) on Friday August 14 2015, @12:00AM (#222594)

        UPDATE: After typing all this out I remembered they changed axes when they introduced 4K, so a 16K screen would only have 8x the linear pixels of 1080p, not 16x, and all the calculations I did would thus be for a 32K screen instead. I don't want to redo the math, so instead I'm changing all references from 16K to 32K....

        I don't think there's actually much of a catch - I'm assuming you mean 32K in the "4K UHD TV format" sense, and sure, that would be a ridiculous number of pixels to push every 0.5ms - what, 530M pixels, 2,000 times a second? I want the hardware with that kind of pixel-pushing power!

        But there's a much easier way to do it, though we would need specialized screen-control electronics. Physically moving a sub-screen fast enough is unlikely to be viable thanks to that 900°/second saccade speed, but we don't need to, because current LCD refresh behavior is a historical artifact rather than a technological limitation. Let's say we have a physical 32K-equivalent display in the glasses; the trick is twofold:

        1) At the beginning of each frame we render the complete FOV at low resolution, *maybe* HD, and we could likely get away with even lower. Even if it's stretched across a 200°x100° FOV, most of our retina has pretty lousy resolution, so it's a non-issue. Send that to the display, and have a simple ultra-low-lag upscaling circuit stretch that low-resolution framebuffer across the entire display. Even linear interpolation would likely be more than good enough; we might even be able to get away with giant square "superpixels". Either way, if it's implemented in the circuitry responsible for communicating the framebuffer to the display matrix, it should incur minimal overhead.

        2) Then, multiple times per frame, we render a much smaller "fovea-view" window at high resolution. None of the camera or scenery information changes; we're literally just rendering a tiny portion of the exact same image at much higher resolution. If each eye gets half a billion pixels evenly spread across the roughly 20,000 square degrees covering the FOV, covering the maybe 4 square degrees of the fovea requires 5,000x fewer pixels, or about 100,000 total, maybe a 370x280 pixel block. We then send *just* this tiny updated patch of pixels to the display, and only that small sub-block of the display gets updated. This is already a common feature in the control circuitry of e-ink displays, where power conservation is a high priority.

        That's it. If we render 20 fresh fovea-views per frame (2M pixels) plus the single HD "background" (2M pixels), we've still rendered and transmitted around 132x fewer pixels than needed to fully render the 32K display just once. If we assume a frame rate of 100 FPS, we've got a rate of 2000 "fovea views per second", or 0.5ms per update. Delivering an effectively 32K, 100FPS display at a pixel rate equivalent to driving a 1080p display at 200FPS.
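        The budget sketched above can be sanity-checked numerically (all figures are this comment's assumptions: a "32K" panel, a 200x100-degree FOV, ~4 square degrees of fovea coverage, 20 fovea views per frame):

```python
# Foveated-rendering pixel budget, using the comment's assumed figures.
full_display = 30720 * 17280     # "32K" panel: ~531M pixels
fov_sq_deg = 200 * 100           # ~20,000 square degrees of FOV
fovea_sq_deg = 4                 # ~2 x 2 degrees of high-acuity vision

fovea_patch = full_display * fovea_sq_deg / fov_sq_deg  # ~106,000 px (~370x280)
background = 1920 * 1080         # one low-res full-FOV pass per frame
per_frame = background + 20 * fovea_patch  # 20 fovea patches per frame

print(full_display / per_frame)  # ~126x fewer pixels than a full 32K render
print(per_frame * 100)           # ~420M px/s at 100 FPS...
print(1920 * 1080 * 200)         # ...close to 1080p at 200 Hz (414,720,000)
```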

    • (Score: 2) by SlimmPickens (1056) on Thursday August 13 2015, @10:08PM (#222553)

      I don't think that moving the scene in sync with the saccades would work. As I understand it, the whole point of the saccades is to drag 'feature detectors' across the scene, feature detectors being ganglion cells specialised for detecting left/right movement, light/dark edges, etc.

      • (Score: 2) by Immerman (3985) on Friday August 14 2015, @12:51AM (#222608)

        Hmm, I remember it differently: that very little processing occurs *during* the saccade. That may only be true on a conscious level, though. Worth doing more research.

        • (Score: 2) by SlimmPickens (1056) on Friday August 14 2015, @06:04AM (#222707)

          You're right that very little processing occurs *during* the saccade (at least I think I read that in Jeff Hawkins' book). I just mean that the saccade is a functional thing and you probably don't want the VR to track the saccades just as the real world doesn't.

          • (Score: 2) by Immerman (3985) on Friday August 14 2015, @02:45PM (#222843)

            In that case I'm not sure I understand your objection. It's not like you'd be changing the view in any way in response to the saccades (that would probably cause issues), you'd just be re-rendering the exact same image at higher resolution in the small spot you're actually looking at. As I understand it even the saccade "target correction" mechanism is entirely internal with no reference to retinal input.