
posted by janrinok on Thursday July 14 2016, @03:25PM   Printer-friendly
from the now-there's-a-thought dept.

Graphics card manufacturers like Nvidia and AMD have gone to great pains recently to point out that to experience Virtual Reality with a VR headset properly, you need a GPU capable of pushing at least a steady 90 FPS per eye, or a total of at least 180 FPS for both eyes, and at high resolutions to boot. This of course requires the purchase of the latest, greatest high-end GPUs made by these manufacturers, on top of the money you are already plonking down for your new VR headset and a good, fast gaming-class PC.

This raises an interesting question: virtually every LCD/LED TV manufactured in the last 5-6 years has a "Realtime Motion Compensation" feature built in. This is the not-so-new-at-all technique of taking, say, a football match broadcast live at 30 FPS and algorithmically generating extra in-between frames in realtime, giving you a hypersmooth 200-400 Hz image on the TV set with no visible stutter or strobing whatsoever. The technique is cheap enough to include in virtually every TV set at every price level (so the hardware that performs the realtime motion compensation cannot cost more than a few dollars in total), and it should, in theory, work just fine with the output of a GPU driving a VR headset.
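The rate-multiplication idea described above can be sketched in a few lines. This is a deliberately naive stand-in: real TV motion compensation estimates per-block motion vectors and warps pixels along them, whereas a plain linear blend like this ghosts on fast motion. The function name and numbers are illustrative only.

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_between):
    """Generate n_between in-between frames by linear blending.

    A real motion-compensation chip estimates motion vectors and warps
    pixels along them; this simple blend only illustrates how a low
    source frame rate is multiplied up to a high output rate.
    """
    frames = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)  # interpolation weight in (0, 1)
        frames.append((1 - t) * frame_a + t * frame_b)
    return frames

# A 30 FPS source with 5 in-between frames per original pair yields a
# 180 FPS output stream (each source frame plus its 5 in-betweens).
src_fps = 30
n_between = 5
out_fps = src_fps * (n_between + 1)
```

The same arithmetic is how a TV turns a 30 Hz broadcast into a 200-400 Hz panel feed: the source rate is simply multiplied by one plus the number of synthesized frames per pair.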

Now suppose you have an entry-level or mid-range GPU capable of pushing only 40-60 FPS in a VR application (a measly 20-30 FPS per eye, making for a truly terrible VR experience). You could, in theory, add some cheap motion compensation circuitry to that GPU and get 100-200 FPS or more per eye. Heck, you might even be able to program a few GPU cores to run the motion compensation as a realtime shader while the rest of the GPU renders the game or VR experience.

So my question: Why don't GPUs for VR use Realtime Motion Compensation techniques to increase the FPS pushed into the VR headset? Would this not make far more financial sense for the average VR user than having to buy a monstrously powerful GPU to experience VR at all?


Original Submission

 
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Thursday July 14 2016, @04:15PM

    by Anonymous Coward on Thursday July 14 2016, @04:15PM (#374398)

    You want 90 FPS per eye, but to me that is not 180 FPS total, just 90 FPS, since each eye only uses half the screen. Why two screens, one for each eye, when Google Cardboard showed you can do it with one? Even if you did have two screens, like a Google Glass for each eye, you'd split the signal in half, one half per eye.

    So half of 1920 x 1080 would be 960 x 1080 for each eye.
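The split described above, written out (the 90 Hz figure is the commonly quoted headset refresh rate, not a number from this comment):

```python
# One 1920 x 1080 panel shared between two eyes, split down the middle.
panel_w, panel_h = 1920, 1080
eye_w, eye_h = panel_w // 2, panel_h  # each eye sees 960 x 1080

# The panel refreshes at one rate (say 90 Hz) no matter how many eyes
# share it, so the GPU produces 90 stereo frames per second, not 180.
refresh_hz = 90
stereo_frames_per_second = refresh_hz
```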

  • (Score: 2, Informative) by fishybell on Thursday July 14 2016, @04:38PM

    by fishybell (3156) on Thursday July 14 2016, @04:38PM (#374409)

    It's not that they render on two screens (I'm fairly certain none do), it's that they have to render two scenes. The different scene seen in each eye is exactly as complicated to render on the same screen as it would be to render on two different screens. The extra scene can also be more complicated than just rendering the same scene with twice the resolution.

    • (Score: 2) by takyon on Thursday July 14 2016, @11:08PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday July 14 2016, @11:08PM (#374550) Journal

      The different scene seen in each eye is exactly as complicated to render on the same screen as it would be to render on two different screens. The extra scene can also be more complicated than just rendering the same scene with twice the resolution.

      That's just not true anymore. For example:

      http://wccftech.com/pc-vrps-vr-gap-bigger-nvidias-pascal/ [wccftech.com]

      The GTX 1080 & GTX 1070 GPUs are not only more powerful, they also come with specific VR optimizations. Let’s start with the basics: Virtual Reality still works on the principle of stereoscopy, meaning that two images (one for each eye) need to be rendered thus requiring twice the power. That always struck me as a huge limitation.

      NVIDIA’s Pascal cards will fix this longstanding issue thanks to a technology called Single Pass Stereo.

      Single Pass Stereo turbocharges geometry performance by allowing the head-mounted display’s left and right displays to share a single geometry pass. We’re effectively halving the workload of traditional VR rendering, which requires the GPU to draw geometry twice — once for the left eye and once for the right eye.

      This improvement is especially important for geometry-heavy scenes, and those featuring significant levels of tessellation, which remains the most effective way of adding real detail to objects and surfaces in VR.

      With tessellation, affected game elements can be accurately lit, shadowed and shaded, and can be examined up close in Virtual Reality. With other solutions, such as Bump Mapping or Parallax Occlusion Mapping, the simulation of geometric detail breaks down when the player approaches or examines affected objects from any angle, which harms immersion. By increasing geometry performance and tessellation by up to 2x, developers are able to add more detail that players can examine up close, significantly improving the look of the game and the player’s level of presence.

      NVIDIA also developed another technology for Pascal cards called Lens Matched Shading that will help with the performance, building upon the Multi-Res Shading introduced with Maxwell.

      Lens Matched Shading increases pixel shading performance by rendering more natively to the unique dimensions of VR display output. This avoids rendering many pixels that would otherwise be discarded before the image is output to the VR headset.

      There are other such techniques that are being used by both AMD and Nvidia to bring down the amount of computation needed to do VR.

      And, yes, some or most of the VR devices have two literal screens inside them. This is most obvious with stuff like StarVR, where the screens are angled to provide a wide field of view.
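The idea behind Single Pass Stereo can be sketched as a toy software pipeline. This is only an illustration of sharing one geometry transform between two eye projections; the function and variable names here are made up, not NVIDIA's actual API.

```python
import numpy as np

def render_stereo_single_pass(vertices, model_view, proj_left, proj_right):
    """Toy sketch of the Single Pass Stereo idea: the shared model-view
    transform runs once, and each eye only adds a cheap projection step,
    instead of redoing the whole geometry pass twice."""
    view_space = vertices @ model_view.T  # shared geometry work, done once
    left = view_space @ proj_left.T       # per-eye work is just one
    right = view_space @ proj_right.T     # extra matrix multiply each
    return left, right
```

With identity matrices both eye outputs reproduce the input vertices; in a real renderer the two projection matrices differ by the interpupillary offset and lens parameters.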

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by Tork on Friday July 15 2016, @12:07AM

      by Tork (3914) on Friday July 15 2016, @12:07AM (#374573)

      The different scene seen in each eye is exactly as complicated to render on the same screen as it would be to render on two different screens.

      Weeeeelllll.... sort of. I'm gonna be a little pedantic here. You are correct that they have to render two scenes, but I want to nitpick the statement that each eye is exactly as complicated to render. You are right that it will have roughly the same number of vertices, polygons, texels to fill, etc. But there is one key difference: The deformation of the models, the placement of effects like particles, placement of the characters, etc, still only need to be calculated once. In other words, a very significant portion of the work does not actually need to be done twice. Just reuse that data and slide the camera over and render one more time. In practical terms, what I'm saying is that you could have a game that renders at exactly 60.000 frames per second, but a stereo render of it may not actually be exactly half that. It could end up being 49.234 fps.

      It all really depends on what parts of the game actually generate significant latency.
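The point above reduces to a simple cost model: the simulation/update work runs once per frame, while only the render pass runs once per eye. The millisecond figures below are made up for illustration.

```python
def mono_fps(update_ms, render_ms):
    """FPS when the scene is updated and rendered once per frame."""
    return 1000.0 / (update_ms + render_ms)

def stereo_fps(update_ms, render_ms):
    """FPS when the update still runs once but the render runs twice,
    once per eye; only the render cost doubles."""
    return 1000.0 / (update_ms + 2 * render_ms)

# Hypothetical split of a 60 FPS frame: 10 ms simulation, 6.667 ms
# render. Going stereo only doubles the 6.667 ms part, so the result
# stays well above half of 60 FPS.
```

Which is why a stereo render of a 60 FPS game can land somewhere between 30 and 60 FPS, depending on how the frame time splits between simulation and rasterization.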

      --
      Slashdolt Logic: "25 year old jokes about sharks and lasers are +5, Funny." 💩
  • (Score: 0) by Anonymous Coward on Thursday July 14 2016, @05:03PM

    by Anonymous Coward on Thursday July 14 2016, @05:03PM (#374419)

    From a signal bandwidth point of view, there is no difference between doubling the image size and doubling the framerate.
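A quick back-of-the-envelope check of that equivalence, for an uncompressed signal:

```python
def bandwidth_bps(width, height, fps, bits_per_pixel=24):
    """Raw (uncompressed) video signal bandwidth in bits per second."""
    return width * height * fps * bits_per_pixel

# Doubling the pixel count and doubling the frame rate scale bandwidth
# identically: 3840x1080 at 90 Hz costs exactly as many bits per second
# as 1920x1080 at 180 Hz.
```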