
posted by janrinok on Thursday July 14 2016, @03:25PM
from the now-there's-a-thought dept.

Graphics card manufacturers like Nvidia and AMD have gone to great pains recently to point out that to experience Virtual Reality properly with a VR headset, you need a GPU capable of pushing a steady 90 FPS per eye, or at least 180 FPS in total for both eyes, and at high resolutions to boot. This, of course, requires buying the latest, greatest high-end GPUs from these manufacturers, on top of the money you are already plonking down for your new VR headset and a good, fast gaming-class PC.

This raises an interesting question: virtually every LCD/LED TV manufactured in the last 5-6 years has a "Realtime Motion Compensation" feature built in. This is the not-so-new-at-all technique of taking, say, a football match broadcast live at 30 FPS and algorithmically generating extra in-between frames in realtime, giving you a hypersmooth 200-400 Hz image on the TV set with no visible stutter or strobing whatsoever. This technology is not new, and it is cheap enough to include in virtually every TV set at every price level (so the hardware that performs the realtime motion compensation cannot cost more than a few dollars in total). And the technique should, in theory, work just fine with the output of a GPU trying to drive a VR headset.
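
To make the idea concrete, here is a rough sketch of one way such in-between frames can be synthesized, using dense optical flow in Python with OpenCV and NumPy. It is only an illustration of the general technique; real TV interpolation chips run their own block-matching pipelines, and every function choice and parameter below is an assumed example rather than anything a particular TV or GPU actually uses.

    # Rough sketch: synthesize a frame halfway between two source frames by
    # estimating per-pixel motion and warping each source frame half-way along it.
    import cv2
    import numpy as np

    def interpolate_midframe(frame_a, frame_b):
        gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

        # Dense optical flow from frame_a to frame_b (Farneback's method).
        flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)

        h, w = gray_a.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))

        # Backward-warp each source frame half-way along the flow
        # (a crude approximation: the flow is sampled at the destination pixel).
        map_ax = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
        map_ay = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
        warped_a = cv2.remap(frame_a, map_ax, map_ay, cv2.INTER_LINEAR)

        map_bx = (grid_x + 0.5 * flow[..., 0]).astype(np.float32)
        map_by = (grid_y + 0.5 * flow[..., 1]).astype(np.float32)
        warped_b = cv2.remap(frame_b, map_bx, map_by, cv2.INTER_LINEAR)

        # Blend the two warped frames; wherever they disagree is where the
        # familiar interpolation smearing artifacts show up.
        return cv2.addWeighted(warped_a, 0.5, warped_b, 0.5, 0)

Interleaving synthesized frames like this between the real ones is what turns a 30 Hz broadcast into a 200-400 Hz output signal.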

Now suppose you have an entry-level or mid-range GPU capable of pushing only 40-60 FPS in a VR application (or a measly 20-30 FPS per eye, making for a truly terrible VR experience). You could, in theory, add some cheap Motion Compensation circuitry to that GPU and get 100-200 FPS or more per eye. Heck, you might even be able to program a few GPU cores to run the motion compensation as a realtime GPU shader while the rest of the GPU renders the game or VR experience.

So my question: Why don't GPUs for VR use Realtime Motion Compensation techniques to increase the FPS pushed into the VR headset? Would this not make far more financial sense for the average VR user than having to buy a monstrously powerful GPU to experience VR at all?


Original Submission

  • (Score: 4, Interesting) by wonkey_monkey (279) on Thursday July 14 2016, @08:41PM (#374502) Homepage

    This raises an interesting question:

    Well, no offence, but it raises an ignorant question. You don't understand VR or motion compensation well enough to see why it wouldn't work.

    Firstly: motion compensation isn't really that great. It's not perfect, and its imperfections (any moderately fast movement over a detailed background, for example) look terrible. There are people uploading 60 fps motion-compensated movie trailers all over YouTube, and they all look pretty awful, and (in the case of films which were actually shot at a high frame rate) they give a very poor impression of the true look. Every other frame - especially in action scenes - becomes a mangled mess. Even in sedate scenes, it can look "off" where real, uncompensated high-frame-rate footage would not.

    with no visible stutter or strobing whatsoever

    Just not true.

    Secondly: in order to motion compensate, you need to know the next frame. That means you're always a bit behind. This is why TVs have "game modes" which disable (among other things) motion compensation. On my TV it reduces latency from around 120 ms to about 40 ms. Still not ideal - games feel much "quicker" and smoother on a CRT - but better than before.
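
    As a back-of-the-envelope illustration (the numbers are assumptions, not measurements from any particular headset or TV), the latency cost of waiting for the next real frame is easy to estimate:

        # Illustrative latency arithmetic only; all figures are assumed, not measured.
        source_fps = 45.0                      # what a mid-range GPU might actually render
        frame_interval_ms = 1000.0 / source_fps

        # An in-between frame can only be computed once the *next* real frame exists,
        # so the display lags by at least one source-frame interval, plus however long
        # the interpolation itself takes (assumed here to be 5 ms).
        interpolation_cost_ms = 5.0
        added_latency_ms = frame_interval_ms + interpolation_cost_ms

        print(f"added latency: ~{added_latency_ms:.0f} ms")   # ~27 ms

    That alone is already beyond the roughly 20 ms motion-to-photon budget usually quoted for comfortable VR, before the headset's own pipeline is even counted.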

    However, there is something VR can do. If you rotate your head and the GPU isn't quite keeping up at that moment, then instead of stuttering and repeating the previous frame verbatim, it can warp the previous image to match your new viewpoint, giving you a quicker update on the new view while it gets a properly rendered frame ready. It's called asynchronous timewarp.

    At least I think that's what it does. Another idea might be to render each frame slightly bigger than needed, then the warping can be applied at the last moment to better match the current head position.
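
    To sketch just the rotational part of that reprojection (a toy model assuming a pinhole projection, a pure head rotation, and made-up intrinsics - not how any actual VR runtime implements it):

        # Toy sketch of rotational reprojection ("timewarp"): re-use the last rendered
        # eye buffer by warping it to a newer head orientation. Pinhole model, pure
        # rotation, illustrative values only.
        import cv2
        import numpy as np

        def reproject(last_frame, K, R_delta):
            # For a pure rotation, old image -> new image is the homography K * R * K^-1.
            H = K @ R_delta @ np.linalg.inv(K)
            h, w = last_frame.shape[:2]
            return cv2.warpPerspective(last_frame, H, (w, h))

        # Example: the head has yawed 2 degrees since the last frame was rendered.
        w, h = 1080, 1200                                  # per-eye resolution (illustrative)
        f = 1000.0                                         # focal length in pixels (illustrative)
        K = np.array([[f,   0.0, w / 2],
                      [0.0, f,   h / 2],
                      [0.0, 0.0, 1.0  ]])
        yaw = np.radians(2.0)
        R_delta = np.array([[ np.cos(yaw), 0.0, np.sin(yaw)],
                            [ 0.0,         1.0, 0.0        ],
                            [-np.sin(yaw), 0.0, np.cos(yaw)]])
        last_frame = np.zeros((h, w, 3), dtype=np.uint8)   # stand-in for the rendered eye buffer
        warped = reproject(last_frame, K, R_delta)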

    --
    systemd is Roko's Basilisk