Graphics card manufacturers like Nvidia and AMD have gone to great pains recently to point out that to experience Virtual Reality with a VR headset properly, you need a GPU capable of pushing at least a steady 90 FPS per eye (effectively 180 rendered views per second), and at high resolutions to boot. This of course requires the purchase of the latest, greatest high-end GPUs made by these manufacturers, on top of the money you are already plonking down for your new VR headset and a good, fast gaming-class PC.
This raises an interesting question: virtually every LCD/LED TV manufactured in the last 5-6 years has a "Realtime Motion Compensation" feature built in. This is the not-so-new-at-all technique of taking, say, a football match broadcast live at 30 FPS and algorithmically generating extra in-between frames in realtime, giving you a hypersmooth 200-400 Hz image on the TV set with no visible stutter or strobing whatsoever. The technology is not new, and it is cheap enough to include in virtually every TV set at every price level (so the hardware that performs the realtime motion compensation cannot cost more than a few dollars in total). And the technique should, in theory, work just fine on the output of a GPU trying to drive a VR headset.
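For the curious, here is a toy CPU sketch of the basic idea (motion-compensated interpolation using one global motion vector). Real TV chips estimate a vector per block in dedicated silicon and handle occlusions and borders; everything below, including the function names, is made up purely for illustration:

    import numpy as np

    def estimate_global_motion(prev, curr, search=8):
        # Brute-force search for the integer (dy, dx) shift that best maps
        # `prev` onto `curr` (smallest mean absolute difference). Real
        # interpolators estimate one vector per block, not a single global one.
        best, best_err = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                err = np.abs(np.roll(prev, (dy, dx), axis=(0, 1)) - curr).mean()
                if err < best_err:
                    best, best_err = (dy, dx), err
        return best

    def interpolate_midpoint(prev, curr):
        # Synthesise a frame halfway between `prev` and `curr` by moving each
        # source frame half-way along the estimated motion vector and blending.
        dy, dx = estimate_global_motion(prev, curr)
        half_fwd = np.roll(prev, (dy // 2, dx // 2), axis=(0, 1))
        half_bwd = np.roll(curr, (-(dy - dy // 2), -(dx - dx // 2)), axis=(0, 1))
        return 0.5 * (half_fwd + half_bwd)

    # Toy demo: a bright square that moves 6 pixels to the right between frames.
    prev = np.zeros((64, 64)); prev[20:30, 10:20] = 1.0
    curr = np.roll(prev, (0, 6), axis=(0, 1))
    print(estimate_global_motion(prev, curr))      # (0, 6)
    mid = interpolate_midpoint(prev, curr)         # square ends up ~3 px along

Doing this per block, per eye, at headset resolution is exactly the kind of embarrassingly parallel work a GPU shader could handle, which is what the suggestion in the next paragraph boils down to.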
Now suppose you have an entry-level or mid-range GPU capable of pushing only 40-60 FPS in a VR application (or a measly 20-30 FPS per eye, making for a truly terrible VR experience). You could, in theory, add some cheap Motion Compensation circuitry to that GPU and get 100-200 FPS or more per eye. Heck, you might even be able to program a few GPU cores to run the motion compensation as a realtime GPU shader while the rest of the GPU renders a game or VR experience.
So my question: Why don't GPUs for VR use Realtime Motion Compensation techniques to increase the FPS pushed into the VR headset? Would this not make far more financial sense for the average VR user than having to buy a monstrously powerful GPU to experience VR at all?
(Score: 1, Interesting) by Anonymous Coward on Thursday July 14 2016, @08:12PM
As stated, you cannot wait for the next image to compute an intermediate image, as that introduces enough latency to make you sick.
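Some rough back-of-the-envelope numbers (my own assumptions, not official figures) on why waiting for the next frame hurts:

    # Back-of-the-envelope numbers (assumptions, not measurements) showing why
    # waiting for the next rendered frame eats the latency budget.
    native_fps = 45                      # what a mid-range GPU might manage
    frame_time_ms = 1000 / native_fps    # ~22 ms between real frames

    # An interpolated frame for the midpoint in time cannot be shown until the
    # later of its two source frames exists, so it reaches the display roughly
    # half a native frame after the moment it depicts, before any compute time.
    added_latency_ms = frame_time_ms / 2

    # ~20 ms motion-to-photon is a commonly quoted comfort budget for VR.
    budget_ms = 20
    print(f"extra latency from interpolation alone: {added_latency_ms:.1f} ms")
    print(f"fraction of a {budget_ms} ms budget: {added_latency_ms / budget_ms:.0%}")

And that is before the time the interpolation itself takes; the input lag this adds is exactly why TV "game modes" switch the feature off.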
But they use a similar idea with Asynchronous Time Warp...
If the GPU cannot render the next image fast enough, they use the previous image and stretch it and warp it so that it looks like the camera was moved to the new viewpoint. This is possible because your viewpoint won't move by a large amount in 1/90s.
However, if the GPU repeatedly fails to churn out the next image in time, you are going to get stutter and start feeling funny in your stomach.
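For illustration, here is a rough CPU sketch of that rotation-only reprojection; the pinhole-camera setup and names are mine, not the actual Oculus implementation, and a real timewarp resamples on the GPU with filtering rather than nearest-neighbour:

    import numpy as np

    def rotation_z(deg):
        # Small head roll used only for the demo below.
        a = np.radians(deg)
        return np.array([[np.cos(a), -np.sin(a), 0],
                         [np.sin(a),  np.cos(a), 0],
                         [0,          0,         1]])

    def timewarp(image, K, r_delta):
        # Reproject `image` (rendered at the old head orientation) into the new
        # orientation. `r_delta` maps view directions from the old camera frame
        # into the new one, so old pixels map to new pixels through the
        # homography K @ r_delta @ K^-1. Rotation only, no depth, no translation
        # -- the same simplifications rotational ATW makes.
        h, w = image.shape
        homography = K @ r_delta @ np.linalg.inv(K)
        h_inv = np.linalg.inv(homography)
        ys, xs = np.mgrid[0:h, 0:w]
        dst = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
        src = h_inv @ dst                      # inverse warp: sample the old frame
        sx = np.round(src[0] / src[2]).astype(int)
        sy = np.round(src[1] / src[2]).astype(int)
        ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
        out = np.zeros(h * w)
        out[ok] = image[sy[ok], sx[ok]]        # nearest-neighbour resampling
        return out.reshape(h, w)

    # Toy demo: 90x90 frame, simple pinhole intrinsics, head rolled 2 degrees
    # between rendering the frame and having to display it.
    frame = np.zeros((90, 90)); frame[40:50, 40:50] = 1.0
    K = np.array([[80.0, 0, 45], [0, 80.0, 45], [0, 0, 1]])
    warped = timewarp(frame, K, rotation_z(2.0))

Pixels that rotate in from outside the old frame come up empty in `warped`, which is one reason this only holds up for the small pose changes mentioned above.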
As a side note, the GTX 1060 has made VR much more affordable. It has more than the required power to run current VR games, and when engines start to implement Simultaneous Multi-Projection rendering (rendering one scene with 2 cameras in a single geometry pass), it will be even better.
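For the curious, a toy CPU illustration of the "transform the geometry once, project it for both eyes in the same pass" idea; the matrices and the 64 mm IPD below are simplified stand-ins, and the real feature does this inside the GPU's geometry pipeline rather than in numpy:

    import numpy as np

    def perspective(fov_deg, aspect=1.0, near=0.1, far=100.0):
        # Standard OpenGL-style perspective projection matrix.
        f = 1.0 / np.tan(np.radians(fov_deg) / 2)
        return np.array([[f / aspect, 0, 0, 0],
                         [0, f, 0, 0],
                         [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
                         [0, 0, -1, 0]])

    def eye_view(eye_x):
        # View matrix for an eye sitting at x = eye_x relative to the head:
        # the world is translated the opposite way.
        v = np.eye(4)
        v[0, 3] = -eye_x
        return v

    def single_pass_stereo(vertices, proj, ipd=0.064):
        # Do the per-vertex work once, then project the SAME geometry with both
        # eyes' matrices in one sweep -- the gist of single-pass stereo /
        # Simultaneous Multi-Projection, here on the CPU purely for illustration.
        homo = np.hstack([vertices, np.ones((len(vertices), 1))])      # (N, 4)
        eyes = np.stack([proj @ eye_view(-ipd / 2),                    # left eye
                         proj @ eye_view(+ipd / 2)])                   # right eye
        clip = np.einsum('eij,nj->eni', eyes, homo)                    # (2, N, 4)
        return clip[..., :3] / clip[..., 3:4]                          # NDC per eye

    # Toy scene: three vertices two metres in front of the viewer.
    tri = np.array([[-0.5, 0.0, -2.0],
                    [ 0.5, 0.0, -2.0],
                    [ 0.0, 0.5, -2.0]])
    ndc_left, ndc_right = single_pass_stereo(tri, perspective(90))
    print(ndc_left[:, 0] - ndc_right[:, 0])   # small horizontal disparity per vertex

The saving is that vertex work such as shading, skinning and culling runs once instead of twice, even though both eye images still have to be rasterised and shaded per pixel.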