Graphics card manufacturers like Nvidia and AMD have gone to great pains recently to point out that to experience Virtual Reality with a VR headset properly, you need a GPU capable of pushing at least a steady 90 FPS per eye, or a total of at least 180 FPS for both eyes, at high resolutions to boot. This of course requires the purchase of the latest, greatest high-end GPUs made by these manufacturers, on top of the money you are already plonking down for your new VR headset and a good, fast gaming-class PC.
This raises an interesting question: virtually every LCD/LED TV manufactured in the last 5-6 years has a "Realtime Motion Compensation" feature built in. This is the not-so-new-at-all technique of taking, say, a football match broadcast live at 30 FPS and algorithmically generating extra in-between frames in realtime, giving you a hypersmooth 200-400 Hz image on the TV set with no visible stutter or strobing whatsoever. The technique is cheap enough to include in virtually every TV set at every price level (so the hardware that performs the realtime motion compensation cannot cost more than a few dollars in total), and it should, in theory, work just fine with the output of a GPU trying to drive a VR headset.
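To make the idea concrete, the core trick is roughly the following (a toy sketch in Python/NumPy rather than TV silicon; it fakes the in-between frames with a plain linear cross-fade, whereas real sets estimate per-block motion vectors, and the resolution and frame counts here are made up):

    import numpy as np

    def interpolate_frames(frame_a, frame_b, n_between):
        """Generate n_between in-between frames from two real frames.

        Naive stand-in for TV motion compensation: a straight linear
        cross-fade. Real sets estimate per-block motion vectors and
        shift pixels along them, but the key property is the same:
        you need BOTH real frames before any in-between frame exists.
        """
        frames = []
        for i in range(1, n_between + 1):
            t = i / (n_between + 1)                      # 0 < t < 1
            frames.append((1.0 - t) * frame_a + t * frame_b)
        return frames

    # Turn a 30 FPS source into ~90 Hz output: 2 generated frames per real pair.
    a = np.zeros((1080, 1920, 3), dtype=np.float32)      # real frame A
    b = np.ones((1080, 1920, 3), dtype=np.float32)       # real frame B
    in_between = interpolate_frames(a, b, n_between=2)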
Now suppose you have an entry-level or mid-range GPU capable of pushing only 40-60 FPS in a VR application (or a measly 20-30 FPS per eye, making for a truly terrible VR experience). You could, in theory, add some cheap Motion Compensation circuitry to that GPU and get 100-200 FPS or more per eye. Heck, you might even be able to program a few GPU cores to run the motion compensation as a realtime shader while the rest of the GPU renders the game or VR experience.
So my question: Why don't GPUs for VR use Realtime Motion Compensation techniques to increase the FPS pushed into the VR headset? Would this not make far more financial sense for the average VR user than having to buy a monstrously powerful GPU to experience VR at all?
(Score: 5, Insightful) by WillR on Thursday July 14 2016, @03:31PM
(Score: 2) by TheB on Thursday July 14 2016, @04:01PM
Exactly. "Realtime Motion Compensation" adds latency which contributes to motion sickness.
However if head movement prediction proves to be accurate enough, it could work well with RMC. Would still create artifacts with stereoscopic effect though.
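Something along these lines is what I mean by head movement prediction (a toy sketch with made-up function names and numbers; a real headset runtime would use quaternions and fused gyro/IMU data rather than three Euler angles):

    import numpy as np

    def predict_head_pose(pose_now, angular_velocity, lookahead_s):
        """Dead-reckon the head orientation a few ms into the future.

        pose_now and angular_velocity are (yaw, pitch, roll) in radians
        and radians/second. lookahead_s is how far ahead the frame will
        actually reach the display. Render (or re-warp) for where the
        head WILL be, not where it was when rendering started.
        """
        return pose_now + angular_velocity * lookahead_s

    pose_now = np.array([0.10, 0.00, 0.02])   # radians, made-up values
    omega    = np.array([1.50, 0.00, 0.00])   # rad/s, a quick yaw turn
    predicted = predict_head_pose(pose_now, omega, lookahead_s=0.016)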
(Score: 0) by Anonymous Coward on Thursday July 14 2016, @04:57PM
That and Realtime Motion Compensation is just hypnotic evil, brought to you by the same people that brought you burn-out-your-eyesocket-blue LEDs. :P
(Score: 1) by kurenai.tsubasa on Thursday July 14 2016, @06:36PM
I wasn't sure if it was just an older model and maybe it's gotten down to only a few hundred ms delay (even that would be bad for VR), but a roommate had one of those TVs for a while. The lag was absolutely horrible, like 2 or 3 seconds; impossible to play any games on it. Come to think of it, the TV in the conference room is like that as well: move the mouse, wait, and 3 seconds later the pointer moves.
(Score: 2) by vux984 on Thursday July 14 2016, @07:27PM
I wasn't sure if it was just an older model and maybe it's gotten down to only a few hundred ms delay (even that would be bad for VR),
The bottom line is that you want to display a frame as soon as you get it, to minimize lag. Any solution that adds interpolated frames means a frame arrives at the TV and the TV then sits on it while it creates the interpolated frames between it and the previous frame; you aren't displaying the new frame when you get it, you are deliberately holding it back for a few frames.
Now if you had 'predictive motion compensation' that used the previous 2 frames to guess the next ones (and it actually worked well), then you could potentially display a predicted frame while you waited for the next real one to arrive.
But anything that does frame interpolation is a BAD THING for latency.
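In toy form, the extrapolating version looks something like this (assuming the motion between the last two real frames has already been estimated and is a single global shift; the hard part, and the source of artifacts when the guess is wrong, is that estimation):

    import numpy as np

    def extrapolate_frame(frame_b, shift_px):
        """Guess the NEXT frame from the most recent real frame.

        shift_px is the (dy, dx) motion measured between the previous
        two real frames (block matching or optical flow in a real
        implementation); we simply push the newest frame the same
        distance again. Nothing newer than frame_b is needed, so the
        guessed frame can be shown immediately instead of being held
        back while waiting for the next real frame.
        """
        dy, dx = shift_px
        return np.roll(frame_b, shift=(dy, dx), axis=(0, 1))

    a = np.random.rand(1080, 1920).astype(np.float32)
    b = np.roll(a, shift=(0, 4), axis=(0, 1))   # scene panned 4 px to the right
    predicted_c = extrapolate_frame(b, shift_px=(0, 4))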
(Score: 2) by hendrikboom on Thursday July 14 2016, @11:44PM
So you want extrapolation instead of interpolation.
(Score: 3, Informative) by vux984 on Thursday July 14 2016, @05:49PM
Correct, it's latency, but the issue isn't quite as you explain it.
It's not so much that a real frame only arrives every 1/20th of a second that's the problem; the issue is that motion compensation is a 'back fill'.
If I have real frame A and real frame B separated by 1/20th of a second, and you want to add smoothing frames A' and A'' between A and B, that's great... but you can't generate A' and A'' until you have frame B.
That is the latency problem. You have to wait for B before you can generate A' and A'', and then display A' and A'', all before finally displaying frame B. So you have to sit on B while doing all of that.
Thus any system that does motion interpolation is basically running a few frames behind the frames it actually receives. In VR, and in gaming in general, you always want to display a frame as soon as you can. Hell, even just for normal PC use, the latency added by such processing makes mouse movements noticeably lag.
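To put rough numbers on the A, A', A'', B example (a back-of-the-envelope sketch only, ignoring the interpolation processing time itself and any vsync alignment):

    # 20 FPS source (50 ms between real frames), tripled to 60 Hz output.
    source_interval = 1.0 / 20      # 50 ms between real frames A and B
    output_interval = 1.0 / 60      # ~16.7 ms between displayed frames

    b_arrives = source_interval                 # B arrives 50 ms after A
    a1_shown  = b_arrives                       # A' can only be built once B is here
    a2_shown  = a1_shown + output_interval
    b_shown   = a2_shown + output_interval      # B finally reaches the screen

    added_lag_ms = (b_shown - b_arrives) * 1000
    print(f"B is held back roughly {added_lag_ms:.0f} ms")   # ~33 ms on top of everything else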