Graphics card manufacturers like Nvidia and AMD have taken great pains recently to point out that to experience Virtual Reality with a VR headset properly, you need a GPU capable of pushing at least a steady 90 FPS per eye, or a total of at least 180 FPS for both eyes, and at high resolutions to boot. This of course requires the purchase of the latest, greatest high-end GPUs made by these manufacturers, alongside the money you are already plonking down for your new VR headset and a good, fast gaming-class PC.
This raises an interesting question: virtually every LCD/LED TV manufactured in the last 5-6 years has a "Realtime Motion Compensation" feature built in. This is the not-so-new-at-all technique of taking, say, a football match broadcast live at 30 FPS or Hz, and algorithmically generating extra in-between frames in realtime, thus giving you a hypersmooth 200-400 FPS/Hz image on the TV set, with no visible stutter or strobing whatsoever. This technology is not new. It is cheap enough to include in virtually every TV set at every price level (thus the hardware that performs the realtime motion compensation cannot cost more than a few dollars in total). And the technique should, in theory, work just fine with the output of a GPU trying to drive a VR headset.
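To make that concrete, here is a minimal sketch (Python, purely illustrative; a real motion-compensation chip estimates per-block motion vectors rather than cross-fading, and the frame sizes are made up) of generating in-between frames from two real ones:

```python
import numpy as np

def interpolate_frames(frame_a: np.ndarray, frame_b: np.ndarray, n_inbetween: int):
    """Generate n_inbetween frames between two real frames.

    A real motion-compensation chip estimates per-block motion vectors;
    this sketch just cross-fades, which is enough to show the structural
    catch: no in-between frame can exist until frame_b has already arrived.
    """
    frames = []
    for i in range(1, n_inbetween + 1):
        alpha = i / (n_inbetween + 1)  # 0 < alpha < 1, evenly spaced
        blended = (1 - alpha) * frame_a + alpha * frame_b
        frames.append(blended.astype(frame_a.dtype))
    return frames

# Example: turn a 30 FPS pair into ~120 FPS by adding 3 frames between them.
a = np.zeros((1080, 1920, 3), dtype=np.uint8)      # made-up "frame A"
b = np.full((1080, 1920, 3), 255, dtype=np.uint8)  # made-up "frame B"
inbetweens = interpolate_frames(a, b, n_inbetween=3)
```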
Now suppose you have an entry-level or mid-range GPU capable of pushing only 40-60 FPS in a VR application (or a measly 20-30 FPS per eye, making for a truly terrible VR experience). You could, in theory, add some cheap Motion Compensation circuitry to that GPU and get 100-200 FPS or more per eye. Heck, you might even be able to program a few GPU cores to run the motion compensation as a realtime GPU shader while the rest of the GPU is rendering a game or VR experience.
So my question: Why don't GPUs for VR use Realtime Motion Compensation techniques to increase the FPS pushed into the VR headset? Would this not make far more financial sense for the average VR user than having to buy a monstrously powerful GPU to experience VR at all?
(Score: 5, Insightful) by WillR on Thursday July 14 2016, @03:31PM
(Score: 2) by TheB on Thursday July 14 2016, @04:01PM
Exactly. "Realtime Motion Compensation" adds latency which contributes to motion sickness.
However, if head-movement prediction proves to be accurate enough, it could work well with RMC. It would still create artifacts with the stereoscopic effect, though.
(Score: 0) by Anonymous Coward on Thursday July 14 2016, @04:57PM
That and Realtime Motion Compensation is just hypnotic evil, brought to you by the same people that brought you burn-out-your-eyesocket-blue LEDs. :P
(Score: 1) by kurenai.tsubasa on Thursday July 14 2016, @06:36PM
I wasn't sure if it was just an older model and maybe it's gotten down to only a few hundred ms delay (even that would be bad for VR), but a roommate had one of those TVs for a while. The lag was absolutely horrible, like 2 or 3 seconds—impossible to play any games on it. Come to think of it, the TV in the conference room is like that as well. Move the mouse, wait, 3 seconds later the pointer moves.
(Score: 2) by vux984 on Thursday July 14 2016, @07:27PM
I wasn't sure if it was just an older model and maybe it's gotten down to only a few hundred ms delay (even that would be bad for VR),
The bottom line is that you want to display a frame as soon as you get it, to minimize lag. Any solution that adds interpolated frames means a frame arrives at the TV, and the TV then sits on it while it creates the interpolated frames between it and the previous frame. So you aren't displaying the new frame when you get it; you are deliberately holding it back for a few frame times.
Now if you had 'predictive motion compensation' that used the previous 2 frames to guess the next ones (and it actually worked well), then you could potentially display a predicted frame while you waited for the next real one to arrive.
But anything that does frame interpolation is a BAD THING for latency.
(Score: 2) by hendrikboom on Thursday July 14 2016, @11:44PM
So you want extrapolation instead of interpolation.
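As a rough sketch of what extrapolation would look like (Python, illustrative only; real motion compensation would extrapolate motion vectors, not raw pixels), the key property is that no future frame is needed:

```python
import numpy as np

def extrapolate_next(prev: np.ndarray, curr: np.ndarray) -> np.ndarray:
    """Predict the next frame from the last two, without waiting for it.

    Crude per-pixel linear prediction: next ~= curr + (curr - prev).
    A real implementation would extrapolate motion vectors instead, but
    the latency property is the same: no future frame is required.
    """
    prev16 = prev.astype(np.int16)
    curr16 = curr.astype(np.int16)
    predicted = curr16 + (curr16 - prev16)
    return np.clip(predicted, 0, 255).astype(np.uint8)
```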
(Score: 3, Informative) by vux984 on Thursday July 14 2016, @05:49PM
Correct, it's latency, but the issue isn't quite as you explain it.
It's not so much that a real frame only comes every 1/20th of a second that is the problem; the issue is that motion compensation is a 'back fill'.
If I have real frame A and real frame B separated by 1/20th of a second, and you want to add smoothing frames A' and A'' between A and B, that's great... but you can't generate A' and A'' until you have frame B.
That is the latency problem. You have to wait for B before you can generate A' and A'', and you have to display A' and A'', all before finally displaying frame B. So you have to sit on B while doing that.
Thus any system that does motion interpolation is basically running a few extra frames behind when it actually received a frame. In VR, and gaming in general, you always want to display a frame as soon as you can. Hell, even just for normal PC use, the latency added by such processing makes mouse movements noticeably lag.
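Putting rough numbers on that, assuming a 20 FPS source and an ideal pipeline with zero compute time (both assumptions are just for illustration):

```python
# Rough numbers for the A / A' / A'' / B example above. A 20 FPS source and
# an ideal, compute-free pipeline are assumed purely for illustration.

source_fps = 20
inserted = 2                                 # A' and A'' between each real pair
output_fps = source_fps * (inserted + 1)     # 60 Hz display cadence

frame_interval = 1.0 / source_fps            # B arrives 50 ms after A
output_interval = 1.0 / output_fps           # ~16.7 ms per displayed frame

# B cannot go on screen until A' and A'' have been shown, so even ignoring
# the time to compute them, B is displayed two output intervals late:
added_latency_ms = inserted * output_interval * 1000
print(f"B arrives {frame_interval * 1000:.0f} ms after A, "
      f"then waits another {added_latency_ms:.1f} ms before being shown")
```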
(Score: 5, Insightful) by Anonymous Coward on Thursday July 14 2016, @03:39PM
It's because in order to interpolate frames, you have to have a second frame, then interpolate, which gives you latency. And latency in VR doesn't work. At all.
(Score: 1, Interesting) by Anonymous Coward on Thursday July 14 2016, @08:12PM
As stated, you cannot wait for the next image to compute an intermediate image, as that introduces enough latency to make you sick.
But they use a similar idea with Asynchronous Time Warp...
If the GPU cannot render the next image fast enough, they use the previous image and stretch it and warp it so that it looks like the camera was moved to the new viewpoint. This is possible because your viewpoint won't move by a large amount in 1/90s.
However if the GPU cannot churn out the following image again, you are going to have stutter, and start feeling funny in your stomach.
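For anyone curious what that warp amounts to, here is a very crude stand-in (Python/NumPy, illustrative only; the real thing reprojects with the full head rotation on the GPU right before scanout):

```python
import numpy as np

def timewarp_shift(last_frame: np.ndarray, d_yaw_rad: float, d_pitch_rad: float,
                   fov_h_rad: float, fov_v_rad: float) -> np.ndarray:
    """Crude stand-in for asynchronous time warp.

    Real ATW reprojects the last rendered image with the full head rotation;
    for the tiny rotations possible in 1/90th of a second, a plain pixel
    shift is enough to show the idea.
    """
    h, w = last_frame.shape[:2]
    dx = int(round(d_yaw_rad / fov_h_rad * w))    # horizontal shift in pixels
    dy = int(round(d_pitch_rad / fov_v_rad * h))  # vertical shift in pixels

    warped = np.zeros_like(last_frame)            # uncovered edges stay black
    src_x, dst_x = slice(max(0, -dx), min(w, w - dx)), slice(max(0, dx), min(w, w + dx))
    src_y, dst_y = slice(max(0, -dy), min(h, h - dy)), slice(max(0, dy), min(h, h + dy))
    warped[dst_y, dst_x] = last_frame[src_y, src_x]
    return warped
```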
As a side note, the GTX 1060 has made VR much more affordable. It has more than the required power to run current VR games, and when engines start to implement Simultaneous Multi-Projection rendering (rendering one scene with 2 cameras in one pass), it will be even better.
(Score: 2) by Tork on Thursday July 14 2016, @04:11PM
Slashdolt Logic: "25 year old jokes about sharks and lasers are +5, Funny." 💩
(Score: 2) by wonkey_monkey on Thursday July 14 2016, @08:57PM
Motion compensation does not work in stereo. You'll get different images per-eye.
It's a bit much to say flatly that it "does not work." It won't be perfect, just as it is far from perfect in "mono". And the problems will be compounded by the need for stereo correspondence.
So yeah, it'll suck, and it's a stupid idea for other reasons, but it won't fail completely.
systemd is Roko's Basilisk
(Score: 0) by Anonymous Coward on Thursday July 14 2016, @04:15PM
You want 90 FPS per eye, which to me is not a total of 180 FPS, but still 90 FPS, since each eye is only using 1/2 the screen. Why two screens, one for each eye, when Google Cardboard showed you can do it on one? Even if you did have two screens, like a Google Glass for each eye, you'd split the signal in half for each eye.
So 1/2 of 1920 x 1080 would be 960 x 1080 for each eye.
(Score: 2, Informative) by fishybell on Thursday July 14 2016, @04:38PM
It's not that they render on two screens (I'm fairly certain none do), it's that they have to render two scenes. The different scene seen in each eye is exactly as complicated to render on the same screen as it would be to render on two different screens. Rendering the extra scene can also be more complicated than just rendering the same scene at twice the resolution.
(Score: 2) by takyon on Thursday July 14 2016, @11:08PM
That's just not true anymore. For example:
http://wccftech.com/pc-vrps-vr-gap-bigger-nvidias-pascal/ [wccftech.com]
There are other such techniques that are being used by both AMD and Nvidia to bring down the amount of computation needed to do VR.
And, yes, some or most of the VR devices have two literal screens inside them. This is most obvious with stuff like StarVR, where the screens are angled to provide a wide field of view.
(Score: 2) by Tork on Friday July 15 2016, @12:07AM
The different scene seen in each eye is exactly as complicated to render on the same screen as it would be to render on two different screens.
Weeeeelllll.... sort of. I'm gonna be a little pedantic here. You are correct that they have to render two screens, but I want to nitpick the statement that each eye is exactly as complicated to render. You are right that it will have roughly the same number of vertices, polygons, texels to fill, etc. But there is one key difference: The deformation of the models, the placement of effects like particles, placement of the characters, etc, still only need to be calculated once. In other words, a very significant portion of the work does not actually need to be done twice. Just reuse that data and slide the camera over and render one more time. In practical terms, what I'm saying is that you could have a game that renders at exactly 60.000 frames per second, but a stereo render of it may not actually be exactly half that. It could end up being 49.234 fps.
It all really depends on what parts of the game actually generate significant latency.
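Putting toy numbers on that split between once-per-frame and once-per-eye work (the millisecond figures are invented for illustration, not measured from any engine):

```python
# Toy cost model: per-frame work (simulation, animation, particle placement)
# is paid once, per-eye work (the actual draw calls) is paid twice, so stereo
# costs less than 2x mono. The millisecond figures are invented, not measured.

shared_ms = 6.0    # physics, animation, character/particle placement (once)
per_eye_ms = 5.0   # vertex/fragment work for one eye's view

mono_ms = shared_ms + per_eye_ms          # 11 ms -> ~90.9 FPS
stereo_ms = shared_ms + 2 * per_eye_ms    # 16 ms -> ~62.5 FPS, not ~45.5

print(f"mono:   {1000 / mono_ms:.1f} FPS")
print(f"stereo: {1000 / stereo_ms:.1f} FPS (more than half of mono)")
```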
Slashdolt Logic: "25 year old jokes about sharks and lasers are +5, Funny." 💩
(Score: 0) by Anonymous Coward on Thursday July 14 2016, @05:03PM
From a signal bandwidth point of view, there is no difference between doubling the image size or doubling the framerate.
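A quick sanity check of that, counting raw pixels per second and ignoring compression, blanking intervals and per-frame overhead (illustrative numbers):

```python
# Raw pixel-rate check of the claim above (ignores compression, blanking
# intervals and any per-frame overhead; numbers are illustrative).

def pixel_rate(width, height, fps):
    return width * height * fps

base = pixel_rate(1920, 1080, 60)
double_resolution = pixel_rate(1920, 2160, 60)   # twice the pixels per frame
double_framerate = pixel_rate(1920, 1080, 120)   # twice the frames per second

assert double_resolution == double_framerate == 2 * base
```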
(Score: 3, Informative) by damnbunni on Thursday July 14 2016, @05:19PM
Try playing a plain ol' video game on a TV with motion compensation turned on.
Then you'll understand why it can't be used for VR.
(Score: 3, Informative) by Noldir on Thursday July 14 2016, @05:20PM
http://www.slashdot.org/story/313655 [slashdot.org]
Still, not lagging far behind!
(Score: 3, Insightful) by janrinok on Thursday July 14 2016, @05:31PM
How would we know? - most of us don't visit the other site anymore. We created this so that we didn't have to go to /.
(Score: 3, Informative) by Zz9zZ on Thursday July 14 2016, @07:13PM
Right?
I went there recently, the generic bulk comments were worse than the troll comments we get here! Also, the interspersed ads just make me ill. Hint to marketers: if you have to try and force users to see your ads, then you're doing it wrong!
~Tilting at windmills~
(Score: 2) by wonkey_monkey on Thursday July 14 2016, @09:15PM
How would we know? - most of us don't visit the other site anymore.
I never got asked to participate in that survey :(
systemd is Roko's Basilisk
(Score: 3, Funny) by wonkey_monkey on Thursday July 14 2016, @08:59PM
Still, not lagging far behind!
That's because we motion compensate our stories for smoothness.
systemd is Roko's Basilisk
(Score: 2) by Open4D on Thursday July 21 2016, @12:06AM
If anyone's interested, 2 years ago I did a highly unscientific survey comparing the two sites.
https://soylentnews.org/~TheRaven/journal/232#comment_sub_22004 [soylentnews.org]
(I started a thread on this user's journal entry and subsequently posted 13 additional comments to it.)
My final comment [soylentnews.org] contained this summary: "So out of all the stories posted on both sites, Slashdot probably gets about 55-60% of them first, and Soylent probably gets about 40-45% of them first - that'd be my guess."
(Score: 4, Funny) by DeathMonkey on Thursday July 14 2016, @05:41PM
Why Don't Graphics Cards for VR Use Motion Compensation Technology?
NO
Wait, am I doing this right?
(Score: 2) by wonkey_monkey on Thursday July 14 2016, @09:22PM
Actually, yeah. It's such a terrible idea - fundamentally incompatible with what VR has to do, in fact - that your answer is pretty much perfect.
systemd is Roko's Basilisk
(Score: 1, Funny) by Anonymous Coward on Thursday July 14 2016, @08:02PM
I'll tell you the answer tomorrow...after I've done some interpolating.
(Score: 4, Interesting) by wonkey_monkey on Thursday July 14 2016, @08:41PM
This raises an interesting question:
Well, no offence, but it raises an ignorant question. You don't understand VR or motion compensation well enough to see why it wouldn't work.
Firstly: motion compensation isn't really that great. It's not perfect, and its imperfections (any moderately fast movement over a detailed background, for example) look terrible. There are people uploading 60fps motion compensated movie trailers all over YouTube, and they all look pretty awful, and (in the case of films which actually were shot in a high framerate) give a very poor impression of the true look. Every other frame - especially in action scenes - becomes a mangled mess. Even in sedate scenes, it can look "off" where real, uncompensated high frame rate footage would not.
with no visible stutter or strobing whatsoever
Just not true.
Second: in order to motion compensate, you need to know the next frame. That means you're always a bit behind. This is why TVs have "game modes" which disable (among other things) motion compensation. On my TV it reduces latency from around 120ms to about 40ms. Still not ideal - games feel much "quicker" and smoother on a CRT - but better than before.
However, there is something VR can do. If you rotate your head and the GPU isn't quite keeping up just at that moment, instead of stuttering and repeating the previous frame verbatim, it can warp the previous image to match your new viewpoint to give you a quicker update on the new view, while it gets a properly rendered frame ready. It's called asynchronous timewarp.
At least I think that's what it does. Another idea might be to render each frame slightly bigger than needed, then the warping can be applied at the last moment to better match the current head position.
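That second idea, sketched very roughly (Python/NumPy, illustrative only; a real implementation would reproject with the proper rotation rather than sliding a crop window, but the render-wider-then-pick-the-window-late structure is the same):

```python
import numpy as np

def late_crop(oversized: np.ndarray, margin_px: int,
              d_yaw_rad: float, fov_h_rad: float) -> np.ndarray:
    """Sketch of "render bigger, warp/crop at the last moment".

    The frame was rendered with margin_px of extra width on each side.
    Just before scanout, slide the crop window by however far the head has
    turned since rendering started (horizontal only, small angles, no real
    perspective correction -- purely to illustrate the structure).
    """
    h, w = oversized.shape[:2]
    visible_w = w - 2 * margin_px
    px_per_rad = visible_w / fov_h_rad
    offset = int(round(d_yaw_rad * px_per_rad))
    offset = max(-margin_px, min(margin_px, offset))  # stay inside the margin
    start = margin_px + offset
    return oversized[:, start:start + visible_w]
```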
systemd is Roko's Basilisk
(Score: 2) by Whoever on Friday July 15 2016, @02:25AM
Is this for dogs, or humans? Humans cannot see changes at anything like that rate. It's a waste of electrons.
(Score: 2) by wonkey_monkey on Friday July 15 2016, @08:33AM
Humans cannot see changes at anything like that rate.
That's rather the point. You don't want them to see changes at all.
As smooth as 60fps might look to you on a TV screen, once you're trying to fool the brain into thinking it's inside a real environment, 90-120fps looks more realistic.
systemd is Roko's Basilisk
(Score: 0) by Anonymous Coward on Friday July 15 2016, @03:40PM
Also, 60Hz is visible to many humans. 72Hz, fewer; 75Hz, fewer; 80Hz, not very many; at 90Hz, I've not yet met someone who expressed being bothered by it.
60Hz is absolutely, absolutely vertigo-inducing. On North American power systems, lots of folks get headaches from the cheapest fluorescent lights with bad ballasts, because the underlying 60Hz AC leaks through.
(Score: 2) by wonkey_monkey on Friday July 15 2016, @03:56PM
Yeah, I just thought of this: if you wave your hand back and forth in front of a 60Hz CRT, you'll be able to see a strobe effect. That won't happen if a) you up the framerate or b) you up the persistence of the image (but then you run the risk of visible blurring).
systemd is Roko's Basilisk
(Score: 1) by anubi on Friday July 15 2016, @04:53AM
I am a little late on this topic, but while we are discussing displays and interpolation, does anyone know if a modern VGA monitor/display will fall back and display the old-school monochrome MDA images if I connect the sync and video lines up properly? (Note the horizontal sync on those old babies was only 15,750 Hz or so, nowhere even close to a modern display.)
I still have and support some legacy monochrome systems, and would love to toss the old monochrome CRTs and use a modern display.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 1) by gOnZo on Friday July 15 2016, @11:59AM
I doubt it, though you might get lucky and find one that does. Bottom line, if there is a way to save a buck by dropping 'features' that very (VERY) few people use, especially when it is a retrograde feature, manufacturers opt for saving the buck. There are forces at work too, but does it make sense for Joe Consumer to purchase a brand new color VGA monitor, and then connect it to their old mono graphic video card? Not much of a selling point.
I'm just amazed you can still get 5.25" floppy drives to work...
...or has calcification now re-classified these as 'hard' drives?
(Score: 1) by gOnZo on Friday July 15 2016, @12:27PM
The DVI limitation he refers to means that you have to ensure the monitor has either the standard HD DB15 VGA connector, OR a DVI-A (analog-capable DVI) connector plus a DVI-to-VGA adapter (purely mechanical = cheap).
(Score: 1) by anubi on Saturday July 16 2016, @05:03AM
Thanks for the YouTube links. I had seen those offboard converter boxes; however, I figured that since someone already had a frame scaler running that takes every VGA mode out there and scales it to the particular LCD they used, and since the new LCDs have far more resolution than the old MDA, they would just throw it in. At least the edges of the sync pulses are consistent, and any positioning of the video content (phasing to sync) could be done by the customer, with such a setup saved like any other setup.
Kinda like playing an old retro Atari game on a modern PC... but in my case it's old stuff like CAM machines, and several of my old DOS tools ran a debug screen on mono while the program itself wrote to VGA.
I am loath to part with my old DOS stuff, as I consider that I at least understand and trust my old stuff far more than I trust this new stuff that comes pre-loaded with malware I cannot remove. It may be like comparing a bike to a car, but if the government comes in and forces cars to be licensed while the bikes are not, then someone else has control over my ride if I choose a car - the use of the car can be denied me if I fail to do something someone else wants me to do.
I already see strong economic forces at work, working with my government to shield them from lawsuits should they decide to use their computing systems to enforce their business model, while holding me as a criminal if I work around it. For most of the stuff I do, I do not need pretty pictures or CPU-intensive stuff... rather, most of it is quite simple robotics-type stuff.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 1) by gOnZo on Friday July 15 2016, @12:36PM
I'm not current on trends in VR tech. It is possible this is already implemented in the current VR equipment, but I don't think so, since I've not heard of them using eye-tracking tech.
There was talk about using the physiology of the human eye to effectively boost frame rates.
You incorporate eye-tracking in the VR headset, then only render the area of the screen that is focused on the fovea of the eye at maximum resolution.
The area of the screen NOT centered on the fovea can be rendered at much lower resolution, since the eye only uses peripheral areas for motion detection.
This approach still requires additional hardware, but will not suffer from latency issues.
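A toy model of the potential saving (the panel size, fovea radius and peripheral resolution factor below are all made-up illustrative numbers, not any headset's spec):

```python
import math

def foveated_pixel_cost(width, height, fovea_radius_px, peripheral_scale=0.25):
    """Pixels actually shaded when only the foveal circle is full resolution.

    peripheral_scale is the per-axis resolution factor outside the fovea,
    so the peripheral cost falls with its square.
    """
    total = width * height
    fovea = min(total, math.pi * fovea_radius_px ** 2)
    periphery = (total - fovea) * peripheral_scale ** 2
    return fovea + periphery

full = 2160 * 1200   # made-up headset panel, both eyes combined
shaded = foveated_pixel_cost(2160, 1200, fovea_radius_px=300)
print(f"foveated rendering shades ~{shaded / full:.0%} of the full-res pixel count")
```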