
posted by cmn32480 on Thursday August 13 2015, @10:33AM   Printer-friendly
from the don't-let-the-smoke-out-of-the-chips dept.

Tom's Hardware conducted an interview with Palmer Luckey, the founder of Oculus VR. The defining takeaway? Virtual reality needs as much graphics resources as can be thrown at it:

Tom's Hardware: If there was one challenge in VR that you had to overcome that you really wish wasn't an issue, which would it be?

Palmer Luckey: Probably unlimited GPU horsepower. It is one of the issues in VR that cannot be solved at this time. We can make our hardware as good as we want, our optics as sharp as we can, but at the end of the day we are reliant on how many flops the GPU can push, how high a framerate can it push? Right now, to get 90 frames per second [the minimum target framerate for Oculus VR] and very low latencies we need heaps of power, and we need to bump the quality of the graphics way down.

If we had unlimited GPU horsepower in everybody's computer, that will make our lives very much easier. Of course, that's not something we can control, and it's a problem that will be solved in due time.

TH: Isn't it okay to deal with the limited power we have today, because we're still in the stepping stones of VR technology?

PL: It's not just about the graphics being simple. You can have lots of objects in the virtual environment, and it can still cripple the experience. Yes, we are able to make immersive games on VR with simpler graphics on this limited power, but the reality is that our ability to create what we are imagining is being limited by the limited GPU horsepower.

[...] The goal in the long run is not only to sell to people who buy game consoles, but also to people who buy mobile phones. You need to expand so that you can connect hundreds of millions of people to VR. It may not necessarily exist in the form of a phone dropping into a headset, but it will be mobile technologies -- mobile CPUs, mobile graphics cards, etc.

In the future, VR headsets are going to have all the render hardware on board, no longer being hardwired to a PC. A self-contained set of glasses is a whole other level of mainstream.


An article about AMD's VR hype/marketing at Gamescom 2015 lays out the "problem" of achieving "absolute immersion" in virtual reality:

Using [pixels per degree (PPD)], AMD calculated the resolution required as part of the recipe for truly immersive virtual reality. There are two parts of the vision to consider: there's the part of human vision that we can see in 3D, and beyond that is our peripheral vision. AMD's calculations take into account only the 3D segment. For good measure, you'd expand it further to include peripheral vision. Horizontally, humans have a 120-degree range of 3D sight, with peripheral vision extending 40 degrees further each way, totaling 200 degrees of vision. Vertically, we are able to perceive up to 135 degrees in 3D.

With those numbers, and the resolution of the fovea (the most sensitive part of the eye), AMD calculated the required resolution. The fovea sees at about 60 PPD, which combined with 120 degrees of horizontal vision and 135 degrees of vertical vision, and multiplying that by two (because of two eyes) tallies up to a total of 116 megapixels. Yes, you read that right: 116 megapixels. The closest resolution by today's numbers is 16K, or around [132] megapixels.
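AMD's arithmetic is easy to reproduce. A quick sketch using only the figures quoted above (60 PPD at the fovea, 120 degrees horizontal, 135 degrees vertical, two eyes):

```python
# Back-of-the-envelope reproduction of AMD's "absolute immersion" resolution math.
PPD = 60      # pixels per degree resolved by the fovea (per the article)
H_FOV = 120   # degrees of horizontal stereoscopic (3D) vision
V_FOV = 135   # degrees of vertical vision
EYES = 2

pixels = (H_FOV * PPD) * (V_FOV * PPD) * EYES
print(f"required: {pixels / 1e6:.1f} megapixels")  # → required: 116.6 megapixels

# For comparison, a 16K (15360x8640) panel:
print(f"16K panel: {15360 * 8640 / 1e6:.1f} megapixels")  # → 16K panel: 132.7 megapixels
```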

While 90 Hz (albeit with reduced frame stuttering and minimal latency) is considered a starting point for VR, AMD ultimately wants to reach 200 Hz. Compare that to commercially available 2560×1440 @ 144 Hz monitors or HDMI 2.0 recently adding the ability to transport 3840×2160 @ 60 Hz. The 2016 consumer version of Oculus Rift will use two 1080×1200 panels, for a resolution of 2160×1200 refreshed at 90 Hz. That's over 233 million pixels per second. 116 megapixels times 200 Hz is 23.2 billion pixels per second. It's interesting (but no surprise) that AMD's endgame target for VR would require almost exactly one hundred times the graphics performance of the GPU powering the Rift, which recommends an NVIDIA GTX 970 or AMD Radeon R9 290.
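The throughput comparison works out neatly; checking the figures above:

```python
# Pixel-throughput comparison: Oculus Rift CV1 vs. AMD's stated endgame target.
rift = 2160 * 1200 * 90       # two 1080x1200 panels refreshed at 90 Hz
target = 116_640_000 * 200    # 116.64 megapixels at 200 Hz

print(f"Rift:   {rift / 1e6:.0f} Mpx/s")    # → Rift:   233 Mpx/s
print(f"Target: {target / 1e9:.1f} Gpx/s")  # → Target: 23.3 Gpx/s
print(f"Ratio:  {target / rift:.0f}x")      # → Ratio:  100x
```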

In conclusion, today's consumer VR might deliver an experience that feels novel and worth $300+ to people. It might not make them queasy due to the use of higher framerates and innovations like virtual noses. But if you have the patience to wait for 15 years or so of early adopters to pay for stone/bronze age VR, you can achieve "absolute immersion," also known as enlightenment.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Thursday August 13 2015, @04:51PM

    Now _this_ is the answer I was looking for.

    My quick thoughts are:

    - If you go all the way with that idea you might be able to achieve ridiculously low latency, because the needed pixel count is quite low indeed. The catch is that the approach absolutely requires such low latency in order to work at all. You might even need a display technology that can physically move the sharp spot around, because 16K@0.5ms sounds like a pretty tough requirement no matter how low-level the implementation is.

    - The opposite end is where you render plenty of extra area at high resolution and even render the rest at a "sufficient" level. If you moved your gaze immediately from one end to the other you'd see some blur, but smaller motions wouldn't cause it. This still needs maybe 1/10 of the pixels of full rendering.

    Skipping focus-following, there's another hack: render to a spheremap and use custom hardware very near the display to grid-warp it on the fly at very high fps, decoupling it from the "actual" render rate. Maybe not that sensible at 60fps, but it could be something at higher rates.
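For a rough sense of that "1/10" figure, here is a back-of-the-envelope budget for the second approach. The 30-degree high-resolution window and the 12 PPD fallback density are illustrative assumptions; the 60 PPD and 120x135 degree FOV figures come from the article:

```python
# Rough pixel budget for the "high-res window plus low-res rest" approach.
# Window size (30 deg) and fallback density (12 PPD) are illustrative assumptions.
FULL_PPD, LOW_PPD = 60, 12
H_FOV, V_FOV = 120, 135

window = (30 * FULL_PPD) ** 2                    # 30x30 deg tracked window at full density
rest = (H_FOV * LOW_PPD) * (V_FOV * LOW_PPD)     # whole FOV at 1/5 linear density
full = (H_FOV * FULL_PPD) * (V_FOV * FULL_PPD)   # full-density reference

print(f"fraction of full render: {(window + rest) / full:.2f}")  # → fraction of full render: 0.10
```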

  • (Score: 3, Interesting) by Immerman on Friday August 14 2015, @12:00AM

    UPDATE: After typing all this out I remembered that they switched axes when they introduced 4K, so a 16K screen would only have 8x the linear pixel count of 1080p, not 16x, and all the calculations I did would thus be for a 32K screen instead. I don't want to redo the math, so instead I'm changing all references from 16K to 32K....

    I don't think there's actually much of a catch. I'm assuming you mean 32K in the "4K UHD TV format" sense, and sure, that would be a ridiculous number of pixels to push every 0.5ms - what, 530M pixels, 2000 times a second? I want the hardware with that kind of pixel-pushing power!

    But there's a much easier way to do it, though we would need specialized screen-control electronics. Physically moving a sub-screen fast enough is unlikely to be viable thanks to that 900°/second saccade speed, but we don't need to, because current LCD refresh behavior is a historical artifact rather than a technological limitation. Let's say we have a physical 32K-equivalent display in the glasses; the trick is twofold:

    1) At the beginning of each frame we'd render the complete FOV at low resolution - *maybe* HD, and we could likely get away with even lower. Even stretched across a 200°x100° FOV, most of our retina has pretty lousy resolution, so it's a non-issue. Send that to the display, and have a simple ultra-low-lag upscaling circuit stretch the low-resolution framebuffer across the entire display. Even linear interpolation would likely be more than good enough; we might even get away with just giant square "superpixels". Either way, if it's implemented in the circuitry that transfers the framebuffer to the display matrix, it should incur minimal overhead.

    2) Then, multiple times per frame, we render a much smaller "fovea view" window at high resolution. None of the camera or scenery information changes; we're literally just rendering a tiny portion of the exact same image at much higher resolution. If each eye gets a half-billion pixels spread evenly across the roughly 20,000 square degrees of the FOV, then covering the maybe 4 square degrees of the fovea requires 5000x fewer pixels - about 100,000 total, maybe a 370x280 pixel block. We then send *just* this tiny updated patch of pixels to the display, and only that small sub-block of the display gets updated. This is already a common feature in the control circuitry of e-ink displays, where power conservation is a high priority.

    That's it. If we render 20 fresh fovea views per frame (~2M pixels) plus the single HD "background" (~2M pixels), we've still rendered and transmitted around 132x fewer pixels than fully rendering the 32K display just once. At a frame rate of 100 FPS, that's 2000 fovea views per second, or 0.5ms per update - delivering an effectively 32K, 100 FPS display at a pixel rate equivalent to driving a 1080p display at 200 FPS.
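Plugging the comment's own figures into a few lines bears this out (using the exact 370x280 patch and 1920x1080 background sizes rather than the rounded 2M values, which shifts the savings from ~132x to ~128x):

```python
# Reproducing the comment's foveated-rendering budget (figures from the comment).
full_display = 30720 * 17280   # "32K" panel: ~530 Mpx
fovea_view = 370 * 280         # one high-res fovea patch: ~100 kpx
background = 1920 * 1080       # one low-res full-FOV pass: ~2 Mpx
fps, views_per_frame = 100, 20

per_frame = views_per_frame * fovea_view + background
print(f"per frame: {per_frame / 1e6:.1f} Mpx")           # → per frame: 4.1 Mpx
print(f"savings: {full_display / per_frame:.0f}x")       # → savings: 128x
print(f"pixel rate: {per_frame * fps / 1e6:.0f} Mpx/s")  # vs 1080p @ 200 Hz ≈ 415 Mpx/s
```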