
posted by mrpg on Thursday July 19 2018, @07:30PM
from the now-it-looks-like-the-21st-century dept.

VR rivals come together to develop a single-cable spec for VR headsets

Future generations of virtual reality headsets for PCs could use a single USB Type-C cable for both power and data. That's thanks to a new standardized spec from the VirtualLink Consortium, a group made up of GPU vendors AMD and Nvidia and virtual reality rivals Valve, Microsoft, and Facebook-owned Oculus.

The spec uses the USB Type-C connector's "Alternate Mode" capability to implement different data protocols—such as Thunderbolt 3 data or DisplayPort and HDMI video—over the increasingly common cables, combined with Type-C's support for power delivery. The new headset spec combines four lanes of HBR3 ("high bit rate 3") DisplayPort video (for a total of 32.4 gigabits per second of video data) with a USB 3.1 generation 2 (10 gigabit per second) data channel for sensors and on-headset cameras, plus 27W of electrical power.

That much video data is sufficient for two 3840×2160 streams at 60 frames per second, or even higher frame rates if Display Stream Compression is also used. Drop the resolution to 2560×1440, and two uncompressed 120 frame per second streams would be possible.
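
A quick back-of-the-envelope sketch shows how those resolutions fit within four lanes of HBR3 (assumptions for illustration: 24 bits per pixel, no blanking overhead beyond DisplayPort's 8b/10b line coding):

# Rough sanity check of the video bandwidth figures quoted above.
HBR3_RAW_GBPS = 4 * 8.1                      # four HBR3 lanes at 8.1 Gbit/s each = 32.4 Gbit/s
HBR3_PAYLOAD_GBPS = HBR3_RAW_GBPS * 8 / 10   # ~25.9 Gbit/s usable after 8b/10b line coding

def stream_gbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed bandwidth of one video stream, in Gbit/s."""
    return width * height * fps * bits_per_pixel / 1e9

for w, h, fps in [(3840, 2160, 60), (2560, 1440, 120)]:
    total = 2 * stream_gbps(w, h, fps)       # two streams, one per eye
    verdict = "fits" if total <= HBR3_PAYLOAD_GBPS else "needs Display Stream Compression"
    print(f"2 x {w}x{h} @ {fps} Hz: {total:.1f} Gbit/s ({verdict})")

# 2 x 3840x2160 @ 60 Hz:  23.9 Gbit/s (fits)
# 2 x 2560x1440 @ 120 Hz: 21.2 Gbit/s (fits)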

Framerate is too low, and it's not wireless. Lame.

VirtualLink website. Also at The Verge.


Original Submission

 
  • (Score: 2) by Immerman (3985) on Thursday July 19 2018, @11:47PM (#709672) (9 children)

    That's great - but how much lag does that compression and decompression introduce? After all, lag is far more important to an enjoyable VR experience than resolution, and VR has to be rendered in real time, which means it also has to be compressed in real time, and every millisecond of lag increases nausea levels. And modern high-end video cards can't even render relatively simplistic cartoon graphics fast enough to eliminate the nausea.

    I'm rather surprised that none of the VR headset makers use diffusion filters over the screen to eliminate the pixelation; after all, low detail isn't nearly as obvious as the "screen door" effect. At the simplest you just need a sheet of essentially very homogeneous, fine-grained tracing paper on top of the screen, offering just enough diffusion to blend together the edges of adjacent pixels. If you wanted to get fancy you could use a stamped micro-lens array instead of (or in addition to) the diffusion filter to perfectly blend the pixels together, but I'm not sure it would actually add much.

  • (Score: 2) by takyon (881) on Friday July 20 2018, @12:20AM (#709695) (1 child)

    If you're at 90-120 FPS, you have a few milliseconds to work with. The headset could have a dedicated component for decompression. However, if the endgame is around 240 FPS [soylentnews.org], things are going to get tricky.
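
    As a rough sketch of the per-frame budgets involved (simple arithmetic, not from the spec):

    # Per-frame time budget at the refresh rates mentioned above. Assumes the
    # whole render + compress + transmit + decompress pipeline has to fit
    # inside one refresh interval to avoid adding a frame of latency.
    for fps in (90, 120, 240):
        print(f"{fps} FPS -> {1000 / fps:.1f} ms per frame")
    # 90 FPS -> 11.1 ms
    # 120 FPS -> 8.3 ms
    # 240 FPS -> 4.2 ms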

    A tightly integrated standalone VR SoC may be much better equipped to deal with latency than a gaming PC, and would not have to deal with the wireless transmission issue at all. Yes, that means that we eventually want GPU performance that exceeds today's big ~300 W cards in the footprint of a <5 W smartphone SoC.

    If this is anything to go by [soylentnews.org], they want to reduce the screen door effect and other issues by massively increasing pixel density. However, the same exact company (LG) is also experimenting with using a diffusion filter [uploadvr.com]. So there are multiple ways in development to alleviate the issue.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by Immerman (3985) on Friday July 20 2018, @01:43AM (#709739)

      Refresh rate and lag are completely independent properties - it's perfectly possible to run at 120 FPS with several seconds of lag. Lots of TVs offer 120Hz refresh rates with well over 100ms of lag, meaning that the image displayed is more than 10 frames behind the image received (ignoring the fact that half the frames are actually computer-generated "filler" that were never received in the first place). But if you have more than about 15-20ms of lag in VR you're likely to get sick quickly, and as I recall you have to get down below about 3-5ms before the nausea disappears completely for most people. There is currently no consumer hardware that can get anywhere close to that, and I haven't heard any credible claims that refresh rate has any effect on how much lag is tolerable.
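
      To put illustrative numbers on that independence (example figures, not measurements):

      # Frames behind = end-to-end latency x refresh rate; refresh rate alone
      # tells you nothing about how stale the displayed image is.
      def frames_behind(latency_ms, refresh_hz):
          return latency_ms * refresh_hz / 1000

      print(frames_behind(100, 120))  # a 120 Hz TV with 100 ms of lag: 12 frames behind
      print(frames_behind(15, 90))    # ~15 ms of lag at 90 Hz: ~1.35 frames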

      Shit, they were actually able to patent using a diffusion filter? I sure hope they did something really clever with it, because using a diffusion filter to eliminate screen doors and blockiness is incredibly obvious - I've been advocating it since before the first Oculus developer kits were shipped.

  • (Score: 2) by takyon (881) on Friday July 20 2018, @12:42AM (#709708) (6 children)

    Couple things I forgot.

    AMD, Nvidia, ARM, Qualcomm, and the rest of the gang ought to be heavily marketing, praying, sacrificing goats, etc. in order to ensure that VR takes off (such as at least 1 billion casual and many millions of heavy users). Because the desire for ultra-realistic VR can push graphics hardware to its utmost limits, and ensure strong demand up until and even past the end of Moore's law scaling (meaning a point where we stop making smaller process nodes but also figure out how to start stacking layers of transistors for CPUs and GPUs, increasing peak performance by further orders of magnitude... while making interconnect issues even more apparent).

    However, a caveat is that algorithmic tricks could reduce the hardware performance needed. For example, Google's [soylentnews.org] Seurat [google.com], or a variety of other graphics techniques and tricks that are discussed on this YouTube channel [youtube.com]. We can't forget that software is improving alongside hardware.

    Had something else but forgot that too.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]