
SoylentNews is people

posted by mrpg on Thursday July 19 2018, @07:30PM   Printer-friendly
from the now-it-looks-like-the-21st-century dept.

VR rivals come together to develop a single-cable spec for VR headsets

Future generations of virtual reality headsets for PCs could use a single USB Type-C cable for both power and data. That's thanks to a new standardized spec from the VirtualLink Consortium, a group made up of GPU vendors AMD and Nvidia and virtual reality rivals Valve, Microsoft, and Facebook-owned Oculus.

The spec uses the USB Type-C connector's "Alternate Mode" capability to implement different data protocols (such as Thunderbolt 3 data or DisplayPort and HDMI video) over the increasingly common cables, combined with Type-C's support for power delivery. The new headset spec combines four lanes of HBR3 ("high bit rate 3") DisplayPort video (for a total of 32.4 gigabits per second of video data) with a USB 3.1 Gen 2 (10 gigabit per second) data channel for sensors and on-headset cameras, plus 27 W of electrical power.
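For reference, the 32.4 Gbps figure is just the four HBR3 lanes multiplied out (8.1 Gbps is the DisplayPort HBR3 per-lane line rate); a quick sketch:

```python
# Four DisplayPort HBR3 lanes at 8.1 Gbps each give the spec's 32.4 Gbps total.
lanes = 4
hbr3_lane_gbps = 8.1
total_video_gbps = lanes * hbr3_lane_gbps
print(round(total_video_gbps, 1))  # 32.4
```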

That much video data is sufficient for two 3840×2160 streams at 60 frames per second, or even higher frame rates if Display Stream Compression is also used. Drop the resolution to 2560×1440, and two uncompressed 120 frame per second streams would be possible.
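Those figures check out with back-of-the-envelope arithmetic (a sketch assuming 24 bits per pixel, i.e. 8-bit RGB, and ignoring link overhead and blanking intervals):

```python
# Raw video bandwidth for the two dual-stream scenarios described above.
def stream_gbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed bandwidth of one video stream, in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9

two_4k_60 = 2 * stream_gbps(3840, 2160, 60)       # ~23.9 Gbps
two_1440p_120 = 2 * stream_gbps(2560, 1440, 120)  # ~21.2 Gbps

# Both fit inside the 32.4 Gbps carried by four HBR3 lanes.
print(round(two_4k_60, 1), round(two_1440p_120, 1))
```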

Framerate is too low, and it's not wireless. Lame.

VirtualLink website. Also at The Verge.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1, Insightful) by Anonymous Coward on Thursday July 19 2018, @09:12PM (11 children)

    by Anonymous Coward on Thursday July 19 2018, @09:12PM (#709612)

    Framerate is too low

    It's the lesser of evils. Making a standard for existing devices and those that are just about to come out will hopefully prevent a situation where everyone is doing their own proprietary ports.

    it's not wireless

    It seems that >30-100 Gbps wireless is still at the academic working groups' R&D stage: https://www.wireless100gb.de/index_en.html [wireless100gb.de]

  • (Score: 4, Informative) by takyon on Thursday July 19 2018, @09:46PM (10 children)

    by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Thursday July 19 2018, @09:46PM (#709621) Journal

    AMD Acquires Nitero, a Maker of Wireless Chips for VR Headsets [soylentnews.org]

    Intel to Cease Shipments of Current WiGig Products, Focus on WiGig for VR [soylentnews.org]

    Apple Reportedly Working on Combination VR-AR Headset [soylentnews.org] (grain of salt)

    You're probably right about those high bitrates, but HDMI 2.1 [soylentnews.org] and DisplayPort 1.4 use 3:1 "visually lossless" display stream compression. The HDMI 2.1 table [hdmi.org] lists various resolutions, framerates, and chroma/color depths with bandwidth requirements. 5K/100-120 FPS can be done with as little as 40.1 Gbps. Cut that to 33% and it's only ~13.37 Gbps.

    But we can go even further. The VESA Display Compression-M v1.1 (VDC-M) standard apparently has a 5:1 "visually lossless" compression ratio (probably should have submitted that story). That brings you down to ~8 Gbps, which is in the ballpark of 802.11ad, formerly known as WiGig. That uses a 60 GHz signal that will work within a room, which is good enough to give you some tetherless freedom.
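As a sanity check on those numbers (a sketch: the 40.1 Gbps input is the HDMI 2.1 table figure quoted above, and the ratios are the claimed "visually lossless" targets):

```python
# Compression arithmetic from the comment: 5K @ 100-120 FPS needs 40.1 Gbps
# uncompressed, reduced by DSC's ~3:1 and VDC-M's ~5:1 ratios.
source_gbps = 40.1
dsc_gbps = source_gbps / 3     # ~13.37 Gbps with DSC 3:1
vdcm_gbps = source_gbps / 5    # ~8.02 Gbps with VDC-M 5:1, in 802.11ad ballpark
print(round(dsc_gbps, 2), round(vdcm_gbps, 2))
```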

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by Immerman on Thursday July 19 2018, @11:47PM (9 children)

      by Immerman (3985) on Thursday July 19 2018, @11:47PM (#709672)

      That's great - but how much lag does that compression and decompression introduce? After all, lag is far more important to an enjoyable VR experience than resolution, and VR has to be rendered in real time, which means it also has to be compressed in real time, and every millisecond of lag increases nausea levels. And modern high-end video cards can't even render relatively simplistic cartoon graphics fast enough to eliminate the nausea.

      I'm rather surprised that none of the VR headset makers use diffusion filters over the screen to eliminate the pixelation; after all, low detail isn't nearly as obvious as the "screen door" effect. At the simplest you just need a sheet of essentially very homogeneous, fine-grained tracing paper on top of the screen, offering just enough diffusion to blend together the edges of adjacent pixels. If you wanted to get fancy you could use a stamped micro-lens array instead of (or in addition to) the diffusion filter to perfectly blend the pixels together, but I'm not sure it would actually add much.

      • (Score: 2) by takyon on Friday July 20 2018, @12:20AM (1 child)

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Friday July 20 2018, @12:20AM (#709695) Journal

        If you're at 90-120 FPS, you have a few milliseconds to work with. The headset could have a dedicated component for decompression. However, if the endgame is around 240 FPS [soylentnews.org], things are going to get tricky.
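The frame-time budget being discussed is simple arithmetic (a sketch, not a latency model):

```python
# Milliseconds available per frame at each refresh rate; any compression,
# transmission, and decompression must fit inside (part of) this budget.
for fps in (90, 120, 240):
    budget_ms = 1000 / fps
    print(f"{fps} FPS -> {budget_ms:.2f} ms per frame")
```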

        A tightly integrated standalone VR SoC may be much better equipped to deal with latency than a gaming PC, and would not have to deal with the wireless transmission issue at all. Yes, that means that we eventually want GPU performance that exceeds today's big ~300 W cards in the footprint of a <5 W smartphone SoC.

        If this is anything to go by [soylentnews.org], they want to reduce the screen door effect and other issues by massively increasing pixel density. However, the same exact company (LG) is also experimenting with using a diffusion filter [uploadvr.com]. So there are multiple ways in development to alleviate the issue.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by Immerman on Friday July 20 2018, @01:43AM

          by Immerman (3985) on Friday July 20 2018, @01:43AM (#709739)

          Refresh rate and lag are completely independent properties: it's perfectly possible to run at 120 FPS with several seconds of lag. Lots of TVs offer 120 Hz refresh rates with well over 100 ms of lag, meaning that the image displayed is more than 10 frames behind the image received (ignoring the fact that half the frames are actually computer-generated "filler" that were never received in the first place). But if you have more than about 15-20 ms of lag in VR you're likely to get sick quickly, and as I recall you have to get down below about 3-5 ms before the nausea disappears completely for most people. There is currently no consumer hardware that can get anywhere close to that, and I haven't heard any credible claims that refresh rate has any effect on how much lag is tolerable.
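The TV example can be made concrete with a minimal sketch of the frames-behind arithmetic:

```python
# Lag and refresh rate are independent: frames-behind = lag / frame-time.
def frames_behind(lag_ms: float, fps: float) -> float:
    """How many frames stale the displayed image is at a given lag."""
    return lag_ms * fps / 1000

# A 120 Hz TV with 100 ms of lag shows an image 12 frames old.
print(frames_behind(100, 120))  # 12.0
```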

          Shit, they were actually able to patent using a diffusion filter? I sure hope they did something really clever with it, because using a diffusion filter to eliminate screen doors and blockiness is incredibly obvious; I've been advocating it since before the first Oculus developer kits were shipped.

      • (Score: 2) by takyon on Friday July 20 2018, @12:42AM (6 children)

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Friday July 20 2018, @12:42AM (#709708) Journal

        Couple things I forgot.

        AMD, Nvidia, ARM, Qualcomm, and the rest of the gang ought to be heavily marketing, praying, sacrificing goats, etc. in order to ensure that VR takes off (say, at least 1 billion casual users and many millions of heavy users). Because the desire for ultra-realistic VR can push graphics hardware to its utmost limits, and ensure strong demand up until and even past the end of Moore's law scaling (meaning a point where we stop making smaller process nodes but also figure out how to start stacking layers of transistors for CPUs and GPUs, increasing peak performance by further orders of magnitude... while making interconnect issues even more apparent).

        However, a caveat is that algorithmic tricks could reduce the hardware performance needed. For example, Google's [soylentnews.org] Seurat [google.com], or a variety of other graphics techniques and tricks that are discussed on this YouTube channel [youtube.com]. We can't forget that software is improving alongside hardware.

        Had something else but forgot that too.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]