VR rivals come together to develop a single-cable spec for VR headsets
Future generations of virtual reality headsets for PCs could use a single USB Type-C cable for both power and data. That's thanks to a new standardized spec from the VirtualLink Consortium, a group made up of GPU vendors AMD and Nvidia and virtual reality rivals Valve, Microsoft, and Facebook-owned Oculus.
The spec uses the USB Type-C connector's "Alternate Mode" capability to implement different data protocols—such as Thunderbolt 3 data or DisplayPort and HDMI video—over the increasingly common cables, combined with Type-C's support for power delivery. The new headset spec combines four lanes of HBR3 ("high bitrate 3") DisplayPort video (for a total of 32.4 gigabits per second of video data) with a USB 3.1 generation 2 (10 gigabit per second) data channel for sensors and on-headset cameras, plus 27 W of electrical power.
That much video data is sufficient for two 3840×2160 streams at 60 frames per second, or even higher frame rates if Display Stream Compression is also used. Drop the resolution to 2560×1440, and two uncompressed 120 frame per second streams would be possible.
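As a sanity check on the figures in the summary (my own back-of-envelope arithmetic, not from the spec), the raw bandwidth for two uncompressed streams can be computed directly, assuming 24 bits per pixel and ignoring blanking intervals and protocol overhead:

```python
# Back-of-envelope check of the DisplayPort bandwidth figures above.
# Assumes 24 bits per pixel, uncompressed; ignores blanking/overhead.

def video_gbps(width, height, fps, streams=2, bpp=24):
    """Raw video bandwidth in gigabits per second."""
    return width * height * fps * bpp * streams / 1e9

# Four lanes of HBR3 carry 8.1 Gbit/s each, 8b/10b encoded,
# so roughly 80% of the raw rate is usable payload.
HBR3_RAW = 4 * 8.1             # 32.4 Gbit/s
HBR3_PAYLOAD = HBR3_RAW * 0.8  # ~25.9 Gbit/s

print(f"2x 3840x2160 @ 60:  {video_gbps(3840, 2160, 60):.1f} Gbit/s")
print(f"2x 2560x1440 @ 120: {video_gbps(2560, 1440, 120):.1f} Gbit/s")
print(f"HBR3 payload:       {HBR3_PAYLOAD:.1f} Gbit/s")
```

Both workloads (~23.9 and ~21.2 Gbit/s) fit under the ~25.9 Gbit/s usable payload, consistent with the summary's claims.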
Framerate is too low, and it's not wireless. Lame.
VirtualLink website. Also at The Verge.
That's great - but how much lag does that compression and decompression introduce? After all, lag is far more important to an enjoyable VR experience than resolution, and VR has to be rendered in real time, which means it also has to be compressed in real time, and every millisecond of lag increases nausea levels. And modern high-end video cards can't even render relatively simplistic cartoon graphics fast enough to eliminate the nausea.
I'm rather surprised that none of the VR headset makers use diffusion filters over the screen to eliminate the pixelation; after all, low detail isn't nearly as obvious as the "screen door" effect. At the simplest you just need a sheet of essentially very homogeneous, fine-grained tracing paper on top of the screen, offering just enough diffusion to blend together the edges of adjacent pixels. If you wanted to get fancy you could use a stamped micro-lens array instead of (or in addition to) the diffusion filter to blend the pixels together perfectly, but I'm not sure it would actually add much.
If you're at 90-120 FPS, you have a few milliseconds to work with. The headset could have a dedicated component for decompression. However, if the endgame is around 240 FPS [soylentnews.org], things are going to get tricky.
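To make "a few milliseconds" concrete (illustrative arithmetic only), the total per-frame budget for rendering, compression, transmission, and decompression is just the reciprocal of the frame rate:

```python
# Per-frame time budget at various refresh rates.
def frame_budget_ms(fps):
    """Milliseconds available per frame at a given frame rate."""
    return 1000.0 / fps

for fps in (90, 120, 240):
    print(f"{fps} FPS -> {frame_budget_ms(fps):.1f} ms per frame")
# 90 FPS leaves ~11.1 ms, 120 FPS ~8.3 ms, and 240 FPS only ~4.2 ms
# for everything combined, which is why 240 FPS gets tricky.
```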
A tightly integrated standalone VR SoC may be much better equipped to deal with latency than a gaming PC, and would not have to deal with the wireless transmission issue at all. Yes, that means that we eventually want GPU performance that exceeds today's big ~300 W cards in the footprint of a <5 W smartphone SoC.
If this is anything to go by [soylentnews.org], they want to reduce the screen door effect and other issues by massively increasing pixel density. However, the same exact company (LG) is also experimenting with using a diffusion filter [uploadvr.com]. So there are multiple ways in development to alleviate the issue.
Refresh rate and lag are completely independent properties: it's perfectly possible to run at 120 FPS with several seconds of lag. Lots of TVs offer 120 Hz refresh rates with well over 100 ms of lag, meaning the image displayed is more than 10 frames behind the image received (ignoring the fact that half the frames are actually computer-generated "filler" that were never received in the first place). But if you have more than about 15-20 ms of lag in VR you're likely to get sick quickly, and as I recall you have to get down below about 3-5 ms before the nausea disappears completely for most people. There is currently no consumer hardware that gets anywhere close to that, and I haven't heard any credible claims that refresh rate has any effect on how much lag is tolerable.
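The independence of the two properties can be made concrete with a one-liner (my own illustration): lag measured in frames is just latency divided by frame time, so a fast refresh rate does nothing to reduce latency in milliseconds.

```python
# Refresh rate and lag are independent: a display can refresh quickly
# while still showing old frames. Frames of lag = latency / frame time.
def frames_behind(lag_ms, refresh_hz):
    """How many frames old the displayed image is at a given latency."""
    return lag_ms * refresh_hz / 1000.0

print(frames_behind(100, 120))  # a 120 Hz TV with 100 ms lag: 12 frames behind
print(frames_behind(15, 120))   # the ~15 ms VR comfort threshold: under 2 frames
```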
Shit, they were actually able to patent using a diffusion filter? I sure hope they did something really clever with it, because using a diffusion filter to eliminate screen doors and blockiness is incredibly obvious; I've been advocating it since before the first Oculus developer kits shipped.
Couple things I forgot.
AMD, Nvidia, ARM, Qualcomm, and the rest of the gang ought to be heavily marketing, praying, sacrificing goats, etc. to ensure that VR takes off (say, at least 1 billion casual users and many millions of heavy users). The desire for ultra-realistic VR can push graphics hardware to its utmost limits and ensure strong demand up until, and even past, the end of Moore's law scaling (the point where we stop making smaller process nodes but figure out how to start stacking layers of transistors for CPUs and GPUs, increasing peak performance by further orders of magnitude... while making interconnect issues even more apparent).
However, a caveat is that algorithmic tricks could reduce the hardware performance needed. For example, Google's [soylentnews.org] Seurat [google.com], or a variety of other graphics techniques and tricks that are discussed on this YouTube channel [youtube.com]. We can't forget that software is improving alongside hardware.
Had something else but forgot that too.
push graphics hardware to its utmost limits
What, mining is not enough for them? Or do they think mining is about to die?
Analysts Concerned About Crypto Mining Impact on AMD Share Price [cointelegraph.com]
Cryptocurrency Mining Affects AMD Stock while Nvidia Overestimates GPU Demand [ccn.com]
Major AMD Radeon Add-in-Board Partner TUL Corporation Lost 60% of Revenue In Wake Of Crypto Slump [wccftech.com]
Why GPU Pricing Is About to Drop Even Further [tomshardware.com]
Gamers' Relief: Bitcoin Bear Period is Bringing Down High-End GPU Prices [ccn.com]
Could it come back with a vengeance, even years down the line [cointelegraph.com]? Sure. But there's no guarantee that miners will be using GPUs either.
So, do you think mining is sick for the near future? (No trolling, honest.) I happen to think the same, but everyone around me is screaming rebound.
If ordinary folks don't/can't use the blockchain funny money, and big investors are scared off by the regulators, then the bubble will burst. That's not to say it won't continue to exist, but it can continue to do so without having over $100 billion market cap (total of all cryptocurrencies).
https://www.ccn.com/cftc-issues-new-warning-on-utility-tokens-other-cryptocurrencies/ [ccn.com]
https://www.ccn.com/bitcoins-killer-app-is-ransomware-not-payments-stripe-coo/ [ccn.com]
At least 1 billion? Are there even that many people who own game consoles or PCs?
There are apparently at least 2 billion people using smartphones, and plans to reach more people. Smartphones can be turned into basic VR headsets with hardware as simple as cardboard and lenses.
This source says over 4 billion Internet users, although many could be on dumbphones or other primitive devices. (Note that Mexico is included in Latin America and not North America in their table.)
Instagram supposedly has 1 billion active users monthly. Facebook has over 2 billion.
My scenario gives AMD, Nvidia, et al. some years to figure out VR. Sales aren't fantastic [digitaltrends.com], and content and games aren't ubiquitous. 360-degree and 180-degree cameras aren't commonplace. But you have the Oculus Go lowering the cost of a decent standalone device to $200, and something like Gear VR or Daydream View can be had for $50-100 (or free as a promo offer).
At the end of the day, content is king, and there needs to be a lot more of it to keep people interested. 360-degree video is available, and live 360-degree video is possible. Oculus is trying to push live sports and entertainment. There are games and other stuff (e.g. SpaceEngine) that can be adapted to VR fairly easily. And of course, who can forget the porn. People can also use a headset for a virtual desktop or cinema (displaying high-resolution 2D video over a wide field of view, simulating a theater experience).
We're still at a point where early adopters are getting hosed by crappy products. For example, the Oculus Go has 3 rather than 6 degrees of freedom. Frame rates and resolutions could be a lot higher, and the field of view could be much wider (~200 degrees should be the target, not ~100-110 degrees). Combine that with a meager flow of new content, and I see no reason to get one right now.