VR rivals come together to develop a single-cable spec for VR headsets
Future generations of virtual reality headsets for PCs could use a single USB Type-C cable for both power and data. That's thanks to a new standardized spec from the VirtualLink Consortium, a group made up of GPU vendors AMD and Nvidia and virtual reality rivals Valve, Microsoft, and Facebook-owned Oculus.
The spec uses the USB Type-C connector's "Alternate Mode" capability to implement different data protocols—such as Thunderbolt 3 data or DisplayPort and HDMI video—over the increasingly common cables, combined with Type-C's support for power delivery. The new headset spec combines four lanes of HBR3 ("high bit rate 3") DisplayPort video (for a total of 32.4 gigabits per second of video data) with a USB 3.1 Gen 2 (10 gigabit per second) data channel for sensors and on-headset cameras, plus 27 W of electrical power.
That much video data is sufficient for two 3840×2160 streams at 60 frames per second, or even higher frame rates if Display Stream Compression is also used. Drop the resolution to 2560×1440, and two uncompressed 120 frame per second streams would be possible.
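As a sanity check on those figures, here is a minimal sketch (Python; not from the VirtualLink announcement) of the uncompressed payload arithmetic. It assumes 24 bits per pixel and ignores blanking intervals, while including the 8b/10b line coding that turns 32.4 Gbps of raw HBR3 bandwidth into roughly 25.9 Gbps of usable bandwidth.

```python
# Rough payload arithmetic for the VirtualLink video channel (sketch only).
# Assumes 24 bits per pixel and ignores blanking/protocol overhead.

HBR3_RAW_GBPS = 4 * 8.1                          # four HBR3 lanes at 8.1 Gbps each
HBR3_EFFECTIVE_GBPS = HBR3_RAW_GBPS * 8 / 10     # after 8b/10b line coding

def stream_gbps(width, height, fps, bpp=24, streams=2):
    """Uncompressed video payload in Gbps for `streams` eyes."""
    return streams * width * height * fps * bpp / 1e9

for w, h, fps in [(3840, 2160, 60), (2560, 1440, 120)]:
    need = stream_gbps(w, h, fps)
    print(f"2x {w}x{h}@{fps}: {need:.1f} Gbps "
          f"(fits in {HBR3_EFFECTIVE_GBPS:.1f} Gbps: {need < HBR3_EFFECTIVE_GBPS})")
```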
Framerate is too low, and it's not wireless. Lame.
VirtualLink website. Also at The Verge.
Related Stories
HTC has hit back against claims of declining VR sales figures:
The blog post in particular references a report from Digital Trends which talks about VR sales figures from Amazon, and proceeds to point out a number of ways in which the data presented could be misleading.
Several points made by HTC Vive are ones that have also been addressed by VRFocus, as seen in an article about the modern VR cycle and some comments in the weekly VR vs. article. HTC Vive were not pulling punches right from the very start, even saying in the introduction: "Analyst reports are in and apparently, it's curtains for Virtual Reality (VR). Pardon us if we're not heeding the alarms. News of the so-called death of VR comes once a year and is greatly exaggerated."
From there, the blog post proceeds in a point-by-point fashion, discussing how early consumer VR was largely driven by smartphone-based devices such as the Samsung Gear VR and Google Cardboard. Not only have these devices been superseded by standalone units like the Oculus Go, which offer a better visual experience, but the promotional offers which were available for phone launches have now long since passed. HTC Vive also point out that PC-based VR companies are yet to release any solid sales figures, and that much of the growth of premium VR has been centered around location-based VR centres, something which the Digital Trends report did not address.
Related: HTC's Vive Pro to Launch on April 5
Facebook Launches Oculus Go, a $200 Standalone VR Headset
VirtualLink Consortium Announces USB Type-C Specification for VR Headsets
NVIDIA Reveals Next-Gen Turing GPU Architecture: NVIDIA Doubles-Down on Ray Tracing, GDDR6, & More
The big change here is that NVIDIA is going to be including even more ray tracing hardware with Turing in order to offer faster and more efficient hardware ray tracing acceleration. New to the Turing architecture is what NVIDIA is calling an RT core, the underpinnings of which we aren't fully informed on at this time, but which serve as dedicated ray tracing processors. These processor blocks accelerate both ray-triangle intersection checks and bounding volume hierarchy (BVH) manipulation, the latter being a very popular data structure for storing objects for ray tracing.
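As an illustrative aside (not from the article), the ray-triangle check an RT core accelerates is conceptually the same test graphics programmers have long run in software, e.g. the Möller–Trumbore algorithm sketched below; the details of NVIDIA's hardware implementation are not public.

```python
# Illustrative software version of a single ray-triangle intersection test
# (Moller-Trumbore). Textbook code, not NVIDIA's hardware method; an RT core's
# job is to run tests like this, plus BVH traversal, in fixed-function hardware
# instead of on the shader cores.

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Return hit distance t along the ray, or None if there is no hit."""
    sub = lambda a, b: (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    cross = lambda a, b: (a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0])

    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, edge2)
    a = dot(edge1, h)
    if abs(a) < eps:                 # ray is parallel to the triangle plane
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:           # outside the triangle (barycentric u)
        return None
    q = cross(s, edge1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:       # outside the triangle (barycentric v)
        return None
    t = f * dot(edge2, q)
    return t if t > eps else None    # hit must be in front of the ray origin

print(ray_triangle((0, 0, -1), (0, 0, 1), (0, 0, 0), (1, 0, 0), (0, 1, 0)))  # 1.0
```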
NVIDIA is stating that the fastest Turing parts can cast 10 billion (giga) rays per second, which, compared to unaccelerated Pascal, is a 25x improvement in ray tracing performance.
The Turing architecture also carries over the tensor cores from Volta, and indeed these have even been enhanced over Volta. The tensor cores are an important aspect of multiple NVIDIA initiatives. Along with speeding up ray tracing itself, NVIDIA's other tool in their bag of tricks is to reduce the amount of rays required in a scene by using AI denoising to clean up an image, which is something the tensor cores excel at. Of course that's not the only feature tensor cores are for – NVIDIA's entire AI/neural networking empire is all but built on them – so while not a primary focus for the SIGGRAPH crowd, this also confirms that NVIDIA's most powerful neural networking hardware will be coming to a wider range of GPUs.
New to Turing is support for a wider range of precisions, and as such the potential for significant speedups in workloads that don't require high precisions. On top of Volta's FP16 precision mode, Turing's tensor cores also support INT8 and even INT4 precisions. These are 2x and 4x faster than FP16 respectively, and while NVIDIA's presentation doesn't dive too deep here, I would imagine they're doing something similar to the data packing they use for low-precision operations on the CUDA cores. And without going too deep ourselves here, while reducing the precision of a neural network has diminishing returns – by INT4 we're down to a total of just 16(!) values – there are certain models that really can get away with this very low level of precision. And as a result the lower precision modes, while not always useful, will undoubtedly make some users quite happy at the throughput, especially in inferencing tasks.
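To make the 16-value point concrete, here is a toy sketch of symmetric quantization to signed INT4; this is purely illustrative and not NVIDIA's actual packing or calibration scheme.

```python
# Toy symmetric quantization to signed INT4 (values -8..7, i.e. 16 levels).
# Purely illustrative; real frameworks use calibrated, often per-channel scales.

def quantize_int4(weights):
    scale = max(abs(w) for w in weights) / 7.0   # map the largest weight to +/-7
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

w = [0.91, -0.42, 0.07, -0.88, 0.33]
q, s = quantize_int4(w)
print(q)                 # [7, -3, 1, -7, 3]
print(dequantize(q, s))  # coarse reconstruction of the original weights
```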
Also of note is the introduction of GDDR6 into some GPUs. The NVIDIA Quadro RTX 8000 will come with 48 GB of GDDR6 memory and a total memory bandwidth of 672 GB/s, which compares favorably to previous-generation GPUs featuring High Bandwidth Memory. Turing supports the recently announced VirtualLink, and the video encoder block has been updated to include support for 8K H.265/HEVC encoding.
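The 672 GB/s figure is consistent with a 384-bit memory bus running 14 Gbps GDDR6; those two parameters are my assumption rather than something stated above, but the arithmetic is simple:

```python
# Memory bandwidth back-of-envelope: bus width (bits) x per-pin rate (Gbps) / 8.
# The 384-bit bus and 14 Gbps GDDR6 are assumptions, not figures from the article.
bus_bits = 384
pin_gbps = 14
print(bus_bits * pin_gbps / 8)  # 672.0 GB/s
```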
Ray-tracing combined with various (4m27s video) shortcuts (4m16s video) could be used for good-looking results in real time.
Also at Engadget, Notebookcheck, and The Verge.
See also: What is Ray Tracing and Why Do You Want it in Your GPU?
Qualcomm's new Wi-Fi chips are meant to rival 5G speeds
Qualcomm is launching a family of chips that can add incredibly high-speed Wi-Fi — at speeds up to 10 gigabits per second — to phones, laptops, routers, and so on. It's the start of a new generation of this super-fast Wi-Fi standard, but it isn't going to be used to speed up your typical web browsing. And whether it catches on at all remains an open question.
[...] WiGig relies on a connection standard known as 802.11ad, which can hit speeds up to 5 gigabits per second over close to 10 meters, according to Dino Bekis, the head of Qualcomm's mobile and compute connectivity group. Qualcomm's latest chips move WiGig up to a new generation of that wireless standard, called 802.11ay, which Bekis says can reach speeds twice as fast, and can do so up to 100 meters away. The Wi-Fi Alliance says the new standard "increases the peak data rates of WiGig and improves spectrum efficiency and reduces latency."
So why not just use this as normal Wi-Fi, given how fast it gets? Because that range is only line-of-sight — when there's literally nothing in the way between the transmitter and the receiver. This high-speed Wi-Fi is based on millimeter wave radio waves in the 60GHz range. That means it's really fast, but also that it has a very difficult time penetrating obstacles, like a wall. That's a problem if you want a general purpose wireless technology.
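A textbook comparison of wavelength and free-space path loss (sketch below; nothing here is specific to Qualcomm's chips) shows part of why 60 GHz behaves so differently from ordinary 2.4/5 GHz Wi-Fi.

```python
# Wavelength and free-space path loss (FSPL) comparison for Wi-Fi bands.
# FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c); textbook formula,
# not anything specific to 802.11ad/ay hardware.
import math

C = 3e8  # speed of light, m/s

def fspl_db(freq_hz, dist_m):
    return (20 * math.log10(dist_m) + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / C))

for f in (2.4e9, 5e9, 60e9):
    print(f"{f/1e9:>4.1f} GHz: wavelength {C/f*1000:.1f} mm, "
          f"FSPL at 10 m {fspl_db(f, 10):.1f} dB")
```

On top of the extra ~28 dB of free-space loss relative to 2.4 GHz at the same distance, 5 mm waves are readily absorbed or blocked by walls and even people, which is why 802.11ad/ay links are effectively line-of-sight.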
[...] It's not clear if this will really catch on, though. While there's definitely room for adoption from VR gamers, the earlier version of this tech has found minimal pickup in its couple years on the market. Asus recently made interesting use of it with the ROG Phone, which is designed for gamers. And Qualcomm says it's working with Facebook to use this tech for its Terragraph project, which wirelessly delivers home internet connections.
With 5:1 "visually lossless" compression, 10 Gbps could be enough for 5K @ 120 Hz.
Also at Engadget.
Related: AMD Acquires Nitero, a Maker of Wireless Chips for VR Headsets
Intel to Cease Shipments of Current WiGig Products, Focus on WiGig for VR
VirtualLink Consortium Announces USB Type-C Specification for VR Headsets
Wi-Fi Alliance Rebrands Wi-Fi Standards
VESA Announces DisplayPort 2.0 Standard: Bandwidth For 8K Monitors & Beyond
While display interface standards are slow to move, at the same time their movement is inexorable: monitor resolutions continue to increase, as do refresh rates and color depths, requiring more and more bandwidth to carry signals for the next generation of monitors. Keeping pace with the demand for bandwidth, the DisplayPort standard, the cornerstone of PC display standards, has now been through several revisions since it was first launched over a decade ago. And now this morning the standard is taking its biggest leap yet with the release of the DisplayPort 2.0 specification. Set to offer nearly triple the available bandwidth of DisplayPort 1.4, the new revision of DisplayPort is also moving a number of previously optional features into the core standard, creating what's in many ways a new baseline for the interface.
The big news here, of course, is raw bandwidth. The current versions of DisplayPort – 1.3 & 1.4 – offer up to 32.4 Gbps of bandwidth – or 25.9 Gbps after overhead – which is enough for a standard 16.7 million color (24-bit) 4K monitor at up to 120Hz, or up to 98Hz for 1 billion+ (30-bit) 4K monitors. This is a lot of bandwidth, but it still isn't enough for the coming generation of monitors, including the likes of Apple's new 6K Pro Display XDR monitor, and of course, 8K monitors. As a result, the need for more display interface bandwidth continues to grow, with these next-generation monitors set to be the tipping point. And all of this is something that the rival HDMI Forum has already prepared for with their own HDMI 2.1 standard.
DisplayPort 2.0, in turn, is shooting for 8K and above. Introducing not just one but several different bitrate modes, the fastest mode in DisplayPort 2.0 will top out at 80 Gbps of raw bandwidth, about 2.5 times that of DisplayPort 1.3/1.4. On top of that, DisplayPort 2.0 also introduces a more efficient coding scheme, resulting in much less coding overhead. As a result, the effective bandwidth of the new standard will peak at 77.4 Gbps, which, at 2.98x the bandwidth of the previous standard, is just a hair under a full trebling of available bandwidth.
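Working through the quoted numbers (sketch below; the 8b/10b coding factor for HBR3 is an assumption about DisplayPort 1.3/1.4 rather than something stated above, and blanking overhead is ignored):

```python
# Where the ~2.98x figure comes from, and a quick 8K feasibility check.
DP14_EFFECTIVE = 32.4 * 8 / 10   # HBR3 after 8b/10b coding -> 25.92 Gbps
DP20_EFFECTIVE = 77.37           # DisplayPort 2.0 payload figure (77.4 Gbps rounded above)

print(f"DP 2.0 / DP 1.4 effective bandwidth: {DP20_EFFECTIVE / DP14_EFFECTIVE:.2f}x")

# Uncompressed 8K (7680x4320) at 60 Hz with 30-bit color, ignoring blanking:
payload_gbps = 7680 * 4320 * 60 * 30 / 1e9
print(f"8K60 @ 30 bpp payload: {payload_gbps:.1f} Gbps "
      f"(fits: {payload_gbps < DP20_EFFECTIVE})")
```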
Related: HDMI 2.1 Announced
Linux, Meet DisplayPort
HDMI 2.1 Released
VirtualLink Consortium Announces USB Type-C Specification for VR Headsets
(Score: 1, Insightful) by Anonymous Coward on Thursday July 19 2018, @09:12PM (11 children)
It's the lesser of evils. Making a standard for existing devices and those that are just about to come out will hopefully prevent a situation where everyone is doing their own proprietary ports.
It seems that >30-100 Gbps wireless is still at the academic working groups' R&D level: https://www.wireless100gb.de/index_en.html [wireless100gb.de]
(Score: 4, Informative) by takyon on Thursday July 19 2018, @09:46PM (10 children)
AMD Acquires Nitero, a Maker of Wireless Chips for VR Headsets [soylentnews.org]
Intel to Cease Shipments of Current WiGig Products, Focus on WiGig for VR [soylentnews.org]
Apple Reportedly Working on Combination VR-AR Headset [soylentnews.org] (grain of salt)
You're probably right about those high bitrates, but HDMI 2.1 [soylentnews.org] and DisplayPort 1.4 use 3:1 "visually lossless" display stream compression. The HDMI 2.1 table [hdmi.org] lists various resolutions, framerates, and chroma/color depths with bandwidth requirements. 5K/100-120 FPS can be done with as little as 40.1 Gbps. Cut that to 33% and it's only ~13.37 Gbps.
But we can go even further. The VESA Display Compression-M v1.1 (VDC-M) standard apparently has a 5:1 "visually lossless" compression ratio (probably should have submitted that story). That brings you down to ~8 Gbps, which is in the ballpark of 802.11ad, formerly known as WiGig. That uses a 60 GHz signal that will work within a room, which is good enough to give you some tetherless freedom.
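Working through those numbers (the 40.1 Gbps figure is the one cited from the HDMI 2.1 table above):

```python
# Compressed-link arithmetic for 5K @ 100-120 Hz using the figures cited above.
FIVE_K_120_GBPS = 40.1   # uncompressed bandwidth listed in the HDMI 2.1 table

for name, ratio in [("DSC 3:1", 3), ("VDC-M 5:1", 5)]:
    print(f"{name}: {FIVE_K_120_GBPS / ratio:.2f} Gbps")

# DSC 3:1   -> ~13.37 Gbps
# VDC-M 5:1 ->  ~8.02 Gbps, within reach of a WiGig-class (802.11ad) link
```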
(Score: 2) by Immerman on Thursday July 19 2018, @11:47PM (9 children)
That's great - but how much lag does that compression and decompression introduce? After all, lag is far more important to an enjoyable VR experience than resolution, and VR has to be rendered in real time, which means it also has to be compressed in real time, and every millisecond of lag increases nausea levels. And modern high-end video cards can't even render relatively simplistic cartoon graphics fast enough to eliminate the nausea.
I'm rather surprised that none of the VR headset makers use diffusion filters over the screen to eliminate the pixelation; after all, low detail isn't nearly as obvious as the "screen door" effect. At the simplest you just need a sheet of essentially very homogeneous, fine-grained tracing paper on top of the screen, offering just enough diffusion to blend together the edges of adjacent pixels. If you wanted to get fancy you could use a stamped micro-lens array instead of (or in addition to) the diffusion filter to perfectly blend the pixels together, but I'm not sure it would actually add much.
(Score: 2) by takyon on Friday July 20 2018, @12:20AM (1 child)
If you're at 90-120 FPS, you have a few milliseconds to work with. The headset could have a dedicated component for decompression. However, if the endgame is around 240 FPS [soylentnews.org], things are going to get tricky.
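For concreteness, the per-frame budgets work out as follows (trivial arithmetic; the whole motion-to-photon pipeline, not just compression and decompression, has to fit inside them):

```python
# Per-frame time budgets at common VR refresh rates.
for fps in (90, 120, 240):
    print(f"{fps} FPS -> {1000 / fps:.2f} ms per frame")
# 90 FPS  -> 11.11 ms
# 120 FPS ->  8.33 ms
# 240 FPS ->  4.17 ms
```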
A tightly integrated standalone VR SoC may be much better equipped to deal with latency than a gaming PC, and would not have to deal with the wireless transmission issue at all. Yes, that means that we eventually want GPU performance that exceeds today's big ~300 W cards in the footprint of a <5 W smartphone SoC.
If this is anything to go by [soylentnews.org], they want to reduce the screen door effect and other issues by massively increasing pixel density. However, the same exact company (LG) is also experimenting with using a diffusion filter [uploadvr.com]. So there are multiple ways in development to alleviate the issue.
(Score: 2) by Immerman on Friday July 20 2018, @01:43AM
Refresh rate and lag are completely independent properties - it's perfectly possible to run at 120FPS with several seconds of lag. Lots of TVs offer 120Hz refresh rates with well over 100ms of lag, meaning that the image displayed is more than 10 frames behind the image received (ignoring the fact that half the frames are actually computer-generated "filler" that were never received in the first place). But if you have more than about 15-20ms of lag in VR you're likely to get sick quickly, and as I recall you have to get down below about 3-5ms before the nausea disappears completely for most people. There is currently no consumer hardware that can get anywhere close to that, and I haven't heard any credible claims that refresh rate has any effect on how much lag is tolerable.
Shit, they were actually able to patent using a diffusion filter? I sure hope they did something really clever with it, because using a diffusion filter to eliminate screen doors and blockiness is incredibly obvious - I've been advocating it since before the first Oculus developer kits were shipped.
(Score: 2) by takyon on Friday July 20 2018, @12:42AM (6 children)
Couple things I forgot.
AMD, Nvidia, ARM, Qualcomm, and the rest of the gang ought to be heavily marketing, praying, sacrificing goats, etc. in order to ensure that VR takes off (meaning at least 1 billion casual users and many millions of heavy users). That's because the desire for ultra-realistic VR can push graphics hardware to its utmost limits, and ensure strong demand up until and even past the end of Moore's law scaling (meaning a point where we stop making smaller process nodes but also figure out how to start stacking layers of transistors for CPUs and GPUs, increasing peak performance by further orders of magnitude... while making interconnect issues even more apparent).
However, a caveat is that algorithmic tricks could reduce the hardware performance needed. For example, Google's [soylentnews.org] Seurat [google.com], or a variety of other graphics techniques and tricks that are discussed on this YouTube channel [youtube.com]. We can't forget that software is improving alongside hardware.
Had something else but forgot that too.
(Score: 2) by legont on Friday July 20 2018, @12:58AM (3 children)
What, mining is not enough for them? Or do they think mining is about to die?
"Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
(Score: 2) by takyon on Friday July 20 2018, @01:16AM (2 children)
Analysts Concerned About Crypto Mining Impact on AMD Share Price [cointelegraph.com]
Cryptocurrency Mining Affects AMD Stock while Nvidia Overestimates GPU Demand [ccn.com]
Major AMD Radeon Add-in-Board Partner TUL Corporation Lost 60% of Revenue In Wake Of Crypto Slump [wccftech.com]
Why GPU Pricing Is About to Drop Even Further [tomshardware.com]
Gamers’ Relief: Bitcoin Bear Period is Bringing Down High-End GPU Prices [ccn.com]
Could it come back with a vengeance, even years down the line [cointelegraph.com]? Sure. But there's no guarantee that miners will be using GPUs either.
(Score: 2) by legont on Friday July 20 2018, @01:28AM (1 child)
So, do you think mining is sick for the near future? (No trolling, honest.) I happen to think the same, but everyone around me is screaming rebound.
"Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
(Score: 2) by takyon on Friday July 20 2018, @02:36AM
If ordinary folks don't/can't use the blockchain funny money, and big investors are scared off by the regulators, then the bubble will burst. That's not to say it won't continue to exist, but it can continue to do so without having over $100 billion market cap (total of all cryptocurrencies).
https://www.ccn.com/cftc-issues-new-warning-on-utility-tokens-other-cryptocurrencies/ [ccn.com]
https://www.ccn.com/bitcoins-killer-app-is-ransomware-not-payments-stripe-coo/ [ccn.com]
(Score: 0) by Anonymous Coward on Friday July 20 2018, @06:39PM (1 child)
At least 1 billion? Are there even that many people who own game consoles or PCs?
(Score: 2) by takyon on Friday July 20 2018, @09:24PM
There are apparently at least 2 billion people using smartphones, and plans to reach more people. Smartphones can be turned into basic VR headsets with hardware as simple as cardboard and lenses.
https://qz.com/986042/google-goog-designed-android-go-to-win-over-the-next-billion-smartphone-users-in-the-developing-world/ [qz.com]
This source says over 4 billion Internet users, although many could be on dumbphones or other primitive devices. (Note that Mexico is included in Latin America and not North America in their table.)
https://www.internetworldstats.com/stats.htm [internetworldstats.com]
Instagram supposedly has 1 billion active users monthly. Facebook has over 2 billion.
My scenario gives AMD, Nvidia, et al. some years to figure out VR. Sales aren't fantastic [digitaltrends.com], content and games aren't ubiquitous, and 360-degree or 180-degree cameras aren't commonplace. But you have the Oculus Go lowering the cost for a decent standalone device to $200, and something like Gear VR or Daydream View can be had for $50-100 (or free as a promo offer).
At the end of the day, content is king, and there needs to be a lot more of it to keep people interested. 360-degree video is available, and live 360-degree video is possible. Oculus is trying to push live sports and entertainment. There are games and other stuff (e.g. SpaceEngine) that can be adapted to VR fairly easily. And of course, who can forget the porn. People can also use a headset for a virtual desktop or cinema (displaying high-resolution 2D video over a wide field of view, simulating a theater experience).
We're still at a point where early adopters are getting hosed by crappy products. For example, the Oculus Go has 3 rather than 6 degrees of freedom. Frame rates and resolutions could be a lot higher, and field of view could be much wider (~200 degrees should be the target, not ~100-110°). Combine that with a meager flow of new content, and I see no reason to get one right now.
(Score: 0) by Anonymous Coward on Sunday July 22 2018, @02:03PM
Interested people want to know