
posted by martyb on Friday May 01 2020, @12:17AM
from the all-your-bits-are-belong-to-us dept.

DisplayPort Alt Mode 2.0 Spec Released: Defining Alt Mode for USB4

As the tech industry gears up for the launch of the new USB4 standard, a few more parts first need to fall into place. Along with the core specification itself, there is the matter of alternate modes, which add further functionality to USB Type-C host ports by allowing the data pins to be used to carry other types of signals. Keeping pace with the updates to USB4, some of the alt modes are being updated as well, and this process is starting with the granddaddy of them all: DisplayPort Alt Mode.

The very first USB-C alt mode, DisplayPort Alt Mode was introduced in 2014. By remapping the USB-C high speed data pins from USB data to DisplayPort data, it became possible to use a USB-C port as a DisplayPort video output, and in some cases even mix the two to get both USB 3.x signaling and DisplayPort signaling over the same cable. As a result of DisplayPort Alt Mode's release, the number of devices with video output has exploded, and in laptops especially, this has become the preferred mode for driving video outputs when a laptop doesn't include a dedicated HDMI port.

If you're willing to accept Display Stream Compression... New DisplayPort spec enables 16K video over USB-C

VESA press release.

Previously: Forget USB 3.2: Thunderbolt 3 Will Become the Basis of USB 4
DisplayPort 2.0 Announced, Triples Bandwidth to ~77.4 Gbps for 8K Displays
Speed-Doubling USB4 is Ready -- Now We Just Have to Wait for Devices

Original Submission

Related Stories

Forget USB 3.2: Thunderbolt 3 Will Become the Basis of USB 4 23 comments

USB 3.2 Gen 2×2 isn't even my final form:

Fulfilling its 2017 promise to make Thunderbolt 3 royalty-free, Intel has given the specification for its high-speed interconnect to the USB Implementers Forum (USB-IF), the industry group that develops the USB specification. The USB-IF has taken the spec and will use it to form the basis of USB4, the next iteration of USB following USB 3.2.

Thunderbolt 3 not only doubles the bandwidth of USB 3.2 Gen 2×2, going from 20Gb/s to 40Gb/s, it also enables the use of multiple data and display protocols simultaneously. We would expect the USB4 specification to be essentially a superset of the Thunderbolt 3 and USB 3.2 specifications, thus incorporating both the traditional USB family of protocols (up to and including the USB 3.2 Gen 2×2) and the Thunderbolt 3 protocol in a single document. Down the line, this should translate into USB4 controllers that support the whole range of speeds.

Lost? Frightened? Confused? Good!

Also at AnandTech, The Verge, and Engadget.

Original Submission

DisplayPort 2.0 Announced, Triples Bandwidth to ~77.4 Gbps for 8K Displays 11 comments

VESA Announces DisplayPort 2.0 Standard: Bandwidth For 8K Monitors & Beyond

While display interface standards are slow to move, at the same time their movement is inexorable: monitor resolutions continue to increase, as do refresh rates and color depths, requiring more and more bandwidth to carry signals for the next generation of monitors. Keeping pace with the demand for bandwidth, the DisplayPort standard, the cornerstone of PC display standards, has now been through several revisions since it was first launched over a decade ago. And now this morning the standard is taking its biggest leap yet with the release of the DisplayPort 2.0 specification. Set to offer nearly triple the available bandwidth of DisplayPort 1.4, the new revision of DisplayPort is also moving a number of previously optional features into the core standard, creating what's in many ways a new baseline for the interface.

The big news here, of course, is raw bandwidth. The current versions of DisplayPort – 1.3 & 1.4 – offer up to 32.4 Gbps of bandwidth – or 25.9 Gbps after overhead – which is enough for a standard 16.7 million color (24-bit) monitor at up to 120Hz, or up to 98Hz for 1 billion+ (30-bit) monitors. This is a lot of bandwidth, but it still isn't enough for the coming generation of monitors, including the likes of Apple's new 6K Pro Display XDR monitor, and of course, 8K monitors. As a result, the need for more display interface bandwidth continues to grow, with these next-generation monitors set to be the tipping point. And all of this is something that the rival HDMI Forum has already prepared for with their own HDMI 2.1 standard.

DisplayPort 2.0, in turn, is shooting for 8K and above. Introducing not just one but a few different bitrate modes, the fastest mode in DisplayPort 2.0 will top out at 80 Gbps of raw bandwidth, about 2.5 times that of DisplayPort 1.3/1.4. Layered on that, DisplayPort 2.0 also introduces a more efficient coding scheme, resulting in much less coding overhead. As a result, the effective bandwidth of the new standard will peak at 77.4 Gbps; at 2.98x the bandwidth of the previous standard, that is just a hair under a full trebling of available bandwidth.
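The bandwidth figures above can be sanity-checked with a quick sketch. This assumes uncompressed video bandwidth is simply width × height × refresh rate × bits per pixel, ignoring blanking intervals and link-layer framing; the 77.4 and 25.9 Gbps constants are the effective rates quoted in the article, not values computed here.

```python
# Back-of-the-envelope check (ignores blanking intervals and audio/metadata):
# uncompressed video bandwidth = pixels per frame * refresh rate * bits per pixel.
DP20_EFFECTIVE_GBPS = 77.4  # peak effective rate quoted for DisplayPort 2.0
DP14_EFFECTIVE_GBPS = 25.9  # DisplayPort 1.3/1.4 after coding overhead

def video_gbps(width, height, hz, bits_per_pixel):
    """Raw uncompressed bandwidth of a video mode, in Gbit/s."""
    return width * height * hz * bits_per_pixel / 1e9

# 8K (7680x4320) @ 60 Hz with 30-bit (HDR) color:
mode = video_gbps(7680, 4320, 60, 30)
print(f"8K60 HDR needs ~{mode:.1f} Gbps")           # ~59.7 Gbps
print("fits DP 2.0:", mode <= DP20_EFFECTIVE_GBPS)  # True
print("fits DP 1.4:", mode <= DP14_EFFECTIVE_GBPS)  # False
```

Even this crude estimate shows why 8K at 60 Hz with HDR color was out of reach for DisplayPort 1.4 without compression, but fits comfortably inside DisplayPort 2.0's effective rate.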

Related: HDMI 2.1 Announced
Linux, Meet DisplayPort
HDMI 2.1 Released
VirtualLink Consortium Announces USB Type-C Specification for VR Headsets

Original Submission

Speed-Doubling USB4 is Ready -- Now We Just Have to Wait for Devices 21 comments

As reported at C|net, USB4 is ready to go.

USB4 is done, the group developing the next version of the immensely successful USB connector technology said Tuesday. USB4 doubles speeds compared to today's fastest USB 3.2 by incorporating Intel's speedy Thunderbolt technology that you already see on high-end laptops and peripherals. The USB Implementers Forum announced the completion of the technical specification Tuesday, a move that frees hardware and software engineers to get cracking building the actual products to support it.

Today's USB 3.2, which enables data transfer speeds up to 20 gigabits per second, is still something of a rarity; most of us have earlier versions of the technology that works at 5Gbps or 10Gbps. USB4 promises a speed boost to 40Gbps, helpful for things like using multiple external displays or fetching files from external hard drives.

What is the Serial Bus equivalent of, "Looks like I'm going to have to buy the White Album again."?

Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Friday May 01 2020, @03:35AM (2 children)

    by Anonymous Coward on Friday May 01 2020, @03:35AM (#988813)

    Alt Mode? Remapped data pins?

    They're just fucking with us now, aren't they?

  • (Score: 2) by Immerman on Friday May 01 2020, @05:31PM (4 children)

    by Immerman (3985) on Friday May 01 2020, @05:31PM (#989060)

    8k at 60Hz is nice I suppose, for that handful of people that can use such resolution. Okay, yeah, I suppose it'd be nice for a 40+" monitor - I can still make out individual pixels at 4k from up close.

    What I really want to know is, is the spec flexible enough to use the same data rate to deliver 2x4k at 120Hz? That starts being really useful for VR/AR applications. One thin cable that can transmit raw (= low lag) video along with all the tracking information.

    • (Score: 2) by takyon on Friday May 01 2020, @06:41PM (3 children)

      by takyon (881) on Friday May 01 2020, @06:41PM (#989097) Journal

      Probably: []

      Two 4K (3840 × 2160) displays @ 144 Hz and 8 bpc (24 bit/px, SDR) RGB/Y′CBCR 4:4:4 color (uncompressed)

      [...] Three 4K (3840 × 2160) displays @ 90 Hz and 10 bpc (30 bit/px, HDR) RGB/Y′CBCR 4:4:4 color (uncompressed)

      [...] When using only two lanes on the USB-C connector via DP Alt Mode to allow for simultaneous SuperSpeed USB data and video, DP 2.0 can enable such configurations as:

      Three 4K (3840 × 2160) displays @ 144 Hz and 10 bpc (30 bit/px, HDR) RGB/Y′CBCR 4:4:4 color (with DSC)

      Two 4K × 4K (4096 × 4096) displays (for AR/VR headsets) @ 120 Hz and 10 bpc (30 bit/px, HDR) RGB/Y′CBCR 4:4:4 color (with DSC)

      You said "raw"; they are saying "with DSC" for 4K×4K @ 120 Hz and simultaneous tracking information. I think 12 bits-per-color might be better for increasing color depth and using HDR at the same time; I'd have to research that.

      My position is that data cables are a nuisance for VR. Everything is going to go wireless or most likely standalone. And part of the way to achieve that will be by using foveated rendering to reduce the necessary on-board performance and bitrates. I don't even know if these video standards are set up to handle foveated rendering (changing mix of resolutions) whatsoever, so some proprietary solutions might end up being used.

      [SIG] 10/28/2017: Soylent Upgrade v14 []
      • (Score: 2) by Immerman on Friday May 01 2020, @09:22PM (2 children)

        by Immerman (3985) on Friday May 01 2020, @09:22PM (#989201)

        I agree that cables are a nuisance - unfortunately there don't seem to be any wireless technologies on the horizon that can supply the required bandwidth, and most compression algorithms introduce additional lag that can be ill afforded.

        Given the speed and erraticness of eye saccades I have my doubts about the long-term comfort of foveated rendering, but it's certainly an intriguing option. In a few generations of rendering hardware, and with fast enough tracking sensors and predictive algorithms (using eye speed and acceleration to predict where it will stop), it just might be possible to avoid "pop-in" as your eye comes to rest on its new target.

        I suspect that the video standards would be fine for it, at least assuming a "smart" headset controller. After all, it's basically just coordinated picture-in-picture, with a parallel data stream (easily encoded in unused portions of the image if nothing else) indicating where the focal picture should be positioned. Plus probably some upscaling and graceful blending between them, but that can be handled 100% in the headset, the video stream doesn't need to know it's happening.

        • (Score: 2) by takyon on Saturday May 02 2020, @01:22AM (1 child)

          by takyon (881) on Saturday May 02 2020, @01:22AM (#989281) Journal

          Eye tracking could be an easier problem than you would expect. See this earlier comment [] I made. That researcher used an off-the-shelf FOVE headset.

          There's also Tobii, which has made a 1200 Hz eye tracker for researchers and more modest hardware for VR:

          Tobii Pro Launches New Advancement in Eye Trackers for Behavioral Research []

          I don't think that kind of sampling/tracking rate is necessary at all for VR foveated rendering. It remains to be seen if the Hz even needs to be higher than the display's refresh rate.

          What do you do if the tracking is a little slow? Just render more of the frame in high resolution, with a tiny circle in 16K, 8K in a ring outside of that, and so on. The in-between resolutions would "catch" fast eye movement and could be adjusted as necessary by the headset manufacturer. Throw in screen %s for each of the resolutions, and you can come up with the total number of pixels and possibly get an idea of the performance required. 50% reduction in necessary performance has been estimated, but I still think it could cut it by 90% if done right. This would bolster any headset+GPU that adopts the technique, and especially help ARM-based standalone headsets.
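The "throw in screen %s for each of the resolutions" idea above can be sketched numerically. The zone sizes and resolution scales below are made-up assumptions purely for illustration, not figures from any actual headset or renderer:

```python
# Illustrative foveated-rendering pixel budget, assuming concentric zones
# that each cover a given fraction of the screen and are rendered at a
# fraction of full linear resolution (all numbers are assumed, not measured).
zones = [
    # (fraction of screen area, linear resolution scale)
    (0.05, 1.0),   # small foveal region at full resolution
    (0.15, 0.5),   # ring around it at half linear resolution
    (0.80, 0.25),  # periphery at quarter linear resolution
]

full_pixels = 7680 * 4320  # hypothetical 8K-per-eye panel

# Pixel cost scales with the square of the linear resolution scale.
rendered = sum(area * scale ** 2 for area, scale in zones) * full_pixels
saving = 1 - rendered / full_pixels
print(f"rendered pixels: {rendered / full_pixels:.1%} of full resolution")
print(f"performance saving: ~{saving:.0%}")
```

With these assumed zones the renderer only touches about 14% of the full pixel count, i.e. a saving in the 85-90% range, which is in the same ballpark as the 90% figure floated in the comment above.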

          I could even see desktop monitors integrate eye tracking and foveated rendering. Focus on something in the corner of your current display. Depending on the size and the distance from you to the display, you might not be able to see much detail at all in the opposite corner. This kind of thing would require a depth sensor next to the camera, but that technology was consumerized with Microsoft Kinect way back in 2010 and has been used in smartphones more recently.

          • (Score: 2) by Immerman on Saturday May 02 2020, @03:08AM

            by Immerman (3985) on Saturday May 02 2020, @03:08AM (#989317)

            Hmm, you might be right. I hadn't actually run the numbers before, and I must admit I greatly overestimated the saccade speed - it seems that even with very generous margins, so that you're unlikely to ever focus outside the foveated render, the benefits could be dramatic.

              Saccade speeds can exceed 500°/s: []
              So a very fast saccade and a rather nasty 20ms of lag could allow for a fovea displacement of 500°/s*.02s = 10° away from the "expected" position, or about 10% of the FOV of a current headset, or about 1% of the visible area.

            Meanwhile the resolution of the fovea itself falls off to about 1/5th the peak resolution at 10° from the focal point (20° total), so a 30° foveated region (~10% of the total screen area) would capture just about everything you could possibly see at higher resolution within the next 20ms. And that's assuming just a single jump to up to 5x the background resolution. At a more comfortable 10ms lag, and with just a smidge of saccade-prediction, you could probably narrow the necessary fovea-rendered region dramatically.
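The saccade arithmetic in this exchange is easy to sanity-check. The 500°/s saccade speed and the 100° field of view below are the assumed values from the discussion, not specified figures:

```python
# Sanity-check of the saccade numbers above (assumed values, not from any spec).
SACCADE_DEG_PER_S = 500.0  # fast saccade, per the linked figure
FOV_DEG = 100.0            # assumed headset field of view

def fovea_displacement(lag_s):
    """Degrees the gaze can travel during one round of tracking lag."""
    return SACCADE_DEG_PER_S * lag_s

for lag_ms in (20, 10, 5):
    d = fovea_displacement(lag_ms / 1000)
    print(f"{lag_ms} ms lag -> up to {d:.0f} deg of drift "
          f"({d / FOV_DEG:.0%} of a {FOV_DEG:.0f} deg FOV)")
```

Halving the tracking lag halves the worst-case drift, which is why the comment suggests a 10ms pipeline plus a little prediction could shrink the high-resolution region substantially.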

  • (Score: 2) by Username on Saturday May 02 2020, @12:04AM (2 children)

    by Username (4557) on Saturday May 02 2020, @12:04AM (#989268)

    I've been seeing monitors labeled as such, but in all my experience, HDR is when you take multiple exposures, usually incrementally stopped down, to create one photo. This effect usually works well on urban landscapes, or anything you want to look edgy or scary. With everything else it looks like crap. I find people usually prefer softer low stop photos with smooth gradient bokeh for anything with a face in it. Makes the subject pop out. I doubt people would want this effect on everything they see. I'd assume this term was repurposed as a generic buzzword to mean high contrast. But I could be wrong and this becomes a thing like 3D.

    • (Score: 2) by takyon on Saturday May 02 2020, @02:00AM

      by takyon (881) on Saturday May 02 2020, @02:00AM (#989293) Journal

      It's High Dynamic Range. It doesn't actually require any kind of real world photography, e.g. a video game can render areas using HDR, using whatever techniques they want to in real time (including "ray-tracing" now). You can see some examples [] here. Some games do it well, others suck at implementing it.

      Amazon and Netflix have pushed to make a lot of HDR content recently. The director of A Series of Unfortunate Events complained about how it looked [], basically calling it a gimmick that ruined the cinematography. YMMV.

      Displays are advertised as having a peak luminance, such as 400, 600, 800, 1,000 [], 3,000, or 10,000 nits (staring directly at the Sun would be 1 billion nits). When an HDR game, movie, or TV show is playing, it will have the normal color information for each pixel, as well as brightness. Emissive display technologies like OLED and MicroLED can adjust the brightness of every single pixel. LCD needs to split the TV/display's backlight into a small number of sections/zones with brightness levels, and has nowhere near the "infinite" contrast level [] of emissive, so it is ultimately a dead end technology used for cheap HDR implementations.

      If I got any of this info wrong, just know I don't own any HDR products.

    • (Score: 2) by Immerman on Saturday May 02 2020, @03:44AM

      by Immerman (3985) on Saturday May 02 2020, @03:44AM (#989326)

      You're describing "HDR effects", one way to generate HDR data from non-HDR originals, usually intentionally used to get fanciful effects.

      At its base though, HDR is really about how two different factors compare: the maximum difference between darkest and lightest part of the scene, and the smallest difference in brightness that can be displayed in the dark part of the scene.

      I'm not 100% sure that HDR video formats follow this, but as I recall HDR image formats are essentially floating point rather than fixed-point, so that you can simultaneously have a wide range of brightness, and extremely small steps in brightness on dark objects.

      The classic example is looking out from a dark cave into a brightly sunlit field. The field will be brilliantly bright enough to hurt your eyes if you look straight at it, while you'll also be able to see the surface details of the cave wall when looking directly at it. Without HDR you have a few options when capturing such an image:
          1) Use a lot more bits per pixel than normal, so you can capture the full range of brightness without losing detail in the darkness
          2) Try to do that without increasing the bpp, and get a lot of "banding" as a result, where gradual changes in brightness fall below the smallest representable brightness step - most especially visible in the dark areas, where detail is almost totally lost.
          3) Mute the brightness so that you can see the dark details clearly, but the bright areas are either very washed out, or far dimmer than they should be, depending on your strategy.

      HDR formats are generally a compromise between (1) and (2), using a few more bits per pixel, but also using a more non-linear "floating point" representation of values, so that the size of the smallest possible brightness step gets larger the brighter the point. Essentially, you get a lot more fine discrimination of brightness in the darker areas of the image where that fine detail is important, while also being able to capture very much brighter areas in the same image, without dramatically increasing the amount of data needed per pixel.
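The banding argument can be demonstrated with a toy model. This compares the smallest representable brightness step in a dark region for an 8-bit linear encoding versus an 8-bit power-law (gamma-like) encoding; it is purely illustrative and not the actual PQ or HLG transfer functions real HDR video uses:

```python
# Toy model of the banding argument: compare the smallest brightness step
# near a very dark linear-light value for (a) 8-bit linear coding and
# (b) 8-bit power-law ("gamma") coding. Not the real PQ/HLG HDR curves.
BITS = 8
LEVELS = 2 ** BITS - 1  # 255
GAMMA = 2.2

dark = 0.002  # a very dark linear-light value (0.2% of peak white)

# (a) Linear fixed-point: every step is the same size everywhere.
step_linear = 1 / LEVELS

# (b) Power-law coding: find the code value nearest `dark`, then measure
# the linear-light distance to the next code value up.
code = round((dark ** (1 / GAMMA)) * LEVELS)
step_gamma = ((code + 1) / LEVELS) ** GAMMA - (code / LEVELS) ** GAMMA

print(f"linear 8-bit step near dark: {step_linear:.5f}")
print(f"gamma 8-bit step near dark:  {step_gamma:.6f}")
# The nonlinear code's step is roughly an order of magnitude smaller in the
# darks, so smooth dark gradients band far less at the same bit depth.
```

The same bit budget buys much finer shadow gradation under the nonlinear encoding, at the cost of coarser steps near peak brightness where the eye is less sensitive to them anyway; that is the trade-off the comment describes.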