
posted by martyb on Friday May 01 2020, @12:17AM
from the all-your-bits-are-belong-to-us dept.

DisplayPort Alt Mode 2.0 Spec Released: Defining Alt Mode for USB4

As the tech industry gears up for the launch of the new USB4 standard, a few more parts first need to fall into place. Along with the core specification itself, there is the matter of alternate modes, which add further functionality to USB Type-C host ports by allowing the data pins to be used to carry other types of signals. Keeping pace with the updates to USB4, some of the alt modes are being updated as well, and this process is starting with the granddaddy of them all: DisplayPort Alt Mode.

DisplayPort Alt Mode, the very first USB-C alt mode, was introduced in 2014. By remapping the USB-C high speed data pins from USB data to DisplayPort data, it became possible to use a USB-C port as a DisplayPort video output, and in some cases even mix the two to get both USB 3.x signaling and DisplayPort signaling over the same cable. As a result of DisplayPort Alt Mode's release, the number of devices with video output has exploded, and in laptops especially, this has become the preferred mode for driving video outputs when a laptop doesn't include a dedicated HDMI port.

If you're willing to accept Display Stream Compression... New DisplayPort spec enables 16K video over USB-C

VESA press release.

Previously: Forget USB 3.2: Thunderbolt 3 Will Become the Basis of USB 4
DisplayPort 2.0 Announced, Triples Bandwidth to ~77.4 Gbps for 8K Displays
Speed-Doubling USB4 is Ready -- Now We Just Have to Wait for Devices


  • (Score: 2) by Username (4557) on Saturday May 02 2020, @12:04AM (#989268)

    I've been seeing monitors labeled as such, but in all my experience, HDR is when you take multiple exposures, usually incrementally stopped down, to create one photo. This effect usually works well on urban landscapes, or anything you want to look edgy or scary. With everything else it looks like crap. I find people usually prefer softer low-stop photos with smooth gradient bokeh for anything with a face in it. Makes the subject pop out. I doubt people would want this effect on everything they see. I'm assuming this term was repurposed as a generic buzzword to mean high contrast. But I could be wrong and this could become a thing, like 3D.

  • (Score: 2) by takyon (881) <takyonNO@SPAMsoylentnews.org> on Saturday May 02 2020, @02:00AM (#989293) Journal

    It's High Dynamic Range. It doesn't actually require any kind of real-world photography; e.g., a video game can render areas using HDR with whatever techniques it wants in real time (including "ray-tracing" now). You can see some examples here [resetera.com]. Some games do it well, others suck at implementing it.

    Amazon and Netflix have pushed to make a lot of HDR content recently. The director of A Series of Unfortunate Events complained about how it looked [archive.org], basically calling it a gimmick that ruined the cinematography. YMMV.

    Displays are advertised as having a peak luminance, such as 400, 600, 800, 1,000 [soylentnews.org], 3,000, or 10,000 nits (staring directly at the Sun would be about 1 billion nits). When an HDR game, movie, or TV show is playing, it carries the normal color information for each pixel, as well as brightness. Emissive display technologies like OLED and MicroLED can adjust the brightness of every single pixel. LCD has to split the TV/display's backlight into a small number of sections/zones with individual brightness levels, and has nowhere near the "infinite" contrast level [soylentnews.org] of emissive displays, so it is ultimately a dead-end technology used for cheap HDR implementations.
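
    To put the contrast point in concrete terms, here's a minimal sketch (the nit values below are made up for illustration, not measurements of any real display):

    ```python
    # Contrast ratio is peak luminance divided by black level. An emissive
    # pixel (OLED/MicroLED) can switch fully off, so its black level is
    # effectively 0; an LCD's backlight zones always leak some light.
    def contrast_ratio(peak_nits: float, black_nits: float) -> float:
        if black_nits == 0.0:
            return float("inf")  # pixel fully off: "infinite" contrast
        return peak_nits / black_nits

    print(contrast_ratio(1000, 0.05))  # zoned-LCD-like guess: 20000:1
    print(contrast_ratio(1000, 0.0))   # OLED-like: inf
    ```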

    If I got any of this info wrong, just know I don't own any HDR products.

  • (Score: 2) by Immerman (3985) on Saturday May 02 2020, @03:44AM (#989326)

    You're describing "HDR effects", one way to generate HDR data from non-HDR originals, usually used intentionally to get fanciful effects.

    At its base, though, HDR is really about how two different factors compare: the maximum difference between the darkest and lightest parts of the scene, and the smallest difference in brightness that can be displayed in the dark parts of the scene.

    I'm not 100% sure that HDR video formats follow this, but as I recall HDR image formats are essentially floating point rather than fixed point, so that you can simultaneously have a wide range of brightness and extremely small steps in brightness on dark objects.
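
    A quick way to see the difference (a Python sketch; real HDR formats use 10- to 16-bit encodings, and plain Python floats stand in here only to show how the step size scales with magnitude):

    ```python
    import math

    # 8-bit fixed point spends the same step size everywhere in [0, 1];
    # a float's step (its ULP) shrinks with the value's magnitude, leaving
    # far finer gradations in the dark end of the range.
    FIXED8_STEP = 1.0 / 255.0  # uniform step, shadows and highlights alike

    for lum in (0.01, 0.9):  # a dark pixel vs a bright pixel
        print(f"luminance {lum}: fixed step {FIXED8_STEP:.2e}, "
              f"float step {math.ulp(lum):.2e}")
    ```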

    The classic example is looking out from a dark cave into a brightly sunlit field. The field will be bright enough to hurt your eyes if you look straight at it, while you'll also be able to see the surface details of the cave wall when looking directly at it. Without HDR you have a few options when capturing such an image:
        1) Use a lot more bits per pixel than normal, so you can capture the full range of brightness without losing detail in the darkness.
        2) Try to do that without increasing the bpp, and get a lot of "banding" as a result, where gradual changes in brightness fall below the smallest representable brightness step - most especially visible in the dark areas, where detail is almost totally lost (see the sketch after this list).
        3) Mute the brightness so that you can see the cave details clearly, but the bright areas are either very washed out, or far dimmer than they should be, depending on your strategy.
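
    Here's a small sketch of option (2): quantizing a smooth dark-to-bright ramp with too few bits collapses neighboring dark shades into bands (the bit depths, ramp, and "dark" threshold are all illustrative choices):

    ```python
    def quantize(value: float, bits: int) -> int:
        # Map a normalized luminance onto the nearest representable level.
        levels = (1 << bits) - 1
        return round(value * levels)

    ramp = [i / 1000 for i in range(1001)]  # smooth 0.0 .. 1.0 gradient
    dark = [v for v in ramp if v < 0.05]    # the cave-interior end: 50 samples

    for bits in (8, 12):
        distinct = len({quantize(v, bits) for v in dark})
        print(f"{bits}-bit: {distinct} distinct shades for {len(dark)} dark samples")
    ```

    At 8 bits the 50 dark samples collapse into about 13 shades - visible banding - while 12 bits keeps them all distinct.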

    HDR formats are generally a compromise between (1) and (2): they use a few more bits per pixel, but also a more non-linear "floating point" representation of values, so that the size of the smallest possible brightness step gets larger the brighter the point. Essentially, you get much finer discrimination of brightness in the darker areas of the image, where that fine detail is important, while also being able to capture very much brighter areas in the same image, without dramatically increasing the amount of data needed per pixel.
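
    As a sketch of that idea, here's a simple power curve standing in for real transfer functions like PQ (SMPTE ST 2084); the exponent and 10-bit depth are illustrative assumptions, not any particular format:

    ```python
    GAMMA = 1 / 2.4  # hypothetical encoding exponent
    LEVELS = 1023    # 10-bit code values

    def encode(linear: float) -> int:
        # Store brightness on a power curve: code values cluster in shadows.
        return round((linear ** GAMMA) * LEVELS)

    def decode(code: int) -> float:
        return (code / LEVELS) ** (1 / GAMMA)

    def step_size(linear: float) -> float:
        """Linear-light change covered by one code step near `linear`."""
        code = encode(linear)
        return decode(code + 1) - decode(code)

    for lum in (0.01, 0.5, 1.0):
        print(f"luminance {lum}: one code step ~ {step_size(lum):.2e}")
    # The step grows with brightness: ~1.6e-04 near black, ~2.3e-03 at peak,
    # i.e. fine shadow detail and coarse highlight steps from the same bits.
    ```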