
posted by martyb on Friday May 01 2020, @12:17AM
from the all-your-bits-are-belong-to-us dept.

DisplayPort Alt Mode 2.0 Spec Released: Defining Alt Mode for USB4

As the tech industry gears up for the launch of the new USB4 standard, a few more parts first need to fall into place. Along with the core specification itself, there is the matter of alternate modes, which add further functionality to USB Type-C host ports by allowing the data pins to be used to carry other types of signals. Keeping pace with the updates to USB4, some of the alt modes are being updated as well, and this process is starting with the granddaddy of them all: DisplayPort Alt Mode.

The very first USB-C alt mode, DisplayPort Alt Mode was introduced in 2014. By remapping the USB-C high speed data pins from USB data to DisplayPort data, it became possible to use a USB-C port as a DisplayPort video output, and in some cases even mix the two to get both USB 3.x signaling and DisplayPort signaling over the same cable. As a result of DisplayPort Alt Mode's release, the number of devices with video output has exploded, and in laptops especially, this has become the preferred mode for driving video outputs when a laptop doesn't include a dedicated HDMI port.

If you're willing to accept Display Stream Compression... New DisplayPort spec enables 16K video over USB-C

VESA press release.

Previously: Forget USB 3.2: Thunderbolt 3 Will Become the Basis of USB 4
DisplayPort 2.0 Announced, Triples Bandwidth to ~77.4 Gbps for 8K Displays
Speed-Doubling USB4 is Ready -- Now We Just Have to Wait for Devices


Original Submission

  • (Score: 2) by takyon (881) on Saturday May 02 2020, @01:22AM (#989281) Journal (1 child)

    Eye tracking could be an easier problem than you would expect. See this earlier comment [soylentnews.org] I made. That researcher used an off-the-shelf FOVE headset.

    There's also Tobii, which has made a 1200 Hz eye tracker for researchers and more modest hardware for VR:

    Tobii Pro Launches New Advancement in Eye Trackers for Behavioral Research [tobii.com]

    I don't think that kind of sampling/tracking rate is necessary at all for VR foveated rendering. It remains to be seen whether the tracking rate even needs to be higher than the display's refresh rate.

    What do you do if the tracking is a little slow? Just render more of the frame in high resolution, with a tiny circle in 16K, 8K in a ring outside of that, and so on. The in-between resolutions would "catch" fast eye movement and could be adjusted as necessary by the headset manufacturer. Throw in screen percentages for each of the resolutions, and you can come up with the total number of pixels and possibly get an idea of the performance required. A 50% reduction in necessary performance has been estimated, but I still think it could be cut by 90% if done right. This would bolster any headset+GPU that adopts the technique, and especially help ARM-based standalone headsets.
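    As a very rough sketch of that arithmetic (every number here is a made-up assumption for illustration: a hypothetical 8K-per-eye panel and three arbitrary rings, not anything measured), something like this Python gives a feel for the pixel budget:

      # Back-of-the-envelope pixel budget for concentric-ring foveated rendering.
      # All figures are illustrative assumptions, not measurements.
      FULL_RES_PIXELS = 7680 * 4320          # hypothetical full-panel resolution per eye

      # (fraction of screen area, fraction of full linear resolution)
      rings = [
          (0.02, 1.00),   # tiny central circle at native detail
          (0.08, 0.50),   # surrounding ring at half linear resolution -> 1/4 the pixels
          (0.90, 0.25),   # periphery at quarter linear resolution -> 1/16 the pixels
      ]

      rendered = sum(area * scale ** 2 * FULL_RES_PIXELS for area, scale in rings)
      saving = 1 - rendered / FULL_RES_PIXELS
      print(f"Pixels actually shaded: {rendered:,.0f} ({saving:.0%} fewer than full resolution)")

    With those made-up rings the shaded-pixel count drops by roughly 90%, which is the kind of best-case figure I have in mind.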

    I could even see desktop monitors integrate eye tracking and foveated rendering. Focus on something in the corner of your current display. Depending on the size and the distance from you to the display, you might not be able to see much detail at all in the opposite corner. This kind of thing would require a depth sensor next to the camera, but that technology was consumerized with Microsoft Kinect way back in 2010 and has been used in smartphones more recently.
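    To put a number on that (the monitor size, viewing distance, and centred eye position are all assumptions I'm picking for illustration), the opposite corner ends up a long way from where you're focused:

      import math

      # Hypothetical 27" 16:9 monitor (~0.60 m x 0.34 m) viewed from 0.60 m away,
      # with the eye centred on the screen. All values are assumptions.
      W, H, D = 0.60, 0.34, 0.60   # width, height, viewing distance in metres

      def angle_between_corners(w, h, d):
          """Angle at the eye between opposite corners of a flat screen, in degrees."""
          a = (-w / 2, -h / 2, d)   # ray from the eye to one corner
          b = ( w / 2,  h / 2, d)   # ray from the eye to the opposite corner
          dot = sum(x * y for x, y in zip(a, b))
          na = math.sqrt(sum(x * x for x in a))
          nb = math.sqrt(sum(x * x for x in b))
          return math.degrees(math.acos(dot / (na * nb)))

      print(f"Opposite corner is ~{angle_between_corners(W, H, D):.0f}° from your focal point")

    That works out to roughly 60° of eccentricity, far outside the region where you can resolve fine detail, which is exactly the point above.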

  • (Score: 2) by Immerman (3985) on Saturday May 02 2020, @03:08AM (#989317)

    Hmm, you might be right. I hadn't actually run the numbers before, and I must admit I greatly overestimated the saccade speed. It seems that even with very generous margins, so that you're unlikely to ever focus outside the foveated render, the benefits could be dramatic:

      Saccade speeds can exceed 500°/s: https://en.wikipedia.org/wiki/Saccade#Timing_and_kinematics [wikipedia.org]
      So a very fast saccade combined with a rather nasty 20 ms of lag could allow for a fovea displacement of 500°/s × 0.02 s = 10° away from the "expected" position, or about 10% of the FOV of a current headset, or about 1% of the visible area.

    Meanwhile, the eye's resolution falls off to about 1/5th of its peak at 10° from the focal point (20° across), so a 30° foveated region (~10% of the total screen area) would capture just about everything you could possibly see at higher resolution within the next 20 ms. And that's assuming just a single step up to 5x the background resolution. At a more comfortable 10 ms of lag, and with just a smidge of saccade prediction, you could probably narrow the necessary fovea-rendered region dramatically.
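    A quick Python version of that back-of-the-envelope math (the 100° FOV, 20 ms lag, and square-FOV area approximation are the same rough assumptions as above):

      # Saccade drift vs. tracking lag, using the same rough numbers as above.
      SACCADE_DEG_PER_S = 500   # very fast saccade (Wikipedia figure linked above)
      LAG_S = 0.020             # assumed worst-case tracking + render lag (20 ms)
      FOV_DEG = 100             # assumed FOV of a current headset
      FOVEA_REGION_DEG = 30     # proposed high-resolution region

      drift = SACCADE_DEG_PER_S * LAG_S        # degrees the gaze can move before we react
      drift_linear = drift / FOV_DEG           # fraction of the FOV along one axis
      drift_area = drift_linear ** 2           # rough fraction of the visible area

      region_area = (FOVEA_REGION_DEG / FOV_DEG) ** 2   # screen-area share of the 30° region

      print(f"Gaze can drift {drift:.0f}° in {LAG_S * 1000:.0f} ms "
            f"(~{drift_linear:.0%} of the FOV, ~{drift_area:.1%} of the area)")
      print(f"A {FOVEA_REGION_DEG}° high-res region covers ~{region_area:.0%} of the screen")

    Halve the lag to 10 ms and the drift drops to 5°, so the high-res region could shrink accordingly.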