posted by Fnord666 on Saturday July 04 2020, @03:10PM   Printer-friendly
from the making-a-mountain-out-of-a-mole-hill dept.

Neural SuperSampling Is a Hardware Agnostic DLSS Alternative by Facebook

A new paper published by Facebook researchers just ahead of SIGGRAPH 2020 introduces neural supersampling, a machine learning-based upsampling approach not too dissimilar from NVIDIA's Deep Learning Super Sampling. However, neural supersampling does not require any proprietary hardware or software to run, and its results are quite impressive, as you can see in the example images; the researchers compare them to the quality we've come to expect from DLSS.

Video examples on Facebook's blog post.

The researchers use some extremely low-fi upscales to make their point, but you could also imagine scaling from a resolution like 1080p straight to 8K. Upscaling could be combined with eye tracking and foveated rendering to reduce rendering times even further.
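To put the "1080p straight to 8K" scenario above in concrete terms, here is a quick back-of-the-envelope calculation (the resolutions are standard; the code itself is just an illustration, not from the paper):

```python
# Quick arithmetic on the upscale factor for 1080p -> 8K.
resolutions = {
    "1080p": (1920, 1080),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
}

def pixels(name):
    """Total pixel count for a named resolution."""
    w, h = resolutions[name]
    return w * h

# Going from 1080p straight to 8K means the upsampler must produce
# 16x as many pixels (4x along each axis) from the rendered input.
factor = pixels("8K") / pixels("1080p")
print(factor)  # 16.0
```

Foveated rendering would shrink the rendered input even further, since only the small region under the user's gaze needs to be rendered (and upsampled) at full quality.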

Also at UploadVR and VentureBeat.

Journal Reference:
Lei Xiao, Salah Nouri, Matt Chapman, Alexander Fix, Douglas Lanman, Anton Kaplanyan. Neural Supersampling for Real-time Rendering. Facebook Research. https://research.fb.com/publications/neural-supersampling-for-real-time-rendering/

Related: With Google's RAISR, Images Can be Up to 75% Smaller Without Losing Detail
Nvidia's Turing GPU Pricing and Performance "Poorly Received"
HD Emulation Mod Makes "Mode 7" SNES Games Look Like New
Neural Networks Upscale Film From 1896 to 4K, Make It Look Like It Was Shot on a Modern Smartphone
Apple Goes on an Acquisition Spree, Turns Attention to NextVR


Original Submission

 
  • (Score: 2) by rleigh on Sunday July 05 2020, @10:18AM (1 child)

    by rleigh (4887) on Sunday July 05 2020, @10:18AM (#1016466) Homepage

    The problem with all of these "machine learning" reconstruction techniques lies in the assumptions they make. The trained model might be able to do a good job on the images it has been trained for, but this doesn't mean it can do a good job for any other images. What's shown here is truly impressive, but one does wonder where all the extra detail came from if it wasn't present in the source image(s).

    Take the image later in the article with the letters "SH" on the wall. The input image is just a handful of spotty red pixels. There's no way you can use a single frame to extrapolate the precise lines of the text. Looking at the video, the aliased text in the low resolution image over time might allow such a reconstruction, but the upsampled copy is good from the first frame on. I'm a bit sceptical of just how many baked in assumptions there are in the model to allow it to do that. Same applies to the serifs on the text in the first image as well as the very detailed textures on the floor and the couch.
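The commenter's point about aliased text "over time" refers to temporal supersampling: if the low-resolution sample grid is jittered by a sub-pixel offset each frame, samples from several frames can be combined to recover detail that no single frame contains. A minimal 1D sketch of that intuition (hypothetical illustration, assuming perfect reprojection between frames, which real renderers only approximate):

```python
# Hypothetical 1D illustration of temporal supersampling.
# A "high-res" signal is sampled at half resolution each frame,
# with the sample grid jittered by one sub-pixel per frame.
# Accumulating two jittered frames recovers every high-res sample.

high_res = [3, 1, 4, 1, 5, 9, 2, 6]  # ground-truth signal

def sample_low_res(signal, jitter):
    """Take every other sample, starting at the jitter offset."""
    return signal[jitter::2]

frame_0 = sample_low_res(high_res, 0)  # samples at even positions
frame_1 = sample_low_res(high_res, 1)  # samples at odd positions

# Scatter each frame's samples back to their original positions
# (perfect reprojection -- a strong assumption in practice).
reconstructed = [0] * len(high_res)
reconstructed[0::2] = frame_0
reconstructed[1::2] = frame_1

print(reconstructed == high_res)  # True after two frames
```

This is why a reconstruction that is sharp from the very first frame, before any temporal history exists, suggests the network is drawing on priors baked in during training rather than on information in the input.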

    The issue I have with much of this is that it's difficult to tell the difference between "reconstruction" and "invention". How do we distinguish an accurate reconstruction of the original image from mere filling-in with invented detail? For artistic purposes like games, it's not overly important, but for medical imaging it really does matter. Yet people are trying to use it for exactly that type of purpose, despite the strong possibility that the added detail is largely fictional. The same applies to using machine learning for medical diagnoses: you cannot invent details which are not present in the source image.

  • (Score: 2) by takyon on Sunday July 05 2020, @12:15PM

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Sunday July 05 2020, @12:15PM (#1016481) Journal

    The target of this paper is VR headsets, specifically Oculus which Facebook owns.

    For medical imaging, does "inventing" details help the algorithm get better detection rates and lower false positives? If not, then it's not an improvement.
