
posted by janrinok on Thursday January 19 2017, @08:14PM   Printer-friendly
from the reduce-the-size-of-your-pron-storage dept.

With unlimited data plans becoming increasingly expensive, and some subscribers forced to give theirs up over heavy use, anything that reduces the amount of data we download is welcome. This is especially true for media such as images and video, and Google just delivered a major gain when it comes to viewing images online.

The clever scientists at Google Research have come up with a new technique for keeping image size to an absolute minimum without sacrificing quality. So good is this new technique that it promises to reduce the size of an image on disk by as much as 75 percent.

The new technique is called RAISR, which stands for "Rapid and Accurate Image Super-Resolution." Typically, reducing the size of an image means lowering its quality or resolution. RAISR works by taking a low-resolution image and upsampling it: increasing its resolution and filling in detail with filters. Anyone who's ever tried to do this manually knows that the end result looks a little blurred. RAISR avoids that thanks to machine learning.
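
To make the baseline concrete, here is a minimal sketch of plain interpolation-based upsampling, the approach RAISR improves on. This is illustrative only, not Google's code; scipy, the 2x factor, and the random stand-in image are all assumptions:

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    high_res = rng.random((64, 64))      # stand-in for a real grayscale image
    low_res = high_res[::2, ::2]         # crude 2x downsample

    # order=3 selects cubic spline interpolation; this is the filtering step
    # that tends to produce the slightly blurred result described above.
    upsampled = ndimage.zoom(low_res, 2, order=3)
    print(upsampled.shape)               # (64, 64)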

[...] RAISR has been trained on pairs of low- and high-quality versions of the same images. Machine learning lets the system figure out the filters that best recreate the high-quality image from the low-quality version alone. What you end up with after lots of training is a system that can do the same high-quality upsampling on most images without needing the high-quality version for reference.
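
The training idea can be sketched in a few lines. The real RAISR pipeline learns a bank of small filters bucketed by local gradient statistics (angle, strength, coherence); the simplified stand-in below fits a single global 5x5 filter by least squares, and the patch size, the synthetic images, and the training_pair helper are all assumptions for illustration:

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(1)
    PATCH = 5                            # filter size: an assumption, not RAISR's

    def training_pair(size=64):
        """Return (cheaply upsampled low-res, high-res) synthetic image pair."""
        hi = ndimage.gaussian_filter(rng.random((size, size)), 1.0)
        lo_up = ndimage.zoom(hi[::2, ::2], 2, order=3)
        return lo_up, hi

    # Collect (blurry patch, sharp center pixel) examples from several pairs.
    A, b = [], []
    for _ in range(8):
        lo_up, hi = training_pair()
        r = PATCH // 2
        for i in range(r, lo_up.shape[0] - r):
            for j in range(r, lo_up.shape[1] - r):
                A.append(lo_up[i - r:i + r + 1, j - r:j + r + 1].ravel())
                b.append(hi[i, j])

    # Least squares finds the filter that best maps blurry patches to sharp pixels.
    h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)

    # At inference time the filter sharpens a new upsampled image without ever
    # seeing its high-resolution original.
    lo_up_test, _ = training_pair()
    restored = ndimage.correlate(lo_up_test, h.reshape(PATCH, PATCH))

Once the filters are learned, only the low-resolution data needs to be stored or transmitted, which is where the claimed savings come from.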

-- submitted from IRC


Original Submission

 
  • (Score: 2) by FakeBeldin (3360) on Friday January 20 2017, @05:30PM (#456616) Journal

    The fine summary remarks that they're using machine learning.
    You can think of that as moving details from the image files to the "compression" algorithm.
    An extreme case of this is the tongue-in-cheek LenPEG [dangermouse.net] image compression algorithm.

    At any rate: low-res images for which the high-res version was learned by the algorithm can have details restored, as those details are embedded in the algorithm. The obvious drawback is that images that are sufficiently different from images on which the algorithm was trained will not be correctly transformed, as those details are missing in both the low-res version and the algorithm.

    So presumably the algorithm tosses out as much detail as can still be reconstructed later - a lot for known images, almost nothing for sufficiently "new" images. Given Google's huge database of images, I wouldn't be surprised if this worked well on 80% of the images.
    And that would be worth the trade-off, I think.
