
posted by janrinok on Thursday January 19 2017, @08:14PM
from the reduce-the-size-of-your-pron-storage dept.

With unlimited data plans becoming increasingly expensive, and subscribers being forced to ditch their unlimited data due to overuse, anything that can reduce the amount of data we download is welcome. This is especially true for media such as images and video, and Google just delivered a major gain when it comes to viewing images online.

The clever scientists at Google Research have come up with a new technique for keeping image size to an absolute minimum without sacrificing quality. So good is this new technique that it promises to reduce the size of an image on disk by as much as 75 percent.

The new technique is called RAISR, which stands for "Rapid and Accurate Image Super-Resolution." Typically, reducing the size of an image means lowering its quality or resolution. RAISR works by taking a low-resolution image and upsampling it, which basically means enhancing the detail using filtering. Anyone who's ever tried to do this manually knows that the end result looks a little blurred. RAISR avoids that thanks to machine learning.

[...] RAISR has been trained using low and high quality versions of images. Machine learning allows the system to figure out the best filters to recreate the high quality image using only the low quality version. What you end up with after lots of training is a system that can do the same high quality upsampling on most images without needing the high quality version for reference.
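To make that concrete, here is a minimal sketch of the training step under the simplest possible assumptions: one global linear filter fit by least squares to pairs of low- and high-quality signals. The published RAISR approach reportedly learns many such filters, one per bucket of patches hashed by local gradient angle, strength, and coherence; the function names and toy 1-D data below are purely illustrative, not Google's code.

```python
import numpy as np

def train_filter(low_patches, high_pixels):
    """Least-squares fit of one linear filter that maps low-quality
    patches onto the true high-quality pixel at their centers."""
    w, *_ = np.linalg.lstsq(low_patches, high_pixels, rcond=None)
    return w

# Toy 1-D stand-in for an image: learn to undo a 5-tap box blur.
rng = np.random.default_rng(0)
truth = rng.random(1000)                               # "high quality"
blurred = np.convolve(truth, np.ones(5) / 5, "same")   # "low quality"

# One training row per pixel: the 5 blurred neighbors around it.
patches = np.stack([blurred[i - 2:i + 3] for i in range(2, 998)])
w = train_filter(patches, truth[2:998])

restored = patches @ w   # applying the filter: one dot product per pixel
print("blurred error: ", np.abs(blurred[2:998] - truth[2:998]).mean())
print("restored error:", np.abs(restored - truth[2:998]).mean())
```

The payoff comes at serving time: only the small image has to be shipped, and the learned filters recreate the detail on the viewing end.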

-- submitted from IRC


Original Submission

 
  • (Score: 2) by Grishnakh (2831) on Thursday January 19 2017, @09:23PM (#456239)

    Which sounds like an outright lie. You can't retrieve information that isn't there; upsampling just makes something look sharper by interpolating, but it doesn't actually add lost information back to the image. To make an extreme argument, I can replace a picture of a line with a file that just describes two points on that line, and let an interpolation algorithm fill in the rest. But that's not going to replace the little circle that's in the middle of that line, which has been lost due to the low resolution of the file describing two points.
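    A toy numeric version of that argument, with a 1-D "line" standing in for the image (illustrative only):

```python
import numpy as np

# A straight line with a small feature in the middle.
x = np.arange(101)
signal = x.astype(float)       # the line
signal[48:53] += 10.0          # the "little circle" on it

# Store only the two endpoints, then interpolate everything between.
reconstructed = np.interp(x, [0, 100], [signal[0], signal[100]])

# Interpolation returns a perfect line; the feature is simply gone.
print(np.abs(reconstructed - signal).max())   # 10.0, the lost detail
```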

  • (Score: 2) by JoeMerchant (3937) on Thursday January 19 2017, @10:26PM (#456265)

    Presumably, they are doing something "more" than upscaling, which will give (at least qualitatively) better reproduction of the original image than upscaling alone, and require more data as well.

    As far as I can see, it's just another evolution beyond JPEG et al., with more acceptable quality at the compression rates they're testing. Like MP3 et al. in the audio space, it focuses on the things people notice at the expense of reduced detail in the areas people don't notice.

    ------

    Nothing to see here, move along.

    --
    🌻🌻 [google.com]
  • (Score: 3, Insightful) by Immerman (3985) on Friday January 20 2017, @12:26AM (#456310)

    It's true that you can't restore information that's been removed. But it's also true that there's far more information in your average image than you will notice without exhaustive examination. If done right, those two truths may largely counteract each other.

    I wouldn't trust the detail in an upscaled image for anything important, but how often is there anything important in the details of an image on a web page? How often do you even pay any attention to the detail?

    Meanwhile, even simple bicubic upscaling can often reveal a great deal of information that was already present, but heavily obscured by the pixelated noise introduced by rendering pixels as colored blocks rather than as sample points.
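    That difference is easy to reproduce with Pillow: render the same thumbnail once as blocks (nearest neighbor) and once with bicubic resampling. The filename below is a placeholder.

```python
from PIL import Image  # Pillow

img = Image.open("thumbnail.png")   # placeholder: any small image
size = (img.width * 8, img.height * 8)

img.resize(size, Image.NEAREST).save("blocks.png")  # pixels as colored blocks
img.resize(size, Image.BICUBIC).save("smooth.png")  # pixels as sample points
```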

  • (Score: 2) by FakeBeldin (3360) on Friday January 20 2017, @05:30PM (#456616) Journal

    The fine summary remarks that they're using machine learning.
    You can think of that as moving details from the image files to the "compression" algorithm.
    An extreme case of this is the tongue-in-cheek LenPEG [dangermouse.net] image compression algorithm.

    At any rate: low-res images for which the high-res version was learned by the algorithm can have details restored, as those details are embedded in the algorithm. The obvious drawback is that images that are sufficiently different from images on which the algorithm was trained will not be correctly transformed, as those details are missing in both the low-res version and the algorithm.
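    As a caricature of "details moved into the algorithm", picture a decoder that simply memorizes its training pairs; a lookup table stands in here for the learned filters, and everything is illustrative:

```python
import numpy as np

codebook = {}   # the "algorithm" itself carries the detail

def train(low_patches, high_patches):
    for lo, hi in zip(low_patches, high_patches):
        codebook[lo.tobytes()] = hi

def decode(lo):
    # Seen patches come back in full detail; unseen ones fall back to
    # the low-res data alone, the failure mode for "new" images.
    return codebook.get(lo.tobytes(), lo)

train([np.array([1.0, 2.0])], [np.array([1.0, 1.5, 2.0])])
print(decode(np.array([1.0, 2.0])))   # [1.  1.5 2. ]  detail restored
print(decode(np.array([3.0, 4.0])))   # [3. 4.]        no detail to restore
```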

    So probably the algorithm tosses out as much detail as can still be reconstructed later - a lot for known images, almost nothing for sufficiently "new" images. Given Google's huge database of images, I wouldn't be surprised if this worked well on 80% of the images.
    And that would be worth the trade-off, I think.