
posted by janrinok on Thursday January 19 2017, @08:14PM   Printer-friendly
from the reduce-the-size-of-your-pron-storage dept.

With unlimited data plans becoming increasingly expensive, or subscribers being forced to ditch their unlimited data due to overuse, anything that can reduce the amount of data we download is welcome. This is especially true for media including images or video, and Google just delivered a major gain when it comes to viewing images online.

The clever scientists at Google Research have come up with a new technique for keeping image size to an absolute minimum without sacrificing quality. So good is this new technique that it promises to reduce the size of an image on disk by as much as 75 percent.

The new technique is called RAISR, which stands for "Rapid and Accurate Image Super-Resolution." Typically, reducing the size of an image means lowering its quality or resolution. RAISR works by taking a low-resolution image and upsampling it, which basically means enhancing the detail using filtering. Anyone who's ever tried to do this manually knows that the end result looks a little blurred. RAISR avoids that thanks to machine learning.

[...] RAISR has been trained using low and high quality versions of images. Machine learning allows the system to figure out the best filters to recreate the high quality image using only the low quality version. What you end up with after lots of training is a system that can do the same high quality upsampling on most images without needing the high quality version for reference.
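A toy sketch of that idea in Python (the filter bank, the gradient-bucketing scheme, and all names here are illustrative guesses, not Google's actual RAISR code): upsample cheaply first, then run each pixel through a filter picked from a learned bank according to the local gradient direction.

```python
import numpy as np

def cheap_upsample(img, factor=2):
    """Nearest-neighbour upsample: the fast, blurry starting point."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def raisr_like_upscale(img, filter_bank, factor=2):
    """Toy RAISR-style pass: upsample cheaply, then sharpen each pixel with a
    3x3 filter chosen from a bank by local gradient angle. In the real system
    the bank is learned from low/high-resolution image pairs."""
    up = cheap_upsample(img, factor)
    out = np.empty(up.shape, dtype=float)
    padded = np.pad(up, 1, mode='edge')
    gy, gx = np.gradient(up)
    n = len(filter_bank)
    for i in range(up.shape[0]):
        for j in range(up.shape[1]):
            # bucket the local gradient direction to select a filter
            angle = np.arctan2(gy[i, j], gx[i, j])
            bucket = int((angle + np.pi) / (2 * np.pi) * n) % n
            patch = padded[i:i + 3, j:j + 3]
            out[i, j] = np.sum(patch * filter_bank[bucket])
    return out
```

With identity filters in the bank this degenerates to plain nearest-neighbour upsampling; the training is what makes the per-bucket filters sharpen instead.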

-- submitted from IRC


Original Submission

 
  • (Score: 2) by bob_super on Thursday January 19 2017, @08:45PM

    by bob_super (1357) on Thursday January 19 2017, @08:45PM (#456214)

    Yup, they might need to brush up on it.
    Smartly upscaling a low-res image can create a nice sharp image. But it will fill in the blanks with information that is different from the original.
    I guess in the era of photoshop-everything and look-at-my-breakfast-plate, it's not a big deal, until the wrong black guy gets sent to the Chair.

  • (Score: 0) by Anonymous Coward on Thursday January 19 2017, @08:57PM

    by Anonymous Coward on Thursday January 19 2017, @08:57PM (#456221)

    I think you just sold the product; it's a convenient scapegoat to deny wrongdoing. The machine said so, your honor.

  • (Score: 1, Interesting) by Anonymous Coward on Thursday January 19 2017, @10:18PM

    by Anonymous Coward on Thursday January 19 2017, @10:18PM (#456261)

    Here's the thing - not every bit carries an identical amount of information.
    What they've done is "put" the information into the upscaling algorithms - or more precisely, into the choice of algorithms and the parameters to those algorithms.

    There is still some loss of information. Just not as much as there would be in a naive application of upscaling.

    Keep in mind that the target application is not archival; it's just for display to humans doing non-critical viewing on their phones.
    So loss of things like shadow detail and small color gradients isn't considered too important, because it isn't readily apparent to the human eye.

    • (Score: 2) by bob_super on Thursday January 19 2017, @10:36PM

      by bob_super (1357) on Thursday January 19 2017, @10:36PM (#456270)

      Would you mind reading TFS's title again?
      "Without losing detail" is most definitely incorrect.

      • (Score: 0) by Anonymous Coward on Friday January 20 2017, @12:03AM

        by Anonymous Coward on Friday January 20 2017, @12:03AM (#456304)

        I'm sorry. I thought we were talking about the actual system Google developed, not whatever some dumbshit wrote when they submitted the summary to Soylent.

        Your criticism is so much more informative.
        Carry on!

    • (Score: 0) by Anonymous Coward on Thursday January 19 2017, @11:15PM

      by Anonymous Coward on Thursday January 19 2017, @11:15PM (#456284)

      They're not losing any information; they're adding information, by trying to be clever about figuring out what detail is supposed to be there. The point stands: you can't add detail that isn't there to begin with. Their deep learning algorithms have been trained on lots of real-life pictures, so the system makes its best guess about which filtering algorithms to use to put information into the image. But keep in mind that the information, the detail, they are putting in wasn't in the picture to begin with.

      For instance, you can look at a low-resolution image of a circle, and if you recognize it to be a circle, you can redraw it in very sharp and fine detail. But suppose what was in that low-resolution image was really not a circle but a very round ellipse, and you can't tell because of the resolution: you might redraw it as a very fine and sharp circle, yet you've added that information yourself.

  • (Score: 2) by JoeMerchant on Friday January 20 2017, @03:22AM

    by JoeMerchant (3937) on Friday January 20 2017, @03:22AM (#456358)

    There's Shannon's information theory where bits are bits and you can only push so many bits per second through a channel with so much bandwidth.

    Then there's the Fraunhofer style of information theory where some bits are more important than others, so preserve the ones that are perceived by people and discard those that won't be missed.

    All this crap about "upscaling the image" is an oversimplification to make a tech article that people think they understand in a 30-second skimming. There's something like upscaling going on in there, but if that's all that's going on, this wouldn't have been news 20 years ago... cubic spline interpolation has been around for a while, as have .jpg and many other forms of lossy image compression.
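    For reference, the kind of "smart" interpolation that has been around for decades is easy to write down; here is a Catmull-Rom cubic spline upsampler in a few lines of Python (purely illustrative, nothing to do with RAISR itself):

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom cubic: interpolates between p1 and p2 using 4 neighbours."""
    return 0.5 * (2 * p1
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

def cubic_upsample_1d(samples, factor=2):
    """Upsample a 1-D signal by evaluating a Catmull-Rom spline
    between each pair of original samples (edges are padded)."""
    s = np.pad(samples.astype(float), 1, mode='edge')
    out = []
    for i in range(1, len(s) - 2):
        for k in range(factor):
            out.append(catmull_rom(s[i - 1], s[i], s[i + 1], s[i + 2], k / factor))
    return np.array(out)
```

    It reproduces the original samples exactly at t = 0 and fills in smooth values between them; no learning involved, which is exactly why it blurs rather than "hallucinates" detail.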

    You're right, though - a poorly lit, out of focus, shaky 16 megapixel image of breakfast is certainly overkill. It would be amusing if the algorithm included some AI that determined the "value" of the image and adjusted the compression levels accordingly.

    --
    🌻🌻 [google.com]
    • (Score: 1) by GDX on Saturday January 21 2017, @12:12AM

      by GDX (1950) on Saturday January 21 2017, @12:12AM (#456787)

      For me this algorithm is more similar to "spectral band replication" and "band folding" than to typical interpolation: you recreate deleted/lost information from the information you still have plus some cues. That is a step beyond the Fraunhofer style of information theory.

      For example, in HE-AAC, which uses SBR, the audio is resampled from 48kHz to 24kHz (this kills the audio signal from 12kHz to 24kHz) and compressed using AAC-LC, and then the SBR data is added. During decompression the audio is resampled from 24kHz back to 48kHz and the SBR data is used to fake the missing audio signal in the 12-24kHz range.
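      A very rough sketch of that band-replication trick in Python (a toy, not the actual SBR algorithm; real SBR works on QMF subbands and transmits a proper per-band envelope, whereas this uses a single scale factor):

```python
import numpy as np

def sbr_like_reconstruct(low_band, envelope):
    """Toy spectral band replication: fake a missing high band by copying the
    low band's spectrum into the high half of a double-rate spectrum, scaled
    by a coarse transmitted 'envelope', then inverse-transforming."""
    spec = np.fft.rfft(low_band)
    # 'replicate' the low-band spectrum into the high band
    high_spec = spec * envelope
    # full-rate spectrum: low half from the signal, high half replicated
    full = np.concatenate([spec, high_spec[:len(spec) - 1]])
    # factor 2 compensates for the doubled length in irfft's normalization
    return np.fft.irfft(full) * 2
```

      With the envelope set to zero this degenerates to plain band-limited 2x upsampling; the nonzero envelope is what paints energy back into the 12-24kHz region.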

      • (Score: 2) by JoeMerchant on Saturday January 21 2017, @12:21AM

        by JoeMerchant (3937) on Saturday January 21 2017, @12:21AM (#456789)

        Fraunhofer style compression has been "out there" for what? Like 20 years in widespread usage? It's about time to take another step forward.

        --
        🌻🌻 [google.com]
  • (Score: 2) by gidds on Friday January 20 2017, @01:26PM

    by gidds (589) on Friday January 20 2017, @01:26PM (#456513)

    Exactly.

    And the real shame is that this sort of technology could be used in a real compression algorithm.

    AIUI, many compression algorithms are based around a predictor: code that can make the best guess possible as to what the next byte/word/unit will be, based on the ones it's had already.  Then, you encode the 'residual', the difference between the prediction and the actual value.  The better the predictor, the smaller the residuals — and the better they can be compressed using existing techniques.  (You can also apply lossy techniques to them, of course.)
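    The predictor/residual loop described above is only a few lines (a generic sketch, not any particular codec):

```python
def last_value_predictor(history):
    """Simplest possible predictor: guess the previous sample (delta coding)."""
    return history[-1] if history else 0

def encode_residuals(samples, predict):
    """Predictive coding: store only the difference between each sample
    and the predictor's guess from the samples seen so far."""
    residuals, history = [], []
    for s in samples:
        residuals.append(s - predict(history))
        history.append(s)
    return residuals

def decode_residuals(residuals, predict):
    """Inverse: re-run the same predictor and add each residual back."""
    out = []
    for r in residuals:
        out.append(predict(out) + r)
    return out
```

    On slowly varying data the residuals cluster near zero, which is what an entropy coder then squeezes; a better predictor (AI or otherwise) just makes those residuals smaller.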

    So if this sort of AI makes better guesses about the detail of the image, then it can be used to improve image compression without making up detail out of whole cloth just because it's the sort of thing that other images have.

    (Of course, if an ignorant amateur like me can come up with this idea, then I'm sure the experts have.  Though none of the reports I've read about this story suggest so.)

    --
    [sig redacted]