With unlimited data plans becoming increasingly expensive, and some subscribers forced to ditch unlimited data because of overuse, anything that reduces the amount of data we download is welcome. This is especially true for media such as images and video, and Google has just delivered a major gain when it comes to viewing images online.
The clever scientists at Google Research have come up with a new technique for keeping image size to an absolute minimum without sacrificing quality. So good is this new technique that it promises to reduce the amount of data needed to deliver an image by as much as 75 percent.
The new technique is called RAISR, which stands for "Rapid and Accurate Image Super-Resolution." Typically, reducing the size of an image means lowering its quality or resolution. RAISR works by taking a low-resolution image and upsampling it, which basically means enhancing the detail using filtering. Anyone who's ever tried to do this manually knows that the end result looks a little blurred. RAISR avoids that thanks to machine learning.
[...] RAISR has been trained using low and high quality versions of images. Machine learning allows the system to figure out the best filters to recreate the high quality image using only the low quality version. What you end up with after lots of training is a system that can do the same high quality upsampling on most images without needing the high quality version for reference.
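The filter-learning step described above can be sketched in a few lines. This is a toy illustration, not Google's implementation: RAISR actually hashes patches into buckets by gradient angle, strength, and coherence and learns one filter per bucket, while the sketch below fits a single linear filter by least squares from (low-quality patch, true pixel) training pairs. All names here are made up for the example.

```python
import numpy as np

def train_filter(low_patches, high_pixels):
    """Least-squares fit of a linear filter mapping each flattened
    low-quality patch to the true high-quality center pixel (the core
    of RAISR's per-bucket filter learning, minus the bucketing)."""
    A = low_patches.reshape(len(low_patches), -1)
    h, *_ = np.linalg.lstsq(A, high_pixels, rcond=None)
    return h

def apply_filter(patch, h):
    """Predict one high-quality pixel from a low-quality patch."""
    return patch.ravel() @ h

# Toy training data: the "true" relation is a fixed 3x3 kernel,
# which the least-squares fit should recover from examples alone.
rng = np.random.default_rng(0)
true_kernel = np.array([[0., .1, 0.], [.1, .6, .1], [0., .1, 0.]])
patches = rng.random((500, 3, 3))
targets = patches.reshape(500, -1) @ true_kernel.ravel()

h = train_filter(patches, targets)
test_patch = rng.random((3, 3))
pred = apply_filter(test_patch, h)
```

At inference time only the learned filters are needed, which is why the high-quality reference image can be thrown away after training.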
-- submitted from IRC
(Score: 2) by JoeMerchant on Friday January 20 2017, @03:22AM
There's Shannon's information theory where bits are bits and you can only push so many bits per second through a channel with so much bandwidth.
Then there's the Fraunhofer style of information theory where some bits are more important than others, so preserve the ones that are perceived by people and discard those that won't be missed.
All this crap about "upscaling the image" is oversimplification to make a tech article that people think they understand in a 30 second skimming. There's something like upscaling going on in there, but if that's all that's going on in there, this wouldn't have been news 20 years ago... cubic spline interpolation has been around for a while, as has .jpg and many other forms of lossy image compression.
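For reference, the cubic-spline baseline the comment alludes to really is a few lines of off-the-shelf code (here via scipy, on one row of samples rather than a full image):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# One row of "image" samples at integer pixel positions.
x = np.arange(8)
row = np.array([0., 1., 4., 9., 16., 25., 36., 49.])  # smooth data (x**2)

# 2x upsample by evaluating the fitted cubic spline at half-pixel positions.
spline = CubicSpline(x, row)
x_up = np.arange(0, 7.01, 0.5)
row_up = spline(x_up)
```

This is the kind of fixed, data-independent interpolation that has existed for decades; the news is replacing the fixed filter with one learned from examples.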
You're right, though - a poorly lit, out of focus, shaky 16 megapixel image of breakfast is certainly overkill. It would be amusing if the algorithm included some AI that determined the "value" of the image and adjusted the compression levels accordingly.
(Score: 1) by GDX on Saturday January 21 2017, @12:12AM
For me this algorithm is more similar to "spectral band replication" and "band folding" than to typical interpolation: you recreate deleted/lost information from the information that you have, plus some cues. That's a step beyond the Fraunhofer style of information theory.
For example, in HE-AAC, which uses SBR, the audio is resampled from 48kHz to 24kHz (this kills the audio signal from 12kHz to 24kHz) and compressed using AAC-LC, and then the SBR data is added. During decompression the audio is resampled from 24kHz back to 48kHz and the SBR data is used to fake the missing audio signal in the 12-24kHz range.
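The band-replication trick described above can be shown with a toy FFT-domain decoder. This is only a sketch of the idea: real HE-AAC SBR operates on QMF subbands with coded envelope and noise data, not a plain FFT, and the function name and flat envelope here are invented for illustration.

```python
import numpy as np

def sbr_reconstruct(low_rate_signal, envelope):
    """Toy spectral-band-replication decoder: 2x-upsample by extending
    the spectrum, filling the empty top half with an envelope-scaled
    copy of the transmitted low band."""
    lo = np.fft.rfft(low_rate_signal)   # transmitted low-band spectrum
    hi = lo[1:] * envelope              # replicated, envelope-shaped high band
    full = np.concatenate([lo, hi])     # spectrum at the doubled sample rate
    return 2 * np.fft.irfft(full)       # 2N samples; *2 fixes irfft scaling

# A tone at the low sample rate, "decoded" with a flat high-band envelope.
n = np.arange(64)
tone = np.cos(2 * np.pi * 5 * n / 64)
decoded = sbr_reconstruct(tone, np.full(32, 0.5))
```

With the envelope set to zero this reduces to plain band-limited upsampling; the envelope data is the cheap "cue" that shapes the faked high band.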
(Score: 2) by JoeMerchant on Saturday January 21 2017, @12:21AM
Fraunhofer style compression has been "out there" for what? Like 20 years in widespread usage? It's about time to take another step forward.