
posted by janrinok on Thursday January 19 2017, @08:14PM
from the reduce-the-size-of-your-pron-storage dept.

With unlimited data plans becoming increasingly expensive, or subscribers being forced to ditch their unlimited data due to overuse, anything that can reduce the amount of data we download is welcome. This is especially true for media including images or video, and Google just delivered a major gain when it comes to viewing images online.

The clever scientists at Google Research have come up with a new technique for keeping image size to an absolute minimum without sacrificing quality. So good is this new technique that it promises to reduce the size of an image on disk by as much as 75 percent.

The new technique is called RAISR, which stands for "Rapid and Accurate Image Super-Resolution." Typically, reducing the size of an image means lowering its quality or resolution. RAISR works by taking a low-resolution image and upsampling it, which basically means enhancing the detail using filtering. Anyone who's ever tried to do this manually knows that the end result looks a little blurred. RAISR avoids that thanks to machine learning.

[...] RAISR has been trained using low- and high-quality versions of the same images. Machine learning allows the system to figure out the best filters to recreate the high-quality image using only the low-quality version. What you end up with after lots of training is a system that can do the same high-quality upsampling on most images without needing a high-quality version for reference.
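
To make the idea concrete, here is a toy sketch of learned-filter upsampling in the spirit described above. It is emphatically not Google's RAISR code: the nearest-neighbour cheap upscaler, the 5x5 patch size, the gradient-angle bucketing, and the per-bucket least-squares fit are all assumptions made purely for illustration.

    import numpy as np

    PATCH = 5  # filter support (assumed)

    def cheap_upscale(img, factor=2):
        # crude baseline upscale; a real system would use bicubic or similar
        return np.kron(img, np.ones((factor, factor)))

    def bucket(patch):
        # crude edge-orientation bucket from local gradients
        gy, gx = np.gradient(patch)
        angle = np.arctan2(gy.sum(), gx.sum())
        return int(((angle + np.pi) / (2 * np.pi)) * 8) % 8

    def train(pairs):
        # pairs: (low_res, high_res) images, with high_res exactly 2x the size;
        # learn one least-squares filter per bucket that maps an upscaled patch
        # to the true high-resolution pixel at the patch centre
        A = {b: [] for b in range(8)}
        y = {b: [] for b in range(8)}
        for lo, hi in pairs:
            up = cheap_upscale(lo)
            for r in range(PATCH // 2, up.shape[0] - PATCH // 2):
                for c in range(PATCH // 2, up.shape[1] - PATCH // 2):
                    p = up[r - PATCH // 2:r + PATCH // 2 + 1,
                           c - PATCH // 2:c + PATCH // 2 + 1]
                    b = bucket(p)
                    A[b].append(p.ravel())
                    y[b].append(hi[r, c])
        return {b: np.linalg.lstsq(np.array(A[b]), np.array(y[b]), rcond=None)[0]
                for b in range(8) if A[b]}

    def upsample(lo, filters):
        # apply the learned filters to a new low-resolution image
        up = cheap_upscale(lo)
        out = up.copy()
        for r in range(PATCH // 2, up.shape[0] - PATCH // 2):
            for c in range(PATCH // 2, up.shape[1] - PATCH // 2):
                p = up[r - PATCH // 2:r + PATCH // 2 + 1,
                       c - PATCH // 2:c + PATCH // 2 + 1]
                f = filters.get(bucket(p))
                if f is not None:
                    out[r, c] = p.ravel() @ f
        return out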

-- submitted from IRC


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Friday January 20 2017, @05:36PM (#456619)

    I've been kicking around the idea of image compression using GAs (genetic algorithms) to evolve the most compact representation, with the lossy tolerance level set by the user. Polygons, ovals, maybe wave equations for repeating-but-varying polygon fill-ins (think Moiré patterns or fractals), and blur filters could be combined in different orders and positions. The GA would find the best ordering and combos.

    Here are examples where a single shape type is used:
    https://www.youtube.com/watch?v=25aXHBZFPgU
    https://www.youtube.com/watch?v=GCmMRUIGIwQ

    But I'm thinking of having more object and transformation types.

    It would be computationally intensive to generate the "render list" of polygons and transformations for a given image, but relatively quick to render it. The GA does all the hard work so that the renderer doesn't have to.
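
    Roughly, the evolve loop could look like the sketch below. Everything here is placeholder: render(), random_instruction(), crossover() and mutate() are passed in, and the error-plus-length fitness is just one obvious choice, not a worked-out design.

    import random
    import numpy as np

    def fitness(render_list, target, render, size_weight=0.05):
        # lower is better: pixel error against the original image, plus a penalty
        # proportional to render-list length (a stand-in for "compressed size")
        rendered = render(render_list, target.shape)
        return np.mean((rendered - target) ** 2) + size_weight * len(render_list)

    def evolve(target, render, random_instruction, crossover, mutate,
               generations=1000, pop_size=50):
        # each individual is a render list: a sequence of (instruction, params) entries
        population = [[random_instruction() for _ in range(10)] for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=lambda rl: fitness(rl, target, render))
            parents = population[:pop_size // 2]   # keep the better half
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return min(population, key=lambda rl: fitness(rl, target, render))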

    Create a kind of render machine language of generation primitives. Let's say we limit each object to 6 parameters (some may be ignored in some instructions). A render list would then resemble:

    Sequence, Instruction, Parameters...
    006 pencolor  39.9 34.1 39.1 00.0 00.0 00.0
    007 drawpoly  48.1 03.4 12.3 22.0 18.9 17.2
    008 blurlevel 34.0 23.0 05.0 07.0 00.0 00.0  // x and y "spread", & boundary cutoff
    009 blurpoly  28.1 03.4 12.3 27.0 18.9 13.2
    010 repoffset 20.1 33.0 89.2 93.0 44.9 99.2  // repeat prior*
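
    For what it's worth, interpreting such a fixed-format list might look roughly like the sketch below. The opcodes are the ones from the example above; treating parameters as percentages of the canvas, three-vertex polygons, and a grey-scale-only pen colour are assumptions, and blurpoly is handled as a plain fill with the actual blurring omitted.

    import numpy as np

    def render(render_list, shape):
        # walk a fixed-format render list: each entry is (opcode, [p1..p6])
        canvas = np.zeros(shape)
        state = {"pencolor": 1.0, "blur": 0.0}
        h, w = shape
        for opcode, p in render_list:
            if opcode == "pencolor":
                state["pencolor"] = p[0] / 100.0      # grey level only, for brevity
            elif opcode == "blurlevel":
                state["blur"] = p[0]                  # stored but unused in this sketch
            elif opcode in ("drawpoly", "blurpoly"):
                # three (x, y) vertices given as percentages of width/height
                verts = [(p[0] * w / 100, p[1] * h / 100),
                         (p[2] * w / 100, p[3] * h / 100),
                         (p[4] * w / 100, p[5] * h / 100)]
                fill_triangle(canvas, verts, state["pencolor"])
            # repoffset and any other opcodes are left out here
        return canvas

    def fill_triangle(canvas, v, value):
        # crude fill: sign test against each edge, evaluated over the whole canvas
        (x0, y0), (x1, y1), (x2, y2) = v
        ys, xs = np.mgrid[0:canvas.shape[0], 0:canvas.shape[1]]
        d1 = (xs - x1) * (y0 - y1) - (x0 - x1) * (ys - y1)
        d2 = (xs - x2) * (y1 - y2) - (x1 - x2) * (ys - y2)
        d3 = (xs - x0) * (y2 - y0) - (x2 - x0) * (ys - y0)
        inside = ((d1 >= 0) & (d2 >= 0) & (d3 >= 0)) | ((d1 <= 0) & (d2 <= 0) & (d3 <= 0))
        canvas[inside] = value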

    A consistent instruction format makes GA cross-breeding easier: we can cross-breed at both the instruction-list level and the parameter level, as sketched below. The actual primitives (instructions) used may require some experimentation to see what works best.
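
    Cross-breeding at both levels could be as simple as this: a one-point splice of the two instruction lists, plus occasional blending of parameters where aligned instructions happen to share an opcode. The 20% blend rate and the simple averaging are arbitrary choices for illustration.

    import random

    def crossover(parent_a, parent_b):
        # list level: splice the two render lists at random cut points
        cut_a = random.randint(0, len(parent_a))
        cut_b = random.randint(0, len(parent_b))
        child = parent_a[:cut_a] + parent_b[cut_b:]
        # parameter level: occasionally blend parameters of aligned, same-opcode entries
        for i in range(min(len(parent_a), len(parent_b), len(child))):
            if random.random() < 0.2 and parent_a[i][0] == parent_b[i][0]:
                opcode = parent_a[i][0]
                params = [(pa + pb) / 2 for pa, pb in zip(parent_a[i][1], parent_b[i][1])]
                child[i] = (opcode, params)
        return child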

    It would be cool to watch the renderer reconstruct the image in slow motion by applying the operations one at a time. (Not the evolution steps, which would take too long to watch; just the final render list.) Even if it turned out to be a poor compression technique, the slow-mo rendering itself could be an entertaining use.

    I suspect this kind of gizmo would work best on images with a degree of repetition in them, such as buildings, and worst on those with a lot of randomness, like a jungle. Maybe the GA could mix traditional compression with this new kind, evolving the best combo for different images or portions of an image.

    * There are different ways to arrange the repeat instruction's parameters. Perhaps the first two params could be the x and y offset, the next two the delta applied to that offset per "loop", and the last two the repeat counts for x and y respectively. The repeater would apply to the prior instruction, be it a drawing or a blurring instruction.
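
    One possible expansion of that layout, reading the parameters as [x_offset, y_offset, x_delta, y_delta, x_count, y_count] and assuming the repeated instruction's parameters alternate x, y (true for the drawing and blurring instructions above), might be:

    def expand_repeats(render_list):
        # rewrite each ("repoffset", params) entry into explicit shifted copies
        # of the instruction that precedes it
        out = []
        for opcode, p in render_list:
            if opcode != "repoffset" or not out:
                out.append((opcode, p))
                continue
            prev_op, prev_p = out[-1]
            dx, dy, ddx, ddy, nx, ny = p
            for i in range(1, int(nx) + 1):
                for j in range(1, int(ny) + 1):
                    # the offset grows by its delta on each repeat
                    ox = dx * i + ddx * (i - 1)
                    oy = dy * j + ddy * (j - 1)
                    shifted = [v + (ox if k % 2 == 0 else oy)
                               for k, v in enumerate(prev_p)]
                    out.append((prev_op, shifted))
        return out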