Google Brain Imaging Technique Creates Detail Out of Tiny Pixelated Images

Accepted submission by takyon at 2017-02-08 19:57:10
Software

Google has developed a neural network algorithm that can create reasonable approximations of a 32×32 image [arstechnica.com] from a downsized 8×8 image:

Of course, as we all know, it's impossible to create more detail than there is in the source image, so how does Google Brain do it? With a clever combination of two neural networks. The first part, the conditioning network, tries to map the 8×8 source image against other high-resolution images. It downsizes those high-res images to 8×8 and tries to make a match.
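The downsizing step the conditioning network relies on can be illustrated with a minimal sketch: average pooling reduces a 32×32 image to the 8×8 grid being matched against. (The function name and the use of plain average pooling here are illustrative assumptions, not the paper's exact implementation.)

```python
import numpy as np

def downsample(img, factor=4):
    """Average-pool an HxWxC image down by `factor` (32x32 -> 8x8 when factor=4).

    Illustrative only: the actual downsampling used in the paper may differ.
    """
    h, w = img.shape[:2]
    return img.reshape(h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))

# A hypothetical 32x32 RGB image shrinks to the 8x8 grid the network conditions on.
hi_res = np.random.rand(32, 32, 3)
lo_res = downsample(hi_res)
print(lo_res.shape)  # (8, 8, 3)
```

Because each output pixel is the mean of a 4×4 block, the overall brightness of the image is preserved even though fine detail is discarded — which is exactly why matching at 8×8 is ambiguous and many high-res images can map to the same thumbnail.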

The second part, the prior network, uses an implementation of PixelCNN [github.com] to try to add realistic high-resolution details to the 8×8 source image. Basically, the prior network ingests a large number of high-res real images—of celebrities and bedrooms in this case. Then, when the source image is upscaled, it tries to add new pixels that match what it "knows" about that class of image. For example, if there's a brown pixel towards the top of the image, the prior network might identify that as an eyebrow: so, when the image is scaled up, it might fill in the gaps with an eyebrow-shaped collection of brown pixels. To create the final super-resolution image, the outputs from the two neural networks are combined. The end result usually contains the plausible addition of new details.
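The combining step can be sketched concretely. Per the paper, each network emits per-pixel logits over the 256 possible intensity values, and the two sets of logits are summed before a softmax produces the final pixel distribution. The shapes and random logits below are stand-in assumptions for illustration:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical per-pixel logits over 256 intensity values from each network:
# the conditioning network scores values consistent with the 8x8 input,
# the prior network scores values that look like real images of this class.
conditioning_logits = rng.normal(size=(32, 32, 256))
prior_logits = rng.normal(size=(32, 32, 256))

# Adding logits multiplies the two (unnormalized) distributions, so a pixel
# value must be plausible to BOTH networks to score highly.
pixel_probs = softmax(conditioning_logits + prior_logits)
prediction = pixel_probs.argmax(axis=-1)  # most likely intensity per pixel
print(prediction.shape)  # (32, 32)
```

The additive-logit fusion is what lets the prior "fill in" eyebrow-shaped detail without contradicting the coarse colors fixed by the 8×8 input.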

Pixel Recursive Super Resolution [arxiv.org]
