
posted by on Sunday May 21 2017, @03:11PM
from the the-default-SN-color-is-pretty-close-to-Grass-Bat dept.

So if you've ever picked out paint, you know that every infinitesimally different shade of blue, beige, and gray has its own descriptive, attractive name. Tuscan sunrise, blushing pear, Tradewind, etc. There are, in fact, people who invent these names for a living. But given that the human eye can see millions of distinct colors, sooner or later we're going to run out of good names. Can AI help?

For this experiment, I gave the neural network a list of about 7,700 Sherwin-Williams paint colors along with their RGB values. (RGB = red, green, and blue color values) Could the neural network learn to invent new paint colors and give them attractive names?

The answer, not surprisingly, is no. But some of them are hilarious. My own personal favorites are Gray Pubic (close to aqua blue), Clardic Fug (brownish green), and Stanky Bean (inexplicably a rather nice dusty rose).

http://lewisandquark.tumblr.com/post/160776374467/new-paint-colors-invented-by-neural-network
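
The excerpt above doesn't say how the model was actually wired up, so purely as an illustrative sketch: a character-level model trained on "name,R,G,B" lines might look like the following. The two training lines, layer sizes, and sampling loop below are hypothetical stand-ins, not the real dataset or setup.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Hypothetical stand-ins for the ~7,700 "name,R,G,B" training lines.
    lines = ["Tradewind,96,156,160\n", "Blushing Pear,214,200,150\n"]
    chars = sorted(set("".join(lines)))
    stoi = {c: i for i, c in enumerate(chars)}
    itos = {i: c for c, i in stoi.items()}

    class CharLSTM(nn.Module):
        def __init__(self, vocab, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab, 32)
            self.lstm = nn.LSTM(32, hidden, batch_first=True)
            self.head = nn.Linear(hidden, vocab)

        def forward(self, x):
            h, _ = self.lstm(self.embed(x))
            return self.head(h)

    model = CharLSTM(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Train by next-character prediction: input is the line minus its last
    # character, target is the line shifted left by one.
    for line in lines * 200:
        ids = torch.tensor([[stoi[c] for c in line]])
        logits = model(ids[:, :-1])
        loss = loss_fn(logits.reshape(-1, len(chars)), ids[:, 1:].reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()

    # Sample a new "name,R,G,B" line one character at a time.
    seq = [stoi["T"]]
    with torch.no_grad():
        for _ in range(40):
            out = model(torch.tensor([seq]))[0, -1]
            nxt = torch.multinomial(F.softmax(out, dim=0), 1).item()
            if itos[nxt] == "\n":
                break
            seq.append(nxt)
    print("".join(itos[i] for i in seq))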

-- submitted from IRC


Original Submission

 
  • (Score: 2) by rleigh (4887) on Monday May 22 2017, @10:12PM (#513807) (1 child)

    This is not correct, though it's a crude approximation for some common cases. "RGB" does not in and of itself have a defined scale; it's simply a triplet of intensity values. The scale needs to be specified in the metadata accompanying the pixel data, and/or in the specifications for the display device's input, for the intended effect to be reproduced correctly.

    https://en.wikipedia.org/wiki/Gamma_correction
    https://en.wikipedia.org/wiki/SRGB

    Typical scale possibilities are (a) linear, (b) gamma-encoded, or (c) sRGB- or Adobe-RGB-encoded, the latter two being specialised forms of gamma encoding for specific defined colour-spaces. Gamma encoding is a power function; with typical constants the exponent is around 2.2 over the normalised range (0, 1), so it is close to, but not really, a square.
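
    To put numbers on that, a quick sketch (Python; the constants below are the published sRGB ones) comparing the piecewise sRGB transfer function with a pure 2.2 gamma:

        def srgb_encode(linear):
            # sRGB is a short linear toe followed by a 2.4-exponent power
            # segment; the combined curve is close to, but not exactly,
            # a 2.2 gamma.
            if linear <= 0.0031308:
                return 12.92 * linear
            return 1.055 * linear ** (1 / 2.4) - 0.055

        def gamma_encode(linear, gamma=2.2):
            return linear ** (1 / gamma)

        for v in (0.001, 0.18, 0.5, 0.9):
            print(v, srgb_encode(v), gamma_encode(v))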

    When it comes to specifications, none of this was really designed well. If you used a computer in the '80s or early '90s, the RGB intensities were simply voltages sent to the monitor; the display would have some intrinsic gamma and this would be what your pixel data encoded. This pixel data was implicitly set for the gamma curve of that specific display device; transfer it to some other system, and all bets were off. Until we got defined colour-spaces like sRGB, most RGB image data was not in a defined colour-space, and because e.g. PCs and Macs used different gamma constants, you couldn't correctly display an image created on one on the other.

    I work with scientific and medical images, and most of these use a linear colour-space: most CCDs and other detectors report linear values, and most RGB data is 12- or 16-bit linear. One thing to bear in mind is that with 8-bit data we ended up using gamma encoding simply as a crude means of compression: by skewing the value distribution in the (0, 255) range we could increase the *perceptual* dynamic range, at the expense of reducing the precision at one end of the scale and increasing it at the other. That makes sense when you only have 256 values to play with. But now that we have 16-bit images and 12-bit displays, there's no reason pixel data can't be stored and processed directly as linear data, then downsampled and gamma-encoded if required, on the fly, by your GPU before it gets sent to the display. Even that isn't needed with displays that accept linear input.
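
    A toy illustration of that trade-off, using a plain 2.2 gamma and two hypothetical dark shades:

        def quantise_linear(v):
            return round(v * 255)  # straight 8-bit quantisation

        def quantise_gamma(v, gamma=2.2):
            return round((v ** (1 / gamma)) * 255)  # gamma-encode first

        a, b = 0.0010, 0.0015  # two distinguishable dark linear values
        print(quantise_linear(a), quantise_linear(b))  # 0 and 0: detail lost
        print(quantise_gamma(a), quantise_gamma(b))    # 11 and 13: detail kept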

    Other than history, there's little reason to use anything but linear scales today. The whole pipeline of acquisition/creation, processing, and display can be done with linear scaling, with no need to gamma encode/decode at any step. It's much simpler, more accurate, and removes the transformation artefacts you'd get at low bit depths. While one criticism would be that using e.g. 12 bits instead of 8 is wasteful, I'll just say that this type of data compresses losslessly really well, and doesn't take up that much extra space.
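
    A concrete example of those artefacts: a naive 50% blend of black and white done on encoded values, versus done properly in linear light (a sketch reusing the sRGB constants above):

        def srgb_encode(linear):
            if linear <= 0.0031308:
                return 12.92 * linear
            return 1.055 * linear ** (1 / 2.4) - 0.055

        def srgb_decode(encoded):
            if encoded <= 0.04045:
                return encoded / 12.92
            return ((encoded + 0.055) / 1.055) ** 2.4

        black, white = 0.0, 1.0  # encoded sRGB values
        naive = (black + white) / 2  # averaging the encoded values: 0.5
        correct = srgb_encode((srgb_decode(black) + srgb_decode(white)) / 2)
        print(naive, correct)  # 0.5 vs ~0.735: the naive blend is too dark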

  • (Score: 0) by Anonymous Coward on Monday May 22 2017, @10:36PM (#513818)

    > While one criticism would be that using e.g. 12 bits over 8 is wasteful, I'll just say that this type of data compresses losslessly really well, and doesn't take up that much extra space.

    That depends. If you have a picture which represents fairly smooth data but with 0.5% per-pixel sensor noise, an 8-bit representation will compress losslessly to about a quarter of the space compared to 12 bits.
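
    One rough way to test that (a sketch with a synthetic smooth ramp standing in for a real image, and noise at 0.5% of full scale):

        import random
        import zlib

        random.seed(0)
        n = 1 << 16
        # Smooth ramp plus Gaussian per-sample noise, clamped to (0, 1).
        samples = [min(max(i / n + random.gauss(0, 0.005), 0.0), 1.0)
                   for i in range(n)]

        def compressed_size(bits, nbytes):
            q = [round(s * ((1 << bits) - 1)) for s in samples]
            raw = b"".join(v.to_bytes(nbytes, "little") for v in q)
            return len(zlib.compress(raw, 9))

        # At 12 bits the noise occupies several low bits and resists
        # compression; at 8 bits most of it falls below the quantisation step.
        print(compressed_size(8, 1), compressed_size(12, 2))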