
posted by on Sunday May 21 2017, @03:11PM
from the the-default-SN-color-is-pretty-close-to-Grass-Bat dept.

So if you've ever picked out paint, you know that every infinitesimally different shade of blue, beige, and gray has its own descriptive, attractive name. Tuscan sunrise, blushing pear, Tradewind, etc. There are in fact people who invent these names for a living. But given that the human eye can see millions of distinct colors, sooner or later we're going to run out of good names. Can AI help?

For this experiment, I gave the neural network a list of about 7,700 Sherwin-Williams paint colors along with their RGB values. (RGB = red, green, and blue color values) Could the neural network learn to invent new paint colors and give them attractive names?

The answer, not surprisingly, is no. But some of them are hilarious. My own personal favorites are Gray Pubic (close to aqua blue), Clardic Fug (brownish green), and Stanky Bean (inexplicably a rather nice dusty rose).

http://lewisandquark.tumblr.com/post/160776374467/new-paint-colors-invented-by-neural-network
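The post doesn't include code, but the setup it describes is a standard character-level language model trained on "RGB value plus name" strings. A minimal sketch, assuming PyTorch and a made-up three-line corpus standing in for the real Sherwin-Williams list; this is an illustration of the general technique, not the author's actual code:

```python
# Character-level model for inventing paint-colour names: train on lines of
# "hex colour + name", then prime with a colour and sample a name.
# The corpus entries below are invented placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

corpus = ["#6B8E4E Sage Hollow", "#D9C7B8 Blushing Pear", "#4F6D8C Tradewind"]

chars = sorted(set("".join(corpus))) + ["\n"]   # "\n" terminates a name
stoi = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)
    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)

for epoch in range(200):                        # tiny demo training loop
    for line in corpus:
        ids = torch.tensor([[stoi[c] for c in line + "\n"]])
        logits, _ = model(ids[:, :-1])          # predict the next character
        loss = F.cross_entropy(logits.squeeze(0), ids[0, 1:])
        opt.zero_grad(); loss.backward(); opt.step()

# Sampling: prime with a hex colour and let the model dream up a name.
prime = "#6B8E4E "
ids = torch.tensor([[stoi[c] for c in prime]])
out, state = model(ids)
name = prime
for _ in range(30):
    probs = F.softmax(out[0, -1], dim=-1)
    nxt = torch.multinomial(probs, 1).item()
    if chars[nxt] == "\n":
        break
    name += chars[nxt]
    out, state = model(torch.tensor([[nxt]]), state)
print(name)
```

With only three training lines it will mostly memorise; the charm of the original experiment comes from training on thousands of names and sampling at a temperature where the output is word-like but wrong.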

-- submitted from IRC


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Insightful) by looorg (578) on Sunday May 21 2017, @04:55PM (#513065) (7 children)

    If we can only see (or differentiate between) millions of colours, shouldn't it be enough to just give each one an RGB number? That covers 256^3 = 16,777,216 colours. Not that I would be able to tell #FD98A1 from #FD98A2 just by looking at them; the only way I could tell them apart would be by seeing the numbers.
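A quick Python sketch of that arithmetic and the hex-to-RGB mapping (the helper name is just for illustration):

```python
# 8 bits per channel gives 256^3 addressable colours, and any colour
# can be named by its hex triplet.
def hex_to_rgb(code: str) -> tuple[int, int, int]:
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

print(256 ** 3)                 # 16777216
print(hex_to_rgb("#FD98A1"))    # (253, 152, 161)
print(hex_to_rgb("#FD98A2"))    # (253, 152, 162), one step in blue
```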

  • (Score: 0) by Anonymous Coward on Sunday May 21 2017, @05:18PM (#513070)

    Foxtrot-Delta-Niner-Eight-Alpha-One is a pretty color.

  • (Score: 2) by Nuke (3162) on Sunday May 21 2017, @07:31PM (#513122)

    In fact, the "design" and fashion world uses the Pantone system [pantone.com]. I've heard it said that most RGB combinations look like different grades of mud; never checked them all out myself.

  • (Score: 5, Interesting) by rleigh (4887) on Sunday May 21 2017, @08:12PM (#513137) (4 children)

    No, RGB isn't anywhere near acceptable. You need to use a spectrophotometer to obtain the full excitation and emission spectra, so you can see exactly what proportion of each wavelength is absorbed or emitted across the full visible spectrum, extending into the UV and IR regions. Why outside the visible range? Because UV absorption can result in emission in the visible range, usually blue (see: whiteners in washing powder).

    You might be thinking this is way over the top. For a computer display it would be, because the display is limited to emission, plus some reflection of the ambient lighting, and each colour component has a defined emission spectrum with little crossover.

    But take fabric dyes or printing inks: these are chosen for their properties under specific sets of lighting conditions. The perceived colour changes dramatically when viewed under direct sunlight, mercury arc, LED, fluorescent lighting (with different gases) and incandescent lighting (with different filaments and gases, from plain tungsten to halogen). In all these cases, the emission spectra of the light sources and the absorption and emission spectra of the colour in the dye or ink define the behaviour of the colour.

    When I did a bit of work experience in the laboratory of a dye works, they had light boxes with many different light sources inside, which could be rapidly switched to allow the differences to be directly compared. The lighting inside a clothing shop is deliberately chosen to show off the colours in a certain way, and the dyes are specifically chosen to match the illumination in use; the same garment may look quite different outside in sunlight. Customers would use Pantone or some other specific colour chart to specify the colour, since the chart can be illuminated under the same conditions as the dyed product to verify that the match is exactly as intended.

    When I worked as a lab tech in a brewery, the colour of each product was clearly defined, and every batch was measured with a spectrophotometer to check its absorption at specific wavelengths. Again, not RGB!
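As an illustration of that kind of check: absorbance follows the Beer-Lambert law, and beer-colour scales such as EBC are conventionally derived from absorbance at 430 nm. A small Python sketch; the 25 x A430 formula is the common EBC convention, and whether this particular brewery used it is an assumption:

```python
# Absorbance at a fixed wavelength, converted to a beer-colour scale.
# EBC colour is commonly 25 x A430 (430 nm, 1 cm path, times dilution).
import math

def absorbance(intensity_in: float, intensity_out: float) -> float:
    """Beer-Lambert absorbance: A = -log10(I / I0)."""
    return -math.log10(intensity_out / intensity_in)

a430 = absorbance(100.0, 35.0)   # e.g. 35% of 430 nm light transmitted
ebc = 25 * a430                  # ~11.4 EBC, somewhere in the gold/amber range
print(round(a430, 3), round(ebc, 1))
```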

    Also, RGB as a "scale" is poorly defined. Is the scale linear in terms of the emitted light intensity (photon flux) at defined R, G and B wavelengths? If so, what's the bandpass and falloff for each wavelength? Or is it linear on a perceptual scale (i.e. gamma corrected); if so, what's the gamma value for each channel? What about intensity at the endpoints of the scale? Since gamma is nonlinear you can't adjust the endpoints without distorting the scale. Or is a more complex colour profile involved? This all needs to be clearly defined, and when you get down to the details it turns out not to be simple at all.

    > Not that I would be able to tell #FD98A1 from #FD98A2

    You absolutely could if you tried. 8-bit colour images are riddled with artefacts: there are only 256 levels for each of the R, G and B components, and that step size is visible. View or print out an 8-bit greyscale ramp and you'll be able to distinguish every value change (how far that extends at the high and low ends depends on the monitor or printer contrast). Open a paint program, fill the image with the first colour, then draw a big square or circle in the second colour: your eyes will pick out the boundary between A1 and A2 with little, if any, effort. You need to go to higher bit depths to fix this, which is why in scientific imaging we routinely use 12 or 16 bits per channel, and sometimes 32-bit integer or single- or double-precision floating point.
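A minimal sketch of that paint-program test, assuming Pillow is available; the two colours are the ones from the parent comment:

```python
# Fill an image with #FD98A1, draw a square of #FD98A2 on top, and look
# for the boundary between the two.
from PIL import Image, ImageDraw

img = Image.new("RGB", (400, 400), (0xFD, 0x98, 0xA1))
draw = ImageDraw.Draw(img)
draw.rectangle([100, 100, 300, 300], fill=(0xFD, 0x98, 0xA2))
img.save("one_step_apart.png")   # the edge is visible on a decent display
```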

    • (Score: 0) by Anonymous Coward on Monday May 22 2017, @03:05PM (#513524) (2 children)

      RGB is a radical scale.

      That is, the emitted light of each colour channel is proportional to the square of its RGB value.

      • (Score: 2) by rleigh (4887) on Monday May 22 2017, @10:12PM (#513807) (1 child)

        This is not correct, though it's a crude approximation for some common cases. "RGB" does not in and of itself have a defined scale, it's simply a triplet of intensity values; the scale needs specifying in the metadata accompanying the pixel data and/or in the specifications for the display device input in order to correctly reproduce the intended effect.

        https://en.wikipedia.org/wiki/Gamma_correction [wikipedia.org]
        https://en.wikipedia.org/wiki/SRGB [wikipedia.org]

        Typical scale possibilities are (a) linear, (b) gamma-encoded, or (c) sRGB or Adobe RGB encoded, which are specialised forms of gamma encoding for specific defined colour-spaces. Gamma encoding is a power function; with typical constants it's around 2.2 in the normalised range (0, 1), but not really a square.
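For concreteness, here is the sRGB transfer curve as a small Python sketch, with the standard constants from the sRGB specification; note the linear toe, which is why it isn't a plain power law:

```python
# sRGB <-> linear conversion, values normalised to (0, 1).
def srgb_to_linear(s: float) -> float:
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l: float) -> float:
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

# Mid-grey in sRGB (0.5) is only ~21% linear intensity:
print(round(srgb_to_linear(0.5), 4))    # 0.214
print(round(linear_to_srgb(0.214), 3))  # ~0.5
```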

        When it comes to specifications, none of this was really designed well. If you used a computer in the '80s or early '90s, the RGB intensities were simply voltages sent to the monitor; the display would have some intrinsic gamma and this would be what your pixel data encoded. This pixel data was implicitly set for the gamma curve of that specific display device; transfer it to some other system, and all bets were off. Until we got defined colour-spaces like sRGB, most RGB image data was not in a defined colour-space, and because e.g. PCs and Macs used different gamma constants, you couldn't correctly display an image created on one on the other.

        I work with scientific and medical images, and most of these use a linear colour-space. Most CCDs and other detectors report linear values, and most of this RGB data is 12- or 16-bit linear. One thing to bear in mind is that with 8-bit data we ended up using gamma encoding simply as a crude means of compression: by skewing the value distribution in the (0, 255) range we could increase the *perceptual* dynamic range at the expense of reducing the precision at one end of the scale and increasing it at the other. That makes sense when you only have 256 values to play with. But now that we have 16-bit images and 12-bit displays, there's no reason pixel data can't be stored and processed directly as linear data, and downsampled and gamma-encoded if required, on the fly, by the GPU before it gets sent to the display. Even that's not needed with displays that accept linear input.
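A small sketch of that trade-off, assuming NumPy: quantise dark linear values to 8 bits directly and via sRGB encoding, then compare the reconstruction error:

```python
# Gamma encoding as crude compression: in the shadows, routing through
# sRGB before 8-bit quantisation preserves more precision than
# quantising the linear values directly.
import numpy as np

def linear_to_srgb(l):
    return np.where(l <= 0.0031308, 12.92 * l, 1.055 * l ** (1 / 2.4) - 0.055)

def srgb_to_linear(s):
    return np.where(s <= 0.04045, s / 12.92, ((s + 0.055) / 1.055) ** 2.4)

shadows = np.linspace(0.0, 0.05, 1000)         # dark end of a linear ramp

direct = np.round(shadows * 255) / 255         # 8-bit linear quantisation
via_gamma = srgb_to_linear(np.round(linear_to_srgb(shadows) * 255) / 255)

print(np.max(np.abs(direct - shadows)))        # worst-case error, linear
print(np.max(np.abs(via_gamma - shadows)))     # smaller in the shadows
```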

        Other than history, there's little reason to use anything but linear scales today. The whole pipeline of acquisition/creation, processing and display can be done with linear scaling, with no need to gamma encode/decode at any step. It's much simpler, more accurate, and removes the transformation artefacts you'd get at low bit depths. While one criticism would be that using e.g. 12 bits over 8 is wasteful, I'll just say that this type of data compresses losslessly really well, and doesn't take up that much extra space.

        • (Score: 0) by Anonymous Coward on Monday May 22 2017, @10:36PM (#513818)

          > While one criticism would be that using e.g. 12 bits over 8 is wasteful, I'll just say that this type of data compresses losslessly really well, and doesn't take up that much extra space.

          That depends. If you have a picture of fairly smooth data but with 0.5% per-pixel sensor noise, an 8-bit representation will compress losslessly to about a quarter of the space of a 12-bit one.
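A rough way to test that claim, assuming NumPy and zlib; the exact sizes depend heavily on the data and the compressor, so treat the output as qualitative:

```python
# Per-pixel noise is incompressible, so extra bit depth that mostly
# encodes noise inflates the losslessly compressed size.
import zlib
import numpy as np

rng = np.random.default_rng(0)
smooth = np.linspace(0.2, 0.8, 1 << 16)
noisy = smooth * (1 + rng.normal(0, 0.005, smooth.size))  # 0.5% noise

as8 = np.clip(noisy * 255, 0, 255).astype(np.uint8)
as12 = np.clip(noisy * 4095, 0, 4095).astype(np.uint16)   # 12 bits in 16

print(len(zlib.compress(as8.tobytes(), 9)))    # 8-bit compressed size
print(len(zlib.compress(as12.tobytes(), 9)))   # 12-bit compressed size
```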

    • (Score: 2) by JoeMerchant (3937) on Monday May 22 2017, @04:41PM (#513574)

      First, yeah, what he said.

      Second, about perceiving 8-bit color differences: yes, the eye can do it, easily, but only if the monitor renders it. Lots of consumer-level LCD monitors only display 6 bits per channel, especially in the blue and red channels.
