

posted by on Sunday May 21 2017, @03:11PM
from the the-default-SN-color-is-pretty-close-to-Grass-Bat dept.

So if you've ever picked out paint, you know that every infinitesimally different shade of blue, beige, and gray has its own descriptive, attractive name: Tuscan sunrise, blushing pear, Tradewind, and so on. There are in fact people who invent these names for a living. But given that the human eye can see millions of distinct colors, sooner or later we're going to run out of good names. Can AI help?

For this experiment, I gave the neural network a list of about 7,700 Sherwin-Williams paint colors along with their RGB values (RGB = red, green, and blue color values). Could the neural network learn to invent new paint colors and give them attractive names?

The answer, not surprisingly, is no. But some of them are hilarious. My own personal favorites are Gray Pubic (close to aqua blue), Clardic Fug (brownish green), and Stanky Bean (inexplicably a rather nice dusty rose).

http://lewisandquark.tumblr.com/post/160776374467/new-paint-colors-invented-by-neural-network
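
For the curious, the linked post describes a character-level generative model trained on the name+RGB list. Below is a minimal sketch of that idea (assuming PyTorch; the three inline colours stand in for the real ~7,700-line dataset, and the GRU architecture and hyperparameters are illustrative, not the author's actual code):

    import torch
    import torch.nn as nn

    # Stand-in for the ~7,700 "name,R,G,B" training lines.
    data = "\n".join([
        "Tuscan Sunrise,244,164,96",
        "Blushing Pear,212,226,132",
        "Tradewind,92,158,173",
    ]) + "\n"

    chars = sorted(set(data))
    stoi = {c: i for i, c in enumerate(chars)}
    itos = {i: c for c, i in stoi.items()}
    xs = torch.tensor([stoi[c] for c in data])

    class CharRNN(nn.Module):
        def __init__(self, vocab, hidden=64):
            super().__init__()
            self.emb = nn.Embedding(vocab, hidden)
            self.rnn = nn.GRU(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab)

        def forward(self, x, h=None):
            z, h = self.rnn(self.emb(x), h)
            return self.out(z), h

    model = CharRNN(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for step in range(300):  # learn next-character prediction over the corpus
        logits, _ = model(xs[:-1][None])
        loss = nn.functional.cross_entropy(logits[0], xs[1:])
        opt.zero_grad(); loss.backward(); opt.step()

    # Sample a fresh "name,R,G,B" line one character at a time.
    h, x, out = None, xs[:1][None], []
    for _ in range(40):
        logits, h = model(x, h)
        x = torch.multinomial(logits[0, -1].softmax(-1), 1)[None]
        out.append(itos[x.item()])
    print("".join(out))

On the real list, sampling like this is what yields names such as "Stanky Bean"; on a three-line toy corpus it will mostly just memorise its input.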

-- submitted from IRC


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 3, Funny) by idiot_king on Sunday May 21 2017, @03:32PM (4 children)

    by idiot_king (6587) on Sunday May 21 2017, @03:32PM (#513039)

    Some of those names are pretty funny and unexpected.
    But it makes me wonder if it works the opposite way as well: it sees a word and "understands" its implication, reading between the lines, in a way.
    For example, "Rosy Red" implies a specific shade of red. Clearly it's a type of red, but "Rosy" might mean deeper, brighter, and so on. So if it could learn the two-way connotations between words and objects, perhaps it could be used as a powerful tool against things like hate speech and far-right rhetoric online, i.e., "Fake News." For example, if it could tell the difference between a hit piece on Der Trumpenfurher and a praise piece, then one could be filtered out and the other let through. That would be really exciting to me. Automating the fight against fascism would be a truly virtuous use of technology.

    • (Score: 3, Insightful) by Anonymous Coward on Sunday May 21 2017, @04:34PM (2 children)

      by Anonymous Coward on Sunday May 21 2017, @04:34PM (#513053)

      The names are bullshit, there purely for marketing purposes. If you really want to know what a color is, you need something like what computers use: a string of numbers indicating how the primaries are mixed, or possibly the wavelength of the light.

      Quite frankly, there are already too many of the names for them to be meaningful to anybody.

      • (Score: 1, Informative) by Anonymous Coward on Sunday May 21 2017, @04:43PM

        by Anonymous Coward on Sunday May 21 2017, @04:43PM (#513058)

        Coding a random word generator with a neural network is unnecessary overkill.
        Coding a random word generator and calling it an "AI" is trendy bullshit.

        This tweeting womyn blogger is all about the trendy marketing bullshit.

      • (Score: 2) by JoeMerchant on Monday May 22 2017, @04:38PM

        by JoeMerchant (3937) on Monday May 22 2017, @04:38PM (#513571)

        So, a color isn't so much a single wavelength as a spectrum of absorption and/or reflection intensities. There may be one "dominant" average wavelength, but green is definitely perceived differently from purple, even though the average wavelength of the blue and red peaks that make purple lies somewhere near monochromatic green. A flat, neutral-density grey will appear different to the eye than a color that reflects a half dozen equal peaks and troughs across the visual spectrum. The first might be called "neutral density grey," while the latter might be called "gives me a weird headache when the sun shines on it grey."
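
        A quick check of the arithmetic behind that purple example (the peak wavelengths are rough illustrative values):

            # Approximate spectral peaks in nm (illustrative values only).
            red, blue, green = 650, 450, 550
            print((red + blue) / 2)  # 550.0: the red+blue "purple" mix averages
                                     # out near green, yet looks nothing like it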

        --
        🌻🌻 [google.com]
    • (Score: 2) by Reziac on Monday May 22 2017, @05:54AM

      by Reziac (2489) on Monday May 22 2017, @05:54AM (#513336) Homepage

      If you manage to grow a new color of iris, the Iris Society lets you name it. I had a volunteer iris bloom in unfortunate shades of brown and yellow, quite possibly the ugliest flower ever. I named it "Blood and Vomit".

      --
      And there is no Alkibiades to come back and save us from ourselves.
  • (Score: 0, Informative) by Anonymous Coward on Sunday May 21 2017, @04:33PM

    by Anonymous Coward on Sunday May 21 2017, @04:33PM (#513052)

    Random Wavelength + Random Word = Grad Student Project

    Equation checks out.

    Student self-identifies as "she/her" gender.

    SJW checks out.

    Yep! This qualifies as News!!!!!!

  • (Score: 3, Insightful) by looorg on Sunday May 21 2017, @04:55PM (7 children)

    by looorg (578) on Sunday May 21 2017, @04:55PM (#513065)

    If we can only see (or differentiate between) millions of colours, shouldn't it be enough to just give them an RGB number? That would cover 256^3 colours. Not that I would be able to tell #FD98A1 from #FD98A2 by just looking at them; the only way I could tell them apart would be by seeing the numbers.
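
    For concreteness, a small sketch of that numbering scheme (the helper name is just for illustration):

        def hex_to_rgb(code):
            """Parse '#RRGGBB' into an (R, G, B) triple of 0-255 ints."""
            code = code.lstrip("#")
            return tuple(int(code[i:i + 2], 16) for i in range(0, 6, 2))

        print(256 ** 3)               # 16777216 addressable 24-bit colours
        print(hex_to_rgb("#FD98A1"))  # (253, 152, 161)
        print(hex_to_rgb("#FD98A2"))  # (253, 152, 162): one step apart in blue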

    • (Score: 0) by Anonymous Coward on Sunday May 21 2017, @05:18PM

      by Anonymous Coward on Sunday May 21 2017, @05:18PM (#513070)

      Foxtrot-Delta-Niner-Eight-Alpha-One is a pretty color.

    • (Score: 2) by Nuke on Sunday May 21 2017, @07:31PM

      by Nuke (3162) on Sunday May 21 2017, @07:31PM (#513122)

      In fact the "design" and fashion world uses the Pantone system [pantone.com]
      I heard it said that most RGB combinations look like different grades of mud. Never checked them all out myself.

    • (Score: 5, Interesting) by rleigh on Sunday May 21 2017, @08:12PM (4 children)

      by rleigh (4887) on Sunday May 21 2017, @08:12PM (#513137) Homepage

      No, RGB isn't anywhere near acceptable. You need to use a spectrophotometer to obtain the full excitation and emission spectra, so you can see exactly what proportion of each wavelength is absorbed or emitted across the full visible spectrum, extending into the UV and IR regions. Why outside the visible range? Because UV absorption can result in emission in the visible range, usually blue (see: whiteners in washing powder).

      You might be thinking this is way over the top. For a computer display it would be, because the display is limited to emission, plus some reflection of the ambient lighting, and each colour component has a defined emission spectrum with little crossover.

      But take fabric dyes or printing inks as examples: these are chosen for their properties under specific sets of lighting conditions. The perceived colour changes dramatically when viewed under direct sunlight, mercury arc, LED, fluorescent lighting (with different gases) and incandescent lighting (with different filaments and gases, from plain tungsten to halogen etc.). In all these cases, the emission spectra of the light sources and the absorption and emission spectra of the dye or ink define the behaviour of the colour.

      When I did a bit of work experience in the laboratory of a dye works, they had light boxes with many different light sources inside, which could be rapidly switched between to allow the differences to be directly contrasted. The lighting inside a clothing shop is deliberately chosen to show off the colours in a certain way, and the dyes are specifically chosen to match the illumination in use; the same garment may look quite different outside in sunlight. The customers would use Pantone or some other specific colour chart to specify the colour--the Pantone and other colour charts can be illuminated under the same conditions as the dyed product to verify the match is exactly as intended.

      When I worked as a lab tech in a brewery the colour of each product was clearly defined, and every batch was checked with a spectrophotometer to check its absorption at specific wavelengths. Again, not RGB!

      Also, RGB as a "scale" is poorly defined. Is the scale linear in terms of the emitted light intensity (photon flux) at defined R, G and B wavelengths? If so, what's the bandpass and falloff for each wavelength? Or is it linear on a perceptual scale (i.e. gamma corrected); if so, what's the gamma value for each channel? What about intensity at the endpoints of the scale? Since gamma is nonlinear you can't adjust the endpoints without distorting the scale. Or is a more complex colour profile involved? This all needs to be clearly defined, and when you get down to the details it turns out not to be simple at all.

      > Not that I would be able to tell #FD98A1 from #FD98A2

      You absolutely could if you tried. 8-bit colour images are riddled with artefacts: there are only 256 levels for each of the R, G and B components, and you can see that difference. View or print out an 8-bit greyscale ramp and you'll be able to distinguish every value change (how far this holds at the high and low ends depends upon the monitor or printer contrast). Open a paint program and fill the image with the first colour, then create a big square or circle with the second colour. Your eyes will be able to distinguish the boundary between A1 and A2 without much, if any, effort. You need to go to higher bit depths to fix this, which is why for scientific imaging we routinely go to 12 or 16 bits per channel, and sometimes even to 32-bit or single- or double-precision floating point.
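
      A quick way to run the boundary test just described (assuming Pillow is installed; the filename is arbitrary):

          from PIL import Image

          # Left half #FD98A1, right half #FD98A2, meeting at a vertical seam.
          img = Image.new("RGB", (400, 200), (0xFD, 0x98, 0xA1))
          img.paste((0xFD, 0x98, 0xA2), (200, 0, 400, 200))
          img.save("boundary_test.png")  # view at 100% zoom, look for the edge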

      • (Score: 0) by Anonymous Coward on Monday May 22 2017, @03:05PM (2 children)

        by Anonymous Coward on Monday May 22 2017, @03:05PM (#513524)

        RGB is a radical scale.

        That is, the emitted light of each channel is proportional to the square of its RGB value.

        • (Score: 2) by rleigh on Monday May 22 2017, @10:12PM (1 child)

          by rleigh (4887) on Monday May 22 2017, @10:12PM (#513807) Homepage

          This is not correct, though it's a crude approximation for some common cases. "RGB" does not in and of itself have a defined scale; it's simply a triplet of intensity values. The scale needs specifying in the metadata accompanying the pixel data and/or in the specifications for the display device input in order to correctly reproduce the intended effect.

          https://en.wikipedia.org/wiki/Gamma_correction [wikipedia.org]
          https://en.wikipedia.org/wiki/SRGB [wikipedia.org]

          Typical scale possibilities are (a) linear (b) gamma-encoded or (c) sRGB or Adobe-RGB encoded, which are specialised forms of gamma encoding for specific defined colour-spaces. Gamma encoding is a power function, and with typical constants it's around 2.2 in the normalised range (0,1), but not really a square.
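
          For reference, a small sketch of the sRGB transfer functions just described (assuming NumPy; the constants are the standard published ones). Note that decoding mid-grey gives about 21.4% linear light, not the 25% a pure square would predict:

              import numpy as np

              def srgb_encode(c):
                  """Linear light (0..1) -> sRGB-encoded value (0..1)."""
                  c = np.asarray(c, float)
                  return np.where(c <= 0.0031308, 12.92 * c,
                                  1.055 * np.power(c, 1 / 2.4) - 0.055)

              def srgb_decode(v):
                  """sRGB-encoded value (0..1) -> linear light (0..1)."""
                  v = np.asarray(v, float)
                  return np.where(v <= 0.04045, v / 12.92,
                                  np.power((v + 0.055) / 1.055, 2.4))

              print(srgb_decode(0.5))  # ~0.214, not 0.25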

          When it comes to specifications, none of this was really designed well. If you used a computer in the '80s or early '90s, the RGB intensities were simply voltages sent to the monitor; the display would have some intrinsic gamma and this would be what your pixel data encoded. This pixel data was implicitly set for the gamma curve of that specific display device; transfer it to some other system, and all bets were off. Until we got defined colour-spaces like sRGB, most RGB image data was not in a defined colour-space, and because e.g. PCs and Macs used different gamma constants, you couldn't correctly display an image created on one on the other.

          I work with scientific and medical images, and most of these use a linear colour-space. Most CCDs and other detectors report linear values, and most such RGB data is 12- or 16-bit linear. One thing to bear in mind is that with 8-bit data we ended up using gamma encoding simply as a crude means of compression: by skewing the value distribution in the (0,255) range we could increase the *perceptual* dynamic range at the expense of reducing the precision at one end of the scale and increasing it at the other. That makes sense when you only have 256 values to play with. But now that we have 16-bit images and 12-bit displays, there's no reason that pixel data can't be stored and processed directly as linear data, then downsampled and gamma-encoded if required, on the fly, by your GPU before it gets sent to the display. Even that's not needed with displays that handle linear input.

          Other than history, there's little reason to use anything but linear scales today. The pipeline of acquisition/creation, processing and display can all be done entirely with linear scaling, and no need to gamma encode/decode at any step. It's greatly simpler, more accurate, and removes the transformation artefacts you'd get with low bit depths. While one criticism would be that using e.g. 12 bits over 8 is wasteful, I'll just say that this type of data compresses losslessly really well, and doesn't take up that much extra space.

          • (Score: 0) by Anonymous Coward on Monday May 22 2017, @10:36PM

            by Anonymous Coward on Monday May 22 2017, @10:36PM (#513818)

            While one criticism would be that using e.g. 12 bits over 8 is wasteful, I'll just say that this type of data compresses losslessly really well, and doesn't take up that much extra space.

            That depends. If you have a picture representing fairly smooth data but with 0.5% per-pixel sensor noise, an 8-bit representation will compress losslessly to about a quarter of the space compared to 12 bits.
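
            That's easy to try with a crude sketch (NumPy and zlib assumed; the gradient and noise level are illustrative, and real formats add prediction filters that sharpen the effect, so treat the printed sizes as a rough indication only):

                import numpy as np, zlib

                rng = np.random.default_rng(0)
                ramp = np.linspace(0, 1, 512)
                img = np.clip(np.outer(ramp, ramp)                       # smooth data
                              + rng.normal(0, 0.005, (512, 512)), 0, 1)  # 0.5% noise

                for bits, dtype in ((8, np.uint8), (12, np.uint16)):
                    q = np.round(img * (2 ** bits - 1)).astype(dtype)
                    print(bits, "bit:", len(zlib.compress(q.tobytes(), 9)), "bytes")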

      • (Score: 2) by JoeMerchant on Monday May 22 2017, @04:41PM

        by JoeMerchant (3937) on Monday May 22 2017, @04:41PM (#513574)

        First, yeah, what he said.

        Second, about perceiving 8-bit color differences: yes, the eye can do it, easily, but only if the monitor renders it. Lots of consumer-level LCD monitors only display 6 bits of color, especially in the blue and red channels.

        --
        🌻🌻 [google.com]
  • (Score: 2) by JoeMerchant on Monday May 22 2017, @02:19PM

    by JoeMerchant (3937) on Monday May 22 2017, @02:19PM (#513494)

    The answer is: this particular attempt failed.

    A favorite quote of mine: "Try, fail. Try again, fail again. Fail better next time."

    The neural network didn't have sufficient cultural context to do the job given to it. And this, being a purely subjective topic, is nothing that can ever be 100% "correct." The best I would hope from a neural network (including one implemented in "wetware") is that it might spit out 5 possible names for a color, to be judged by a panel of 5 persons from varied demographics, and that one of those 5 names might be rated at least "acceptable" by all 5 judges and "good" by more than one.

    This would require the kind of backing data store that Big Blue used to win Jeopardy, and a very well refined filter.

    --
    🌻🌻 [google.com]