
posted by Fnord666 on Sunday August 06 2017, @01:48PM
from the fool-me-once dept.

Submitted via IRC for Bytram

It's very difficult, if not impossible, for us humans to understand how robots see the world. Their cameras work like our eyes do, but the space between the image that a camera captures and actionable information about that image is filled with a black box of machine learning algorithms that are trying to translate patterns of features into something that they're familiar with. Training these algorithms usually involves showing them a set of different pictures of something (like a stop sign), and then seeing if they can extract enough common features from those pictures to reliably identify stop signs that aren't in their training set.
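
As a rough illustration of the training setup described above (a hedged sketch, not code from the article; the PyTorch/torchvision stack, the "signs/train" and "signs/val" folder layout, and all hyperparameters are assumptions made for the example):

    # Sketch: fit a small CNN to labeled road-sign photos, then check whether it
    # generalizes to signs that were not in the training set.
    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    train_set = datasets.ImageFolder("signs/train", transform=tf)  # hypothetical paths
    val_set = datasets.ImageFolder("signs/val", transform=tf)
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=32)

    model = models.resnet18(num_classes=len(train_set.classes))
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(5):
        model.train()
        for images, labels in train_loader:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()

    # Evaluate on held-out signs: whatever "common features" the net has extracted
    # are the ones that maximized accuracy, not anything a human would call an octagon.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            correct += (model(images).argmax(dim=1) == labels).sum().item()
            total += labels.numel()
    print(f"validation accuracy: {correct / total:.2%}")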

This works pretty well, but the common features that machine learning algorithms come up with generally are not "red octagons with the letters S-T-O-P on them." Rather, they're looking [at] features that all stop signs share, but would not be in the least bit comprehensible to a human looking at them. If this seems hard to visualize, that's because it reflects a fundamental disconnect between the way our brains and artificial neural networks interpret the world.

The upshot here is that slight alterations to an image that are invisible to humans can result in wildly different (and sometimes bizarre) interpretations from a machine learning algorithm. These "adversarial images" have generally required relatively complex analysis and image manipulation, but a group of researchers from the University of Washington, the University of Michigan, Stony Brook University, and the University of California Berkeley have just published a paper showing that it's also possible to trick visual classification algorithms by making slight alterations in the physical world. A little bit of spray paint or some stickers on a stop sign were able to fool a deep neural network-based classifier into thinking it was looking at a speed limit sign 100 percent of the time.
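
The researchers' attack uses physical paint and stickers, but the digital version of the idea is easy to sketch. The snippet below is a hedged illustration using the fast gradient sign method, a standard technique from the adversarial-examples literature rather than the method in this paper; model, images, and labels stand in for a trained classifier and one of its inputs:

    # Sketch of a digital adversarial perturbation (FGSM): nudge every pixel a
    # tiny amount in the direction that increases the loss for the true label.
    import torch
    from torch import nn

    def fgsm(model, images, labels, eps=0.01):
        """images: (N, C, H, W) tensor in [0, 1]; labels: (N,) true classes."""
        images = images.clone().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(images), labels)
        loss.backward()
        # A change of +/- eps per pixel is typically invisible to a human viewer,
        # yet it can flip the classifier's prediction to a different sign class.
        return (images + eps * images.grad.sign()).clamp(0, 1).detach()

Feeding the perturbed image back through the same classifier is what produces the "wildly different (and sometimes bizarre) interpretations" described above.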

Source: http://spectrum.ieee.org/cars-that-think/transportation/sensors/slight-street-sign-modifications-can-fool-machine-learning-algorithms

OpenAI has a captivating and somewhat frightening background article: Attacking Machine Learning with Adversarial Examples.


Original Submission

 
  • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @03:34PM (#549542) (7 children)

    I guess you can't edit as anonymous coward, many sites have cookies that let you do so.

    Well, to continue on: since many vision sensors have difficulty with depth perception, lighting, odd patches, etc., by placing artifacts in spots that are not so difficult to find, you can distort things so that after processing the image ranks strongest in the row representing another object. In the look inside the network that I posted above, showing what the image looks like after being processed, the artifact would end up creating or erasing pixels in the pipeline, and those changes would carry through. Those pixels can end up being a significant portion of the image, especially if the network lowers the resolution for memory and speed purposes.
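
    (A back-of-the-envelope illustration of that last point, with made-up numbers rather than anything from the paper: the sticker's share of the sign stays roughly constant when the image is downscaled, but at a low network input resolution it lands on only a handful of pixels, each of which is a much larger slice of everything the net sees, and the downscaling blends sticker and sign colors together, which is how the artifact carries through the pipeline.)

        # Hedged back-of-the-envelope, illustrative numbers only: how many input
        # pixels a small sticker occupies at typical network input resolutions.
        sign_cm, sticker_cm = 75.0, 10.0
        share = (sticker_cm / sign_cm) ** 2          # ~1.8% of the sign's area
        for res in (1000, 224, 32):                   # camera vs. common net inputs
            sticker_px = share * res * res            # pixels touched by the sticker
            print(f"{res:>4}x{res}: sticker covers ~{sticker_px:.0f} of {res * res} "
                  f"pixels; each altered pixel is 1/{res * res} of the input")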

  • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @03:43PM (#549545)

    Your post shows how it works. The original post does not show how or why it is failing. It comes down to speed: the higher the resolution you use, the bigger the net you need. Basically, the point of the article is that aliasing is being leaned on heavily to guess what an object is, to a particular probability. If your net has not learned about those noise perturbations, then your net will fail. But adding in that data could cause your confidence on other items to fall, even where it was right before.
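
    (One standard way to "add in that data" is adversarial training: perturb each training batch with the same kind of noise and fit on both copies. Below is a hedged sketch reusing the hypothetical fgsm() helper, model, optimizer, and loaders from the sketches further up; whether confidence on clean images drops, as the comment suggests it might, depends on the model and data.)

        # Sketch of adversarial training: mix perturbed copies of each batch into
        # the loss so the net "learns about those noise perturbations".
        for images, labels in train_loader:
            adv_images = fgsm(model, images, labels, eps=0.01)  # gradients from this pass...
            opt.zero_grad()                                     # ...are discarded here
            clean_loss = loss_fn(model(images), labels)
            adv_loss = loss_fn(model(adv_images), labels)
            (clean_loss + adv_loss).backward()
            opt.step()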

  • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @03:47PM (#549546) (2 children)

    So basically your neural network is stupider than a horse. Have you tried putting a horse brain in a car? Seriously, just make a horse-car cyborg already. It would be better than any of your self-driving fake AI.

    • (Score: 1, Interesting) by Anonymous Coward on Sunday August 06 2017, @03:52PM (#549548)

      A simulation with actual rat neurons piloting a fighter jet

      https://www.newscientist.com/article/dn6573-brain-cells-in-a-dish-fly-fighter-plane/ [newscientist.com]

    • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @06:45PM (#549594)

      That is an intriguing idea, and you might even be able to sell it as a virtual paradise for the horse brain!! When not active, the brain is networked with the others and they run around an unimaginable paradise.

      Lol, the Matrix but for self-driving cars. Oblig: "Are we the baddies?"

  • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @04:05PM (#549553)

    Thanks for the link; it's a very interesting way to show how a NN works.

    > I guess you can't edit as anonymous coward, ...

    From one AC to another -- given the nature of many discussions here, I personally think that "no editing of posts" is a good thing; too many posters would be tempted to rewrite history. I Preview all but the shortest of my posts.

    There are cases like this where it would have made sense for you to continue your explanation, but sorting those out from all the rest would be a job for a neural network (grin).

  • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @05:33PM (#549579)

    > I guess you can't edit as anonymous coward, many sites have cookies that let you do so.

    Nobody can edit their posts. We like it that way.

  • (Score: 2) by maxwell demon (1608) on Sunday August 06 2017, @05:45PM (#549582) Journal

    > I guess you can't edit as anonymous coward

    You can't edit after posting, even if logged in, and even as a subscriber.

    --
    The Tao of math: The numbers you can count are not the real numbers.