
posted by Fnord666 on Sunday August 06 2017, @01:48PM
from the fool-me-once dept.

Submitted via IRC for Bytram

It's very difficult, if not impossible, for us humans to understand how robots see the world. Their cameras work like our eyes do, but the space between the image that a camera captures and actionable information about that image is filled with a black box of machine learning algorithms that are trying to translate patterns of features into something that they're familiar with. Training these algorithms usually involves showing them a set of different pictures of something (like a stop sign), and then seeing if they can extract enough common features from those pictures to reliably identify stop signs that aren't in their training set.
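As a rough illustration of the train-then-generalize workflow described above, here is a minimal sketch in Python/PyTorch: fit a small classifier on labeled example images, then check whether it recognizes held-out images it has never seen. The tiny network and the random stand-in data are placeholders for a real labeled image dataset, not anything from the article.

    # Illustrative sketch only: a toy classifier trained on labeled "images",
    # then evaluated on images that were not in its training set.
    import torch
    import torch.nn as nn

    def make_fake_dataset(n, classes=2):
        # Stand-in for real photos of signs: random 3x32x32 tensors with labels.
        images = torch.rand(n, 3, 32, 32)
        labels = torch.randint(0, classes, (n,))
        return images, labels

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    train_x, train_y = make_fake_dataset(256)
    test_x, test_y = make_fake_dataset(64)   # images not seen during training

    for epoch in range(10):
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(train_x), train_y)
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        predictions = model(test_x).argmax(dim=1)
        accuracy = (predictions == test_y).float().mean().item()
    print(f"held-out accuracy: {accuracy:.2f}")  # near chance here, since the data is random

With real stop-sign photos in place of the random tensors, the held-out accuracy is what tells you whether the model learned features that generalize rather than memorizing its training pictures.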

This works pretty well, but the common features that machine learning algorithms come up with generally are not "red octagons with the letters S-T-O-P on them." Rather, they are looking at features that all stop signs share but that would not be in the least bit comprehensible to a human looking at them. If this seems hard to visualize, that's because it reflects a fundamental disconnect between the way our brains and artificial neural networks interpret the world.

The upshot here is that slight alterations to an image that are invisible to humans can result in wildly different (and sometimes bizarre) interpretations from a machine learning algorithm. These "adversarial images" have generally required relatively complex analysis and image manipulation, but a group of researchers from the University of Washington, the University of Michigan, Stony Brook University, and the University of California Berkeley have just published a paper showing that it's also possible to trick visual classification algorithms by making slight alterations in the physical world. A little bit of spray paint or some stickers on a stop sign were able to fool a deep neural network-based classifier into thinking it was looking at a speed limit sign 100 percent of the time.
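For a sense of how digital adversarial examples are typically generated, here is a minimal sketch of the fast gradient sign method (FGSM), a standard technique discussed in the adversarial-examples literature. The paper above attacks signs in the physical world with paint and stickers; this sketch only illustrates the underlying idea that a tiny, carefully chosen perturbation can change a classifier's prediction. The model and input are placeholders, not the authors' setup.

    # Minimal FGSM sketch: nudge every pixel slightly in the direction that
    # increases the classifier's loss on the true label.
    import torch
    import torch.nn as nn

    def fgsm_perturb(model, image, true_label, epsilon=0.03):
        """Return an adversarially perturbed copy of `image` (pixels in [0, 1])."""
        image = image.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(image), true_label)
        loss.backward()
        # Step each pixel by at most epsilon in the loss-increasing direction.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

    if __name__ == "__main__":
        # Stand-in classifier and input: one 3x32x32 "image", 10 classes.
        model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
        image = torch.rand(1, 3, 32, 32)
        label = torch.tensor([3])
        adv = fgsm_perturb(model, image, label)
        print("max pixel change:", (adv - image).abs().max().item())  # bounded by epsilon

Because each pixel moves by at most epsilon, the perturbed image can look identical to a human while the classifier's output changes, which is the same disconnect the physical stickers exploit.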

Source: http://spectrum.ieee.org/cars-that-think/transportation/sensors/slight-street-sign-modifications-can-fool-machine-learning-algorithms

OpenAI has a captivating and somewhat frightening background article: Attacking Machine Learning with Adversarial Examples.


Original Submission

 
  • (Score: 2) by SomeGuy (5632) on Sunday August 06 2017, @07:04PM (#549601) (1 child)

    A particular Department of Transportation once had the brilliant idea of using RFID tags to inventory their signs. They envisioned literally just driving around and automatically getting inventory information from RFID tags. The problem is you are putting the tag behind a *huge sheet of metal*. Since the techs had to actually get out and look at the signs anyway they eventually just went with barcodes.

    And laws never stopped anyone from stealing a sign. Oh, look. Billybob Redneck just added another I-20 sign to his collection.

  • (Score: 0) by Anonymous Coward on Monday August 07 2017, @04:15AM (#549767)

    Out in the country around here, rednecks seem to use the signs for target practice. Even if the bullet misses, I wonder if any consumer-grade electronics could live through the shock loading and vibration from a gunshot?