posted by Fnord666 on Sunday August 06 2017, @01:48PM
from the fool-me-once dept.

Submitted via IRC for Bytram

It's very difficult, if not impossible, for us humans to understand how robots see the world. Their cameras work like our eyes do, but the space between the image that a camera captures and actionable information about that image is filled with a black box of machine learning algorithms that are trying to translate patterns of features into something that they're familiar with. Training these algorithms usually involves showing them a set of different pictures of something (like a stop sign), and then seeing if they can extract enough common features from those pictures to reliably identify stop signs that aren't in their training set.
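
For readers who want something concrete, here is a minimal sketch of that training-and-generalisation loop. PyTorch is used purely for illustration; the folder layout, the choice of network, and the hyperparameters are assumptions for the example, not details from the research described below.

```python
# Minimal sketch of the loop described above: a stock convolutional network is
# shown labelled sign images, then scored on signs it has never seen.
# "signs/train" and "signs/test" are hypothetical folders of labelled images.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])
train_set = datasets.ImageFolder("signs/train", transform=transform)
test_set = datasets.ImageFolder("signs/test", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = DataLoader(test_set, batch_size=32)

model = models.resnet18(num_classes=len(train_set.classes))  # small off-the-shelf CNN
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# Generalisation check: accuracy on stop signs that were not in the training set.
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in test_loader:
        correct += (model(images).argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
print(f"held-out accuracy: {correct / total:.2%}")
```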

This works pretty well, but the common features that machine learning algorithms come up with generally are not "red octagons with the letters S-T-O-P on them." Rather, they're looking at features that all stop signs share but that would not be in the least bit comprehensible to a human looking at them. If this seems hard to visualize, that's because it reflects a fundamental disconnect between the way our brains and artificial neural networks interpret the world.

The upshot here is that slight alterations to an image that are invisible to humans can result in wildly different (and sometimes bizarre) interpretations from a machine learning algorithm. These "adversarial images" have generally required relatively complex analysis and image manipulation, but a group of researchers from the University of Washington, the University of Michigan, Stony Brook University, and the University of California Berkeley have just published a paper showing that it's also possible to trick visual classification algorithms by making slight alterations in the physical world. A little bit of spray paint or some stickers on a stop sign were able to fool a deep neural network-based classifier into thinking it was looking at a speed limit sign 100 percent of the time.
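
The paper's physical attack involves a more elaborate optimisation over where and how to alter the sign; as a simpler illustration of the digital version of the idea, the sketch below uses the classic fast gradient sign method (FGSM). The `model`, `image`, and `label` names are assumptions: any trained PyTorch classifier, a single input tensor scaled to [0, 1], and its true class as a tensor.

```python
# Rough FGSM sketch: nudge every pixel a tiny amount in the direction that
# most increases the classifier's loss for the true label. The change is
# imperceptible to a human but can flip the predicted class.
import torch
import torch.nn.functional as F

def fgsm_adversarial(model, image, label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), label.unsqueeze(0))
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Usage (model.eval() assumed): the perturbed image looks identical to a human
# but may be classified as something else entirely.
# adv = fgsm_adversarial(model, image, label)
# print(model(adv.unsqueeze(0)).argmax(dim=1))
```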

Source: http://spectrum.ieee.org/cars-that-think/transportation/sensors/slight-street-sign-modifications-can-fool-machine-learning-algorithms

OpenAI has a captivating and somewhat frightening background article: Attacking Machine Learning with Adversarial Examples.


Original Submission

 
  • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @05:44PM (#549581) (3 children)

    Once the cars become more common it should be a lot easier to figure out what will fool a given type of classifier.

    This will remain so as long as "training" methods do not include adversarial examples (see the sketch after this comment) and do not actually train the classifier on the minimum set of features a stop sign must have. Seems to me the classifiers are triggering on stuff that's common to a given set of stop signs but not necessarily related to what actually makes something a stop sign.

    tl;dr: many AI people are merely at the "Alchemy" stage and don't actually know what they are doing. They are "throwing stuff into a pot" till it seems to work well enough.

    Human laws need to change before we can have robot vehicles.
    1) Drivers have more responsibilities than merely driving. In some places drivers have to ensure that minors are wearing seatbelts, and similar things.
    2) If an AI car keeps failing badly on some edge cases and it's not provably due to an adversarial attack, who gets their driving license revoked? All cars of that revision? The manufacturer?
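
For readers wondering what "including adversarial examples in training" might look like in practice, the usual recipe is adversarial training: perturb each batch with an attack such as the FGSM step sketched earlier and train on the perturbed copies alongside the clean ones. The sketch below is a generic, assumption-laden illustration (hypothetical names throughout), not anything from the paper or from any particular vendor's pipeline.

```python
# One epoch of plain adversarial training: for every batch, build FGSM-perturbed
# copies of the images and train on clean + perturbed examples together, pushing
# the classifier toward features that survive small perturbations.
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, train_loader, optimizer, epsilon=0.01):
    model.train()
    for images, labels in train_loader:
        # Batch-wise FGSM: same idea as the single-image sketch above.
        adv = images.clone().detach().requires_grad_(True)
        F.cross_entropy(model(adv), labels).backward()
        adv_images = (adv + epsilon * adv.grad.sign()).clamp(0, 1).detach()

        optimizer.zero_grad()  # discard the gradients used to build the attack
        batch = torch.cat([images, adv_images])
        targets = torch.cat([labels, labels])
        loss = F.cross_entropy(model(batch), targets)
        loss.backward()
        optimizer.step()
```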

  • (Score: 2) by FakeBeldin (3360) on Monday August 07 2017, @07:42AM (#549817) Journal (2 children)

    Interesting comment!

    tl;dr: many AI people are merely at the "Alchemy" stage and don't actually know what they are doing. They are "throwing stuff into a pot" till it seems to work well enough.

    This is part of the strength of Tesla. Their "pot" is becoming larger and larger with every mile anyone drives in any Tesla - so if any training set has actual stop signs in there, theirs ought to.

    I think that might be one way out of this conundrum: gather so much real-life driving data that any algorithm trained to match it performs as well as a human would.
    But that doesn't work in a lab setting - you need hundreds of thousands of miles of actual driving data. Google has a few million miles, Tesla has more. That might be sufficient.

    • (Score: 0) by Anonymous Coward on Monday August 07 2017, @09:23AM (#549836) (1 child)

      Yep, basically in theory it should eventually work better than a human. It may fail in unexpected edge cases, or ones that look "stupid by human standards", but those failures will become rarer and rarer (e.g. retrieve the crash data and retrain so it won't happen again).

      The thing is, if we stick to this approach to AI it would still be a dead end, because there really is no actual understanding by the AI. It's just a fancier form of brute-forcing.

      Even a crow understands the world better, with its walnut-sized brain consuming a fraction of a watt.

      • (Score: 2) by FakeBeldin (3360) on Monday August 07 2017, @01:16PM (#549912) Journal

        Well, once we've pushed machine learning algorithms to the brink, we will start to investigate why that brink exists.
        But for now, more (training data / hidden layers / processing time / ...) still provides easily-attainable improvements.

        Basically: once we're getting close to optimal performance of the technique, we'll start looking into minimizing it.

        Can't wait :)