
posted by Fnord666 on Sunday August 06 2017, @01:48PM
from the fool-me-once dept.

Submitted via IRC for Bytram

It's very difficult, if not impossible, for us humans to understand how robots see the world. Their cameras work like our eyes do, but the space between the image that a camera captures and actionable information about that image is filled with a black box of machine learning algorithms that are trying to translate patterns of features into something that they're familiar with. Training these algorithms usually involves showing them a set of different pictures of something (like a stop sign), and then seeing if they can extract enough common features from those pictures to reliably identify stop signs that aren't in their training set.
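
For a rough sense of what that training step looks like in code, here is a minimal sketch, assuming PyTorch and a hypothetical labeled road-sign dataset; the model, class count, and hyperparameters are placeholders, not anything used in the research described below:

    # Minimal, hypothetical training sketch: show the network labeled images
    # and nudge its weights until it classifies them correctly.
    import torch
    import torch.nn as nn
    import torchvision

    # Small off-the-shelf convolutional network; num_classes is a placeholder.
    model = torchvision.models.resnet18(num_classes=10)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

    def train_one_epoch(model, loader):
        model.train()
        for images, labels in loader:         # batches of (image tensor, class index)
            optimizer.zero_grad()
            logits = model(images)            # forward pass: images -> class scores
            loss = criterion(logits, labels)  # penalize wrong classifications
            loss.backward()                   # backpropagate gradients
            optimizer.step()                  # update the weights

Whatever "features" the network ends up relying on after this process are whatever happened to minimize the loss, which is why they need not resemble anything a human would name.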

This works pretty well, but the common features that machine learning algorithms come up with generally are not "red octagons with the letters S-T-O-P on them." Rather, they're looking at features that all stop signs share but that would not be in the least bit comprehensible to a human looking at them. If this seems hard to visualize, that's because it reflects a fundamental disconnect between the way our brains and artificial neural networks interpret the world.

The upshot here is that slight alterations to an image that are invisible to humans can result in wildly different (and sometimes bizarre) interpretations from a machine learning algorithm. These "adversarial images" have generally required relatively complex analysis and image manipulation, but a group of researchers from the University of Washington, the University of Michigan, Stony Brook University, and the University of California Berkeley have just published a paper showing that it's also possible to trick visual classification algorithms by making slight alterations in the physical world. A little bit of spray paint or some stickers on a stop sign were able to fool a deep neural network-based classifier into thinking it was looking at a speed limit sign 100 percent of the time.
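
For comparison, the digital version of such an attack can be remarkably simple. The sketch below uses the fast gradient sign method (FGSM), one well-known way of generating adversarial images; it is not the physical sticker attack from the paper, and the model, input tensors, and epsilon budget are assumed placeholders:

    # FGSM sketch in PyTorch: nudge each pixel slightly in the direction that
    # increases the classifier's loss, producing an image that looks unchanged
    # to a human but can be misclassified by the network.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, true_label, epsilon=0.01):
        """Return an adversarial copy of `image`; epsilon is a placeholder budget."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        # Step each pixel by +/- epsilon, following the sign of its gradient,
        # then clamp back to the valid pixel range.
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.clamp(0, 1).detach()

The physical-world attack in the paper achieves a similar effect without touching the pixels directly, by placing the perturbation on the sign itself.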

Source: http://spectrum.ieee.org/cars-that-think/transportation/sensors/slight-street-sign-modifications-can-fool-machine-learning-algorithms

OpenAI has a captivating and somewhat frightening background article: Attacking Machine Learning with Adversarial Examples.


Original Submission

 
  • (Score: 2) by patrick on Sunday August 06 2017, @11:47PM (3 children)

    by patrick (3990) on Sunday August 06 2017, @11:47PM (#549689)

    ED-209: [menacingly] Please put down your weapon. You have twenty seconds to comply.

    Dick Jones: I think you'd better do what he says, Mr. Kinney.

    [Mr. Kinney drops the pistol on the floor. ED-209 advances, growling]

    ED-209: You now have fifteen seconds to comply.

    [Mr. Kinney turns to Dick Jones, who looks nervous]

    ED-209: You are in direct violation of Penal Code 1.13, Section 9.

    [entire room of people in full panic trying to stay out of the line of fire, especially Mr. Kinney]

    ED-209: You have five seconds to comply.

    Kinney: Help...! Help me!

    ED-209: Four... three... two... one... I am now authorized to use physical force!

    [ED-209 opens fire]

  • (Score: 1, Informative) by Anonymous Coward on Monday August 07 2017, @05:17AM (2 children)

    by Anonymous Coward on Monday August 07 2017, @05:17AM (#549779)

    The scenarios aren't comparable. The example you mention is an AI failing to grasp its most fundamental and basic training. The issue here is adversarial attacks: intentionally using knowledge of how AI vision systems work to create scenarios where they fail to recognize something correctly. So for the RoboCop example, it would be more like Mr. Kinney sitting there smugly holding an L-shaped piece of cardboard with some nearly imperceptible visual modifications intentionally designed to make it look like a gun to an AI vision system.

    • (Score: 1, Interesting) by Anonymous Coward on Monday August 07 2017, @09:31AM

      by Anonymous Coward on Monday August 07 2017, @09:31AM (#549839)

      They are comparable if Mr. Kinney happened to wear clothing with a pattern that the AI vision thought was a weapon for some reason unknown at the time. But it'll be fixed in the next release, of course. Meanwhile, too bad about Mr. Kinney.

      Anyway, such robots might still be less trigger-happy than the average US cop. After all, in the USA they sack cops who don't shoot: http://www.npr.org/2016/12/08/504718239/military-trained-police-may-be-slower-to-shoot-but-that-got-this-vet-fired [npr.org]

      p.s. You're definitely doing something wrong if your average cop is more trigger-happy than a US soldier. US soldiers aren't famous for their restraint.

    • (Score: 0) by Anonymous Coward on Tuesday August 08 2017, @06:35AM

      by Anonymous Coward on Tuesday August 08 2017, @06:35AM (#550476)

      Well, maybe Kinney had nearly-invisible paint dots added to his forehead that were recognized as a gun?