posted by Fnord666 on Sunday August 06 2017, @01:48PM
from the fool-me-once dept.

Submitted via IRC for Bytram

It's very difficult, if not impossible, for us humans to understand how robots see the world. Their cameras work like our eyes do, but the space between the image that a camera captures and actionable information about that image is filled with a black box of machine learning algorithms that are trying to translate patterns of features into something that they're familiar with. Training these algorithms usually involves showing them a set of different pictures of something (like a stop sign), and then seeing if they can extract enough common features from those pictures to reliably identify stop signs that aren't in their training set.
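
To make the training step concrete, here is a minimal sketch in Python (assuming PyTorch) of the kind of classifier the article describes. The architecture, the 32x32 input size, and the class count are illustrative assumptions, not details from the paper.

```python
# A minimal sketch (assumed PyTorch, not the researchers' actual pipeline) of
# training an image classifier of the kind described above. The architecture,
# 32x32 input size, and class count are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):                              # x: (batch, 3, 32, 32)
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)     # -> (batch, 16, 16, 16)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)     # -> (batch, 32, 8, 8)
        return self.fc(x.flatten(1))                   # class logits

def train_step(model, optimizer, images, labels):
    """One gradient step: nudge the learned features toward whatever
    separates the classes -- not necessarily anything human-readable."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Nothing in this loop rewards features a human would recognize; the network settles on whatever statistics happen to separate the training classes, which is exactly why the features it learns can be so alien.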

This works pretty well, but the common features that machine learning algorithms come up with are generally not "red octagons with the letters S-T-O-P on them." Rather, they're looking at features that all stop signs share but that would not be in the least bit comprehensible to a human. If this seems hard to visualize, that's because it reflects a fundamental disconnect between the way our brains and artificial neural networks interpret the world.

The upshot here is that slight alterations to an image that are invisible to humans can result in wildly different (and sometimes bizarre) interpretations from a machine learning algorithm. These "adversarial images" have generally required relatively complex analysis and image manipulation, but a group of researchers from the University of Washington, the University of Michigan, Stony Brook University, and the University of California Berkeley have just published a paper showing that it's also possible to trick visual classification algorithms by making slight alterations in the physical world. A little bit of spray paint or some stickers on a stop sign were able to fool a deep neural network-based classifier into thinking it was looking at a speed limit sign 100 percent of the time.
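
For a sense of how digital adversarial images are typically generated, here is a minimal sketch of the fast gradient sign method (FGSM), a standard technique from the adversarial-examples literature. The physical stickers-and-paint attack in the paper uses a different optimization, so this is illustrative only; `model` is any differentiable classifier like the one sketched earlier.

```python
# A minimal sketch of the fast gradient sign method (FGSM), a standard way to
# generate digital adversarial images. The paper's physical attack uses a
# different optimization; this only illustrates the underlying idea.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """image: (1, 3, H, W) tensor with values in [0, 1];
    true_label: (1,) long tensor. Returns an adversarial copy of image."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

With a small epsilon the perturbation is imperceptible to a human, yet because it is aligned with the model's gradient it can flip the predicted class.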

Source: http://spectrum.ieee.org/cars-that-think/transportation/sensors/slight-street-sign-modifications-can-fool-machine-learning-algorithms

OpenAI has a captivating and somewhat frightening background article: Attacking Machine Learning with Adversarial Examples.


Original Submission

 
  • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @07:01PM (#549599)

    Yes, the theme of "safety first" is a pretty popular one throughout history. You always have groups that want to push the envelope and try something new, and you also have groups that worry about major changes and don't want to mess up a society that has been working relatively well so far.

    No one is saying to stop self-driving tech, and given the huge inertia behind it, studies like these are actually very important for keeping us aware of the potential pitfalls. Personally I'm more worried about the massive shift in government opinion. The stance on self-driving cars went from "maybe in 10-20 years" and "massive regulation / restriction" to "test it out here! Let's launch this shit YEEHAW!"

    I don't understand the rapid shift, but as with everything else where opinions radically change over a short time, there is probably a boatload of money to be made. My guess is that major car manufacturers will push for regulation that lets them capture the market, then corner self-driving car services, and it will play out like the telecom boom: overcharge massively for transportation, implement surveillance tech, and, combined with cell phones, you have a complete surveillance society.

    My opinion: self-driving cars should be self-contained. They can use GPS and cell towers for location, but they should not be SENDING anything back. We've lost control over our own lives and the objects we interact with. Soon we will all just be renting our lives from the ownership class, and life will become even more a matter of wage slavery. Phew, that tangent snuck up on me!