
posted by Fnord666 on Sunday August 06 2017, @01:48PM   Printer-friendly
from the fool-me-once dept.

Submitted via IRC for Bytram

It's very difficult, if not impossible, for us humans to understand how robots see the world. Their cameras work like our eyes do, but the space between the image that a camera captures and actionable information about that image is filled with a black box of machine learning algorithms that are trying to translate patterns of features into something that they're familiar with. Training these algorithms usually involves showing them a set of different pictures of something (like a stop sign), and then seeing if they can extract enough common features from those pictures to reliably identify stop signs that aren't in their training set.

This works pretty well, but the common features that machine learning algorithms come up with generally are not "red octagons with the letters S-T-O-P on them." Rather, they're looking at features that all stop signs share but that would not be in the least bit comprehensible to a human looking at them. If this seems hard to visualize, that's because it reflects a fundamental disconnect between the way our brains and artificial neural networks interpret the world.

The upshot here is that slight alterations to an image that are invisible to humans can result in wildly different (and sometimes bizarre) interpretations from a machine learning algorithm. These "adversarial images" have generally required relatively complex analysis and image manipulation, but a group of researchers from the University of Washington, the University of Michigan, Stony Brook University, and the University of California Berkeley have just published a paper showing that it's also possible to trick visual classification algorithms by making slight alterations in the physical world. A little bit of spray paint or some stickers on a stop sign were able to fool a deep neural network-based classifier into thinking it was looking at a speed limit sign 100 percent of the time.
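The core trick behind adversarial images can be sketched in a few lines. The example below is a deliberately tiny, made-up illustration (a two-class linear "classifier" with hand-picked weights, not a deep network, and an exaggerated perturbation budget `eps`): it nudges each "pixel" in the sign of the gradient that favors the wrong class, which is the same idea the fast gradient sign method applies to real networks, where the per-pixel change can be small enough to be invisible.

```python
import numpy as np

# Toy linear "classifier": class scores are W @ x.
# All numbers here are invented for illustration.
W = np.array([[ 1.0, 2.0, -1.0],   # weights for class 0 (say, "stop sign")
              [-1.0, 0.5,  2.0]])  # weights for class 1 (say, "speed limit")
x = np.array([1.0, 1.0, 0.0])      # clean "image" (3 pixels)

def predict(img):
    return int(np.argmax(W @ img))

assert predict(x) == 0             # clean input is classified as class 0

# Targeted gradient-sign step: move every pixel in the direction that
# raises (target score - true score), with each pixel changed by at most eps.
eps = 0.6
grad = W[1] - W[0]                 # gradient of (class-1 score - class-0 score)
x_adv = x + eps * np.sign(grad)    # bounded, uniform-looking perturbation

print(predict(x_adv))              # → 1: a small nudge flips the label
```

For a real deep network the gradient comes from backpropagation rather than a weight difference, but the attack structure is the same; the physical-world result described above shows the perturbation can even be realized as paint or stickers.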


OpenAI has a captivating and somewhat frightening background article: Attacking Machine Learning with Adversarial Examples.

Original Submission

  • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @05:33PM (1 child)


    It seems to be a recurring theme in society today that we look for reasons not to do something. Yet take nearly anything today and it can be destroyed, thwarted, or worse in absurdly trivial ways.

      - You can turn cars into death traps with little more than a well-placed snip from a pair of cutters.
      - Cut tires can cause substantial damage to a car and render it inoperable (without even more damage) in just a few seconds.
      - A well-placed squirt of super glue in a lock can not only lock somebody out of their house but require extensive work to remedy, up to and including complete removal of the lock.

    And you can get far more catastrophic than that. Imagine if somebody today, for the first time, suggested building electric utility poles: we'll string countless wooden poles all over the country and then connect them with extremely high-voltage lines. Those lines can kill anything that completes a circuit, and we understand that those poles/lines will come down in extreme weather (or other events), possibly starting fires, electrocuting things, and so on. If today's social zeitgeist had existed a couple hundred years ago, we likely still would not have a national electric grid.

    This attack is simply not relevant. It would be incredibly visible, incredibly simple to counteract if it did become an issue, and it requires substantial technological sophistication on the part of the attackers. On top of that, the ultimate effect would likely be mostly negligible. It could contribute to an accident if, for instance, a stop sign were effectively removed, but that's a 'could' on top of an 'if'. Heck, even then, that's no different than some kid stealing the sign because reasons.

  • (Score: 0) by Anonymous Coward on Sunday August 06 2017, @07:01PM


    Yes, "safety first" is a pretty popular theme throughout history. You always have groups that want to push the envelope and try something new. You also have groups that worry about major changes and don't want to mess up a society that has been working relatively well so far.

    No one is saying to stop self-driving tech, and with the huge inertia behind it, these studies are actually very important for keeping us aware of the potential pitfalls. Personally, I'm more worried about the massive shift in government opinion, which went from "maybe in 10-20 years" and "massive regulation/restriction" to "test it out here! Let's launch this shit YEEHAW!"

    I don't understand the rapid shift, but as with everything else where opinions radically change over a short time, there is probably a boatload of money to be made. My guess is major car manufacturers will push for regulation to capture the market, then corner the market on self-driving car services, and it can become like the telecom boom: overcharge massively for transportation, implement surveillance tech, and, combined with cell phones, you have a complete surveillance society.

    My opinion: self-driving cars should be self-contained. They can use GPS and cell towers for location, but they should not be SENDING anything back. We've lost control over our own lives and the objects we interact with. Soon we will all just be renting our lives from the ownership class, and life will become even more based on wage slavery. Phew, that tangent snuck up on me!