
posted by janrinok on Wednesday January 10 2018, @04:34PM   Printer-friendly
from the do-you-see-what-I-see? dept.

Image recognition technology may be sophisticated, but it is also easily duped. Researchers have fooled algorithms into mistaking two skiers for a dog, a baseball for espresso, and a turtle for a rifle. But a new method of deceiving the machines is simple and far-reaching, involving just a humble sticker.

Google researchers developed a psychedelic sticker that, when placed in an unrelated image, tricks deep learning systems into classifying the image as a toaster. According to a recently submitted research paper about the attack, this adversarial patch is "scene-independent," meaning someone could deploy it "without prior knowledge of the lighting conditions, camera angle, type of classifier being attacked, or even the other items within the scene." It's also easily accessible, given that it can be shared and printed from the internet.
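
For concreteness, here is a minimal sketch of what mounting such an attack looks like at inference time: paste a printed patch into an ordinary photo and ask a pretrained classifier what it sees. The file names and model choice are assumptions for illustration; a real patch would come from the optimization described in the paper, not from this snippet.

    # Sketch: paste an adversarial patch into a photo and check what a
    # pretrained classifier reports. "patch.png" and "banana_photo.jpg" are
    # hypothetical files; the ResNet-50 model is an assumption.
    import torch
    from PIL import Image
    from torchvision import models, transforms

    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.eval()

    scene = Image.open("banana_photo.jpg").convert("RGB")
    patch = Image.open("patch.png").convert("RGB")

    # Paste the patch into a corner; the paper's claim is that placement,
    # lighting, and camera angle barely matter.
    scene.paste(patch.resize((80, 80)), (10, 10))

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    with torch.no_grad():
        logits = model(preprocess(scene).unsqueeze(0))
    print("predicted class index:", logits.argmax(dim=1).item())
    # With an effective patch, this prints 859 ("toaster" in the standard
    # ImageNet labels) rather than the true object's class.

The point of the demonstration is that nothing about the scene needs to be known in advance: the same printed patch is reused across photos.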


Original Submission

 
  • (Score: 0) by Anonymous Coward on Wednesday January 10 2018, @06:38PM (#620559) (2 children)

    How do they generalize that the sticker has the same effect on all deep-learning image recognition algorithms? I highly doubt ones trained specifically to recognize bananas, and nothing else, could suddenly call it a toaster.

  • (Score: 2) by sgleysti (56) Subscriber Badge on Thursday January 11 2018, @03:11AM (#620784) (1 child)

    They demonstrate a general method for creating a patch that tricks a classification algorithm, but any patch created by this method is only likely to work against the algorithm it was designed to fool. Furthermore, they had better success in the white-box setting, where they used knowledge of the inner workings of the classification algorithm to design the patch. They still had fair success when using the classification algorithm only as a black box.
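
    To make that concrete, here is a minimal sketch of the white-box variant: hold a pretrained classifier fixed and run gradient descent on the patch pixels so that, pasted at a random position in each training image, the patch drives the classifier toward a chosen target class. The model, the random stand-in images, and the hyperparameters are assumptions for illustration, not the paper's actual setup.

        # White-box patch optimization sketch: gradients flow to the patch,
        # never to the frozen classifier. Random tensors stand in for real
        # training photos (which would also need ImageNet normalization).
        import torch
        import torch.nn.functional as F
        from torchvision import models

        model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        model.eval()
        for p in model.parameters():
            p.requires_grad_(False)

        TARGET = 859          # "toaster" in the standard ImageNet labels
        PATCH_SIZE = 50
        patch = torch.rand(1, 3, PATCH_SIZE, PATCH_SIZE, requires_grad=True)
        opt = torch.optim.Adam([patch], lr=0.05)

        def paste_at_random(image, patch):
            # Overwrite a randomly chosen region of the image with the patch.
            _, _, h, w = image.shape
            y = torch.randint(0, h - PATCH_SIZE, (1,)).item()
            x = torch.randint(0, w - PATCH_SIZE, (1,)).item()
            out = image.clone()
            out[:, :, y:y + PATCH_SIZE, x:x + PATCH_SIZE] = patch
            return out

        for step in range(200):
            image = torch.rand(1, 3, 224, 224)          # stand-in photo
            logits = model(paste_at_random(image, patch.clamp(0, 1)))
            loss = F.cross_entropy(logits, torch.tensor([TARGET]))
            opt.zero_grad()
            loss.backward()   # gradient reaches the patch, not the model
            opt.step()

    A black-box attacker can't call loss.backward() on the target model; a common workaround is to optimize the patch against substitute classifiers you do have access to and rely on it transferring, which fits the weaker black-box success described above.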