posted by janrinok on Wednesday January 10 2018, @04:34PM
from the do-you-see-what-I-see? dept.

Image recognition technology may be sophisticated, but it is also easily duped. Researchers have fooled algorithms into confusing two skiers for a dog, a baseball for espresso, and a turtle for a rifle. But a new method of deceiving the machines is simple and far-reaching, involving just a humble sticker.

Google researchers developed a psychedelic sticker that, when placed in an unrelated image, tricks deep learning systems into classifying the image as a toaster. According to a recently submitted research paper about the attack, this adversarial patch is "scene-independent," meaning someone could deploy it "without prior knowledge of the lighting conditions, camera angle, type of classifier being attacked, or even the other items within the scene." It's also easily accessible, given it can be shared and printed from the internet.
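
Roughly, such a patch is trained by pasting it into ordinary images at random positions and optimising its pixels so that the classifier's output is pulled towards the target class no matter what else is in the picture. Below is a minimal sketch of that idea, assuming PyTorch and torchvision are available; the patch size, optimiser settings, the `unrelated_images` batch source and the class index are illustrative placeholders, not the values used in the paper.

    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Frozen pretrained classifier standing in for "the model being attacked".
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    for p in model.parameters():
        p.requires_grad_(False)

    TARGET = 859                  # ImageNet-1k index commonly listed for "toaster"
    patch = torch.rand(3, 64, 64, requires_grad=True)   # the printable "sticker"
    opt = torch.optim.Adam([patch], lr=0.05)

    def paste_patch(images, patch):
        # Paste the (clamped) patch at a random location in each image, so the
        # optimisation cannot rely on any single position or scene.
        out = images.clone()
        _, _, h, w = images.shape
        ph, pw = patch.shape[1:]
        for i in range(images.shape[0]):
            y = int(torch.randint(0, h - ph, (1,)))
            x = int(torch.randint(0, w - pw, (1,)))
            out[i, :, y:y + ph, x:x + pw] = patch.clamp(0, 1)
        return out

    # `unrelated_images` is a placeholder for batches of ordinary 224x224 photos.
    for images in unrelated_images:
        labels = torch.full((images.shape[0],), TARGET, dtype=torch.long)
        loss = F.cross_entropy(model(paste_patch(images, patch)), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()

The paper additionally optimises over transformations such as rotation, scale and brightness, which is where the scene-independence quoted above comes from.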


Original Submission

 
  • (Score: 2) by Wootery on Friday January 12 2018, @01:30PM (1 child)

    by Wootery (2341) on Friday January 12 2018, @01:30PM (#621357)

    Two ideas spring to mind:

    • For some ML algorithms, there exist more robust variations. For instance, RobustBoost is a variant of the AdaBoost algorithm that is far less sensitive to incorrectly labeled data points in the training set (a quick sketch of that sensitivity follows this list). I don't know whether this has any bearing on the security question.
    • I imagine some sort of 'adversarial quasi-self-play' (as it were) could be used to train for better security.
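
    As a rough illustration of the first point, here is a small scikit-learn sketch of AdaBoost's sensitivity to label noise (scikit-learn does not ship RobustBoost, so only the fragile half of the comparison is shown; the dataset, noise rate and hyperparameters are arbitrary):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

        # Baseline: boosting on correctly labeled data.
        clean = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

        # Flip 15% of the training labels to simulate (accidental) mislabelling.
        rng = np.random.default_rng(0)
        flip = rng.random(len(y_tr)) < 0.15
        y_noisy = np.where(flip, 1 - y_tr, y_tr)
        noisy = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_noisy)

        print("clean labels :", clean.score(X_te, y_te))
        print("15% flipped  :", noisy.score(X_te, y_te))

    Whether a robust variant would close that gap against a deliberately adversarial labeller is exactly the open question.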
  • (Score: 3, Insightful) by TheRaven on Saturday January 13 2018, @03:17PM

    by TheRaven (270) on Saturday January 13 2018, @03:17PM (#621818) Journal

    For some ML algorithms, there exist more robust variations. For instance, RobustBoost is a variant of the AdaBoost algorithm that is far less sensitive to incorrectly labeled data points in the training set. I don't know whether this has any bearing on the security question.

    That doesn't really help, because it assumes non-malicious mislabelling. It's analogous to error correction: ECC will protect you against all of the bit flips that are likely to occur accidentally, but if an attacker can flip a few bits intelligently then they can get past it.
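
    To make the analogy concrete, here is a toy [7,4] Hamming-code example: nearest-codeword decoding fixes any single accidental bit flip, but three well-chosen flips land on a different valid codeword and the decoder happily returns the wrong data. (A minimal Python/NumPy sketch; nothing here is specific to any real ECC deployment.)

        import itertools
        import numpy as np

        # Generator matrix of a [7,4,3] Hamming code: 4 data bits + 3 parity bits.
        G = np.array([[1, 0, 0, 0, 1, 1, 0],
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])

        messages = [np.array(m) for m in itertools.product([0, 1], repeat=4)]
        codebook = {tuple(m @ G % 2): tuple(m) for m in messages}

        def decode(received):
            # Nearest-codeword decoding: corrects any single-bit error.
            nearest = min(codebook, key=lambda c: int(np.sum(np.array(c) != received)))
            return codebook[nearest]

        data = np.array([1, 0, 1, 1])
        sent = data @ G % 2

        # Accidental noise: one random bit flip is always corrected.
        noisy = sent.copy()
        noisy[np.random.randint(7)] ^= 1
        assert decode(noisy) == tuple(data)

        # Intelligent attacker: flip the three bits that move `sent` onto another
        # valid codeword (the code's minimum distance). Decoding "succeeds",
        # but with the wrong message.
        forged = next(np.array(c) for c in codebook
                      if int(np.sum(np.array(c) != sent)) == 3)
        assert decode(forged) != tuple(data)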

    I imagine some sort of 'adversarial quasi-self-play' (as it were) could be used to train for better security

    That's more likely, but it's very computationally expensive (even by machine-learning standards) and it has the same problem: a genuinely intelligent adversary is unlikely to pick the same variations as an automated process that is not intelligently directed. Any machine learning approach gives you an approximation - the techniques are inherently unsuitable for producing anything else - and an intelligent adversary will always be able to find places where that approximation is wrong.
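
    The usual concrete form of that idea is adversarial training: run an attack against the current model inside every training step, then update the model on the perturbed batch, which is exactly why it is so expensive; multi-step attacks such as PGD multiply the cost further. A minimal single-step (FGSM-style) sketch, assuming PyTorch, with `model`, `optimizer`, `images`, `labels` and `eps` as placeholders:

        import torch
        import torch.nn.functional as F

        def adversarial_training_step(model, optimizer, images, labels, eps=8 / 255):
            # "Adversary" move: craft a worst-case perturbation of the batch
            # with one signed-gradient step (FGSM).
            images = images.clone().requires_grad_(True)
            loss = F.cross_entropy(model(images), labels)
            grad, = torch.autograd.grad(loss, images)
            adv_images = (images + eps * grad.sign()).clamp(0, 1).detach()

            # "Defender" move: update the model on the perturbed batch.
            optimizer.zero_grad()
            adv_loss = F.cross_entropy(model(adv_images), labels)
            adv_loss.backward()
            optimizer.step()
            return adv_loss.item()

    Even this cheap single-step version roughly doubles the cost of each training step, and it only hardens the model against the perturbations the inner attack happens to produce, which is the limitation described above.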

    --
    sudo mod me up