
posted by janrinok on Wednesday January 10 2018, @04:34PM
from the do-you-see-what-I-see? dept.

Image recognition technology may be sophisticated, but it is also easily duped. Researchers have fooled algorithms into confusing two skiers for a dog, a baseball for espresso, and a turtle for a rifle. But a new method of deceiving the machines is simple and far-reaching, involving just a humble sticker.

Google researchers developed a psychedelic sticker that, when placed in an unrelated image, tricks deep learning systems into classifying the image as a toaster. According to a recently submitted research paper about the attack, this adversarial patch is "scene-independent," meaning someone could deploy it "without prior knowledge of the lighting conditions, camera angle, type of classifier being attacked, or even the other items within the scene." It's also easily accessible, given it can be shared and printed from the internet.
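
For readers curious how such a sticker is actually made, here is a minimal, hypothetical sketch of the general idea (not the researchers' own code), assuming a recent PyTorch/torchvision install. Only the patch pixels are optimised; the classifier stays fixed, and the loss pushes its prediction toward ImageNet class 859 ("toaster") wherever the patch happens to land. A real attack would average over many natural images, random rotations and scales, and would normalise inputs the way the classifier expects.

import torch
import torch.nn.functional as F
from torchvision import models

# Fixed, pretrained classifier; only the patch below is trained.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

patch = torch.rand(3, 50, 50, requires_grad=True)    # the printable "sticker"
optimizer = torch.optim.Adam([patch], lr=0.01)
target = torch.tensor([859])                          # ImageNet class "toaster"

def paste(img, patch):
    # Drop the patch at a random position inside a 3x224x224 image.
    x = torch.randint(0, 224 - 50, (1,)).item()
    y = torch.randint(0, 224 - 50, (1,)).item()
    out = img.clone()
    out[:, y:y + 50, x:x + 50] = patch.clamp(0, 1)
    return out

for step in range(1000):
    img = torch.rand(3, 224, 224)      # placeholder; real training uses natural photos
    logits = model(paste(img, patch).unsqueeze(0))    # (input normalisation omitted)
    loss = F.cross_entropy(logits, target)            # push the prediction toward "toaster"
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Averaging over random placements and transformations during training is what lets the finished patch work regardless of where it ends up in the scene.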


Original Submission

 
  • (Score: 2) by TheRaven (270) on Thursday January 11 2018, @11:08AM (#620878) Journal (2 children)
    Once upon a time, people wrote software. They wrote this software for use on non-connected systems, and assumed all data was trustworthy. Later, they learned that it was important to consider an adversary in the design of their systems, and at least some software became more secure. Then people came up with complex ways of making decisions based on correlations. They assumed that their data was always trustworthy. Eventually, they will figure out that you have to design with an adversary in mind. Unfortunately, this is probably impossible for most current machine learning techniques, because to understand how to counter an adversary you have to actually understand the problem that you're trying to solve, and if you understand the problem you're trying to solve then machine learning is not the right tool for the job.
    --
    sudo mod me up
  • (Score: 2) by Wootery (2341) on Friday January 12 2018, @01:30PM (#621357) (1 child)

    Two ideas spring to mind:

    • For some ML algorithms, there exist more robust variations. RobustBoost, for instance, is a variant of AdaBoost that is far less sensitive to incorrectly labeled data points in the training set (see the sketch after this list). I don't know if this has any bearing on the security question.
    • I imagine some sort of 'adversarial quasi-self-play' (as it were) could be used to train for better security.
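
A quick way to see the sensitivity the first bullet describes (a hypothetical sketch: scikit-learn ships AdaBoost but not RobustBoost, so this only shows how much standard AdaBoost degrades when a fraction of the training labels are flipped at random):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem with a held-out test set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Flip 20% of the training labels at random to simulate mislabelled data.
rng = np.random.default_rng(0)
noisy = y_train.copy()
flip = rng.random(len(noisy)) < 0.2
noisy[flip] = 1 - noisy[flip]

clean = AdaBoostClassifier(random_state=0).fit(X_train, y_train).score(X_test, y_test)
dirty = AdaBoostClassifier(random_state=0).fit(X_train, noisy).score(X_test, y_test)
print(f"clean labels: {clean:.3f}   20% flipped labels: {dirty:.3f}")

Whether robustness to random mislabelling carries over to deliberately crafted inputs is exactly the question the reply below raises.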
    • (Score: 3, Insightful) by TheRaven (270) on Saturday January 13 2018, @03:17PM (#621818) Journal

      For some ML algorithms, there exist more robust variations. RobustBoost, for instance, is a variant of AdaBoost that is far less sensitive to incorrectly labeled data points in the training set. I don't know if this has any bearing on the security question.

      That doesn't really help, because it assumes non-malicious mislabelling. It's analogous to error correction: ECC will protect you against all of the bit flips that are likely to occur accidentally, but if an attacker can flip a few bits intelligently then they can get past it.
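
To make the ECC analogy concrete, a small hypothetical sketch with a Hamming(7,4) code: a random single-bit flip produces a non-zero syndrome and is caught, but flipping the three specific bits that turn one valid codeword into another sails straight past the check.

import numpy as np

# Hamming(7,4): G maps 4 data bits to a 7-bit codeword; H yields a zero
# syndrome for every valid codeword and a non-zero one for single-bit errors.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

code = np.array([1,0,1,1]) @ G % 2                 # codeword for the message 1011

accidental = code.copy()
accidental[np.random.randint(7)] ^= 1              # one random bit flip
print("random flip syndrome:   ", H @ accidental % 2)   # non-zero: detected, correctable

targeted = np.array([0,0,1,1]) @ G % 2             # a different message's valid codeword
print("targeted flips syndrome:", H @ targeted % 2)     # all zeros: accepted as genuine
print("bits the attacker flipped:", int((code != targeted).sum()))   # only 3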

      I imagine some sort of 'adversarial quasi-self-play' (as it were) could be used to train for better security

      That's more likely, but it's very computationally expensive (even by machine-learning standards) and it has the same problem: an intelligent adversary is unlikely to pick the same variations as something that is not intelligently directed. Any machine learning approach gives you an approximation - the techniques are inherently unsuitable for producing anything else - and an intelligent adversary will always be able to find places where the approximation is wrong.
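
For a sense of what that kind of training looks like, here is a minimal hypothetical sketch of adversarial training (assuming PyTorch; the model, data loader and epsilon are placeholders you would supply). The extra forward/backward pass per batch is where the computational cost comes from.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # One fast-gradient-sign perturbation of the inputs against the current model.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def adversarial_train(model, loader, epochs=10, eps=0.03, lr=1e-3):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = fgsm(model, x, y, eps)     # the expensive extra pass
            opt.zero_grad()                    # discard gradients left over from fgsm
            F.cross_entropy(model(x_adv), y).backward()
            opt.step()
    return model

Even this only hardens the model against the perturbations the chosen attack procedure happens to generate, which is the point above: an intelligent adversary can simply search outside that distribution.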

      --
      sudo mod me up