
posted by janrinok on Wednesday January 10 2018, @04:34PM
from the do-you-see-what-I-see? dept.

Image recognition technology may be sophisticated, but it is also easily duped. Researchers have fooled algorithms into mistaking two skiers for a dog, a baseball for espresso, and a turtle for a rifle. But a new method of deceiving the machines is simple and far-reaching, involving just a humble sticker.

Google researchers developed a psychedelic sticker that, when placed in an unrelated image, tricks deep learning systems into classifying the image as a toaster. According to a recently submitted research paper about the attack, this adversarial patch is "scene-independent," meaning someone could deploy it "without prior knowledge of the lighting conditions, camera angle, type of classifier being attacked, or even the other items within the scene." It's also easily accessible, given it can be shared and printed from the internet.
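
Roughly, attacks like this optimize the patch pixels so that, averaged over random placements and transformations, the classifier assigns any image containing the patch to the chosen target class ("toaster" here). A minimal sketch of that kind of optimization loop, assuming PyTorch and an off-the-shelf pretrained ImageNet classifier; the helper names and the simple random-placement loop are illustrative, not taken from the paper:

    import torch
    import torchvision.models as models

    # Any pretrained ImageNet classifier will do for the sketch (torchvision >= 0.13
    # weights API); input preprocessing/normalization is omitted for brevity.
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    for p in model.parameters():
        p.requires_grad_(False)

    TARGET_CLASS = 859   # "toaster" in the ImageNet-1k label set
    PATCH_SIZE = 64      # the paper varies the patch scale

    # The patch is the only thing being optimized.
    patch = torch.rand(1, 3, PATCH_SIZE, PATCH_SIZE, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=0.05)

    def apply_patch(images, patch):
        """Paste the patch at a random location in each image -- a crude stand-in
        for the random rotation/scale/translation averaging used in the paper.
        Expects float images in [0, 1] of shape (N, 3, H, W), with H, W > PATCH_SIZE."""
        out = images.clone()
        for i in range(out.shape[0]):
            y = int(torch.randint(0, out.shape[2] - PATCH_SIZE + 1, (1,)))
            x = int(torch.randint(0, out.shape[3] - PATCH_SIZE + 1, (1,)))
            out[i, :, y:y + PATCH_SIZE, x:x + PATCH_SIZE] = patch[0].clamp(0, 1)
        return out

    def patch_training_step(images):
        """One optimization step: push the classifier toward the target class
        for patched versions of arbitrary images."""
        optimizer.zero_grad()
        logits = model(apply_patch(images, patch))
        target = torch.full((images.shape[0],), TARGET_CLASS, dtype=torch.long)
        loss = torch.nn.functional.cross_entropy(logits, target)
        loss.backward()
        optimizer.step()
        return loss.item()

Averaging over random placements is what buys the "scene-independent" property: the same printed patch keeps working wherever it ends up in the frame.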


Original Submission

 
  • (Score: 1, Insightful) by Anonymous Coward on Wednesday January 10 2018, @05:08PM (7 children)

    by Anonymous Coward on Wednesday January 10 2018, @05:08PM (#620521)

    From https://gizmodo.com/this-simple-sticker-can-trick-neural-networks-into-thin-1821735479 [gizmodo.com]

    Most notably, with the roll-out of self-driving cars. These machines rely on image recognition software to understand and interact with their surroundings. Things could get dangerous if thousands of pounds of metal rolling down the highway can only see toasters.

  • (Score: 0) by Anonymous Coward on Wednesday January 10 2018, @05:19PM (2 children)

    by Anonymous Coward on Wednesday January 10 2018, @05:19PM (#620525)

    Makes me wonder. Will clothing and fashion accessories that defeat facial recognition become illegal because they could lead to death for occupants of autonomous vehicles?

    Does that mean that machine vision is still not ready for self-driving cars?

    Moreover, do we want to imagine a world where machine vision is ready for self-driving cars and cannot be fooled by clever clothing and fashion accessories?

    • (Score: 2) by bob_super on Wednesday January 10 2018, @05:28PM

      by bob_super (1357) on Wednesday January 10 2018, @05:28PM (#620531)

      "We need those self-driving cars, because they save lives! Therefore, all clothing is now banned!"
      CA: Fine, man!
      ND, WY: No self-driving cars!
      FL: Self-driving vehicles allowed, but not near retirement communities...

    • (Score: 0) by Anonymous Coward on Wednesday January 10 2018, @05:51PM

      by Anonymous Coward on Wednesday January 10 2018, @05:51PM (#620542)

      > Does that mean that machine vision is still not ready for self-driving cars?

      All the might of the tech industry, brought to its knees by graffiti artists.

  • (Score: 2) by HiThere on Wednesday January 10 2018, @06:20PM

    by HiThere (866) Subscriber Badge on Wednesday January 10 2018, @06:20PM (#620551) Journal

    And that's why it's important that this stuff be done *now*, so the algorithms can be hardened before the cars become common.

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 2) by TheRaven on Thursday January 11 2018, @11:08AM (2 children)

    by TheRaven (270) on Thursday January 11 2018, @11:08AM (#620878) Journal
    Once upon a time, people wrote software. They wrote this software for use on non-connected systems, and assumed all data was trustworthy. Later, they learned that it was important to consider an adversary in the design of their systems and at least some software became more secure. Then people came up with complex ways of making decisions based on correlations. They assumed that their data was always trustworthy. Eventually, they will figure out that you have to design with an adversary in mind. Unfortunately, this is probably impossible for most current machine learning techniques, because to understand how to counter an adversary you have to actually understand the problem that you're trying to solve, and if you understand the problem that you're trying to solve then machine learning is not the right tool for the job.
    --
    sudo mod me up
    • (Score: 2) by Wootery on Friday January 12 2018, @01:30PM (1 child)

      by Wootery (2341) on Friday January 12 2018, @01:30PM (#621357)

      Two ideas spring to mind:

      • For some ML algorithms, there exist more robust variations. For instance, RobustBoost is a variant of the AdaBoost algorithm that is far less sensitive to incorrectly labeled data points in the training set. I don't know if this has any bearing on the security question (a quick label-noise experiment is sketched below).
      • I imagine some sort of 'adversarial quasi-self-play' (as it were) could be used to train for better security.
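
      The label-noise sensitivity of plain AdaBoost is easy to see for yourself. A small experiment sketch using scikit-learn (RobustBoost itself isn't in scikit-learn, so this only demonstrates the problem it is meant to address):

          import numpy as np
          from sklearn.datasets import make_classification
          from sklearn.ensemble import AdaBoostClassifier
          from sklearn.model_selection import train_test_split

          # Synthetic binary classification problem.
          X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
          X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

          def accuracy_with_label_noise(noise_rate):
              """Flip a fraction of the training labels, then measure test accuracy."""
              rng = np.random.default_rng(0)
              y_noisy = y_train.copy()
              flip = rng.random(len(y_noisy)) < noise_rate
              y_noisy[flip] = 1 - y_noisy[flip]
              clf = AdaBoostClassifier(n_estimators=200, random_state=0)
              clf.fit(X_train, y_noisy)
              return clf.score(X_test, y_test)

          for rate in (0.0, 0.1, 0.2, 0.3):
              acc = accuracy_with_label_noise(rate)
              print(f"label noise {rate:.0%}: test accuracy {acc:.3f}")

      Boosting keeps increasing the weight of the examples it gets wrong, so flipped labels end up dominating the later rounds; that is the failure mode RobustBoost is designed to mitigate.
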
      • (Score: 3, Insightful) by TheRaven on Saturday January 13 2018, @03:17PM

        by TheRaven (270) on Saturday January 13 2018, @03:17PM (#621818) Journal

        For some ML algorithms, there exist more robust variations. For instance, RobustBoost is a variant of the AdaBoost algorithm that is far less sensitive to incorrectly labeled data points in the training set. I don't know if this has any bearing on the security question.

        That doesn't really help, because it assumes non-malicious mislabelling. It's analogous to error correction: ECC will protect you against all of the bit flips that are likely to occur accidentally, but if an attacker can flip a few bits intelligently then they can get past it.
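
        The same point in toy form, with a simple additive checksum standing in for ECC (not any real ECC scheme, just an illustration): a random corruption gets caught, but someone who knows the check can make compensating changes that pass it.

            def checksum(data: bytes) -> int:
                """Toy integrity check: sum of all bytes modulo 256."""
                return sum(data) % 256

            msg = bytearray(b"PAY $100 TO ALICE")
            good = checksum(msg)

            # Accidental corruption: garble one byte; the check (almost always) notices.
            corrupted = bytearray(msg)
            corrupted[5] ^= 0xFF
            print(checksum(corrupted) == good)   # False

            # Adversarial corruption: change two bytes so the effects cancel out.
            tampered = bytearray(msg)
            tampered[5] = (tampered[5] + 1) % 256
            tampered[6] = (tampered[6] - 1) % 256
            print(checksum(tampered) == good)    # True: the check passes, the message lies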

        I imagine some sort of 'adversarial quasi-self-play' (as it were) could be used to train for better security

        That's more likely, but it's very computationally expensive (even by machine-learning standards) and it has the same problem: an intelligent adversary is unlikely to pick the same possible variations as something that is not intelligently directed. Any machine learning approach gives you an approximation - the techniques are inherently unsuitable for producing anything else - and an intelligent adversary will always be able to find places where an approximation is wrong.
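
        For the record, the usual concrete version of that 'self-play' idea is adversarial training: attack the current model on every batch and train on the perturbed inputs, which is exactly where the extra cost comes from. A rough FGSM-style sketch in PyTorch (the model, data loader and epsilon here are placeholders, not anything from the thread):

            import torch
            import torch.nn.functional as F

            def fgsm_perturb(model, images, labels, epsilon):
                """Build adversarial examples with the fast gradient sign method."""
                images = images.clone().detach().requires_grad_(True)
                loss = F.cross_entropy(model(images), labels)
                loss.backward()
                # Step in the direction that increases the loss; keep pixels valid.
                return (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

            def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
                """One epoch trained on adversarial examples instead of clean ones.
                Note the extra forward/backward pass per batch -- that is the expense."""
                model.train()
                for images, labels in loader:
                    adv = fgsm_perturb(model, images, labels, epsilon)
                    optimizer.zero_grad()
                    loss = F.cross_entropy(model(adv), labels)
                    loss.backward()
                    optimizer.step()

        FGSM is about the cheapest attack you can fold into training; stronger iterative attacks multiply the cost further, and the defence still only covers the perturbations the attack procedure happens to generate.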

        --
        sudo mod me up