
posted by martyb on Wednesday November 02 2016, @06:13PM   Printer-friendly
from the you-lookin'-at-ME? dept.

Computers, phones, and even online stores are starting to use your face as a password. But new research from Carnegie Mellon University shows that facial recognition software is far from secure.

In a paper (pdf) presented at a security conference on Oct. 28, researchers showed they could trick AI facial recognition systems into misidentifying faces—making someone caught on camera appear to be someone else, or even unrecognizable as human. With a special pair of eyeglass frames, the team forced commercial-grade facial recognition software into identifying the wrong person with up to 100% success rates.

http://qz.com/823820/carnegie-mellon-made-a-special-pair-of-glasses-that-lets-you-steal-a-digital-identity/
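The summary above describes the core technique: the adversarial pattern is confined to the printed eyeglass frames and tuned against the face recognizer so that the wearer is matched to a chosen identity. A minimal, illustrative sketch of that general idea (assumed PyTorch setup; model, glasses_mask, and target_id are placeholders, not the authors' code):

    import torch

    def glasses_attack(model, image, glasses_mask, target_id, steps=300, lr=0.01):
        """Optimize a perturbation restricted to the glasses-shaped mask so the
        classifier labels the image as `target_id`. All names are illustrative."""
        delta = torch.zeros_like(image, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            # Only pixels under the glasses mask are allowed to change.
            adv = torch.clamp(image + delta * glasses_mask, 0.0, 1.0)
            logits = model(adv.unsqueeze(0))
            loss = torch.nn.functional.cross_entropy(
                logits, torch.tensor([target_id]))
            opt.zero_grad()
            loss.backward()
            opt.step()
        return torch.clamp(image + delta.detach() * glasses_mask, 0.0, 1.0)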


Original Submission

 
  • (Score: 1) by rbanfield (818) on Thursday November 03 2016, @05:27AM (#421938)

    What's interesting about this work is that it shows real examples of a real device being fooled. It is already a known problem for deep neural networks that an attacker with access to the network structure and weights can craft images that are only slightly and imperceptibly modified, yet get classified as something totally different and ridiculous. In this case the perturbation also had to survive being captured through the camera system to force the misidentification, which is pretty impressive. (A rough white-box sketch of the underlying trick follows the video link below.)

    I highly recommend watching at least some of the following video. Skip the first hour and two minutes and tune in to the section on "adversarial examples". It's really interesting to see what is required to turn a big school bus into an ostrich (hint: damn near nothing at all, to the human eye).

    https://www.youtube.com/watch?v=ASdbG_7KMhc [youtube.com]
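    A rough sketch of the white-box trick mentioned above, in the fast-gradient-sign style (illustrative only: model, image, and true_label are assumed placeholders, not anything from the talk or the paper):

        import torch

        def fgsm_perturb(model, image, true_label, epsilon=0.007):
            """Return an adversarial copy of `image` via the fast gradient sign
            method; a single tiny step can flip a classifier's label while the
            change stays nearly invisible to a human."""
            image = image.clone().detach().requires_grad_(True)
            logits = model(image.unsqueeze(0))  # add batch dimension
            loss = torch.nn.functional.cross_entropy(
                logits, torch.tensor([true_label]))
            loss.backward()
            # Step in the direction that increases the loss, bounded by epsilon.
            adv = image + epsilon * image.grad.sign()
            return torch.clamp(adv.detach(), 0.0, 1.0)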

  • (Score: 2) by TheLink (332) on Thursday November 03 2016, @07:21PM (#422205) Journal
    You can see from the mistakes they make that most modern AIs don't actually understand stuff. And that includes IBM's Watson.

    People and other animals make mistakes, but the mistakes are generally different. They wouldn't mistake a school bus for an ostrich.

    FWIW I personally prefer that we head down the path of augmenting humans rather than replacing humans. The tech involved is similar (there could still be a place for neural networks etc), but the results could be different - e.g. the proportion of enslaved humans might be different :).