
SoylentNews is people

posted by martyb on Wednesday November 02 2016, @06:13PM   Printer-friendly
from the you-lookin'-at-ME? dept.

Computers, phones, and even online stores are starting to use your face as a password. But new research from Carnegie Mellon University shows that facial recognition software is far from secure.

In a paper (pdf) presented at a security conference on Oct. 28, researchers showed they could trick AI facial recognition systems into misidentifying faces—making someone caught on camera appear to be someone else, or even unrecognizable as human. With a special pair of eyeglass frames, the team forced commercial-grade facial recognition software into identifying the wrong person with up to 100% success rates.

http://qz.com/823820/carnegie-mellon-made-a-special-pair-of-glasses-that-lets-you-steal-a-digital-identity/
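The core idea behind the attack is that a perturbation confined to the region an eyeglass frame covers can push a classifier's score toward a different identity. Below is a minimal toy sketch of that idea (not the paper's actual method or models): a hypothetical linear "face classifier" matches a flattened 8x8 image against two identity templates, and one gradient step restricted to a glasses-shaped pixel mask flips the predicted identity.

```python
import numpy as np

# Two hypothetical identity templates for a toy linear classifier.
t_victim   = np.concatenate([np.ones(32), -np.ones(32)])  # identity 0
t_attacker = np.ones(64)                                  # identity 1
W = np.vstack([t_victim, t_attacker])

def classify(x):
    """Return the index of the best-matching identity template."""
    return int(np.argmax(W @ x))

face = t_attacker.copy()   # the attacker's own face
assert classify(face) == 1 # correctly recognized at first

# Only pixels an eyeglass frame could cover may change (a hypothetical
# 16-pixel band here); in the paper, this physical constraint is what
# the printed frames provide.
mask = np.zeros(64)
mask[32:48] = 1.0

# One gradient step toward the victim's score and away from the
# attacker's, applied only inside the masked region.
grad = W[0] - W[1]
adv_face = face + 2.0 * mask * grad

print(classify(adv_face))  # -> 0: misidentified as the victim
```

The real attack optimizes the frame texture against deep networks over many iterations and under printability constraints, but the mask-restricted gradient step is the same basic mechanism.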


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Wednesday November 02 2016, @08:14PM (#421817)

    "Up to 100%"? With no lower bound, that could be practically anything.

  • (Score: 2) by Capt. Obvious (6089) on Wednesday November 02 2016, @08:48PM (#421825)

    You should read the article. One pair of glasses achieved a 100% fooling rate (across implementations); another achieved 87%, and the rest fell in between. Note that the 87% pair combined gender-changing and specific-identity features.

    That said, the 100% figure is suspect because it was photoshopped rather than a real photograph. The 87% result was real (and, as I said, emulated a specific person).

  • (Score: 0) by Anonymous Coward on Wednesday November 02 2016, @08:53PM (#421826)

    Their goal for the lower bound on the dodging attack was 80% success. The actual result in their testing was 91.67%, but that was just one experiment out of 8; the other 7 experiments had a 100% success rate.