Computers, phones, and even online stores are starting to use your face as a password. But new research from Carnegie Mellon University shows that facial recognition software is far from secure.
In a paper presented at a security conference on Oct. 28, researchers showed they could trick AI facial recognition systems into misidentifying faces—making someone caught on camera appear to be someone else, or even unrecognizable as human. With a special pair of eyeglass frames, the team forced commercial-grade facial recognition software into identifying the wrong person with up to 100% success rates.
(Score: 1) by rbanfield on Thursday November 03 2016, @05:27AM
What's interesting about this work is that it shows real examples of a real device being fooled. It is a known problem with deep neural networks that, given access to the network's structure and weights, an attacker can craft images that are only slightly and imperceptibly modified yet produce totally different and ridiculous classifications. In this case the perturbation also had to survive the camera system to force the misidentification, which is pretty impressive.
I highly recommend watching part of the following video. Skip the first hour and two minutes, and tune in to the section on "adversarial examples". It's really interesting to see how little is required to turn a big school bus into an ostrich (hint: damn near nothing at all to the human eye).
https://www.youtube.com/watch?v=ASdbG_7KMhc [youtube.com]
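To make the white-box idea above concrete, here is a minimal NumPy sketch in the spirit of the fast gradient sign method (a simplification for illustration, not the method from the paper): with access to a model's weights, you push every input feature a small, fixed step in the direction that increases the loss. A toy linear classifier is used here so the gradient is just the weight vector itself; all names and constants are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "classifier": a fixed linear model in 1000 dimensions.
w = rng.normal(size=1000)
b = 0.0

def predict(x):
    """Return class 1 if the logit is positive, else class 0."""
    return 1 if x @ w + b > 0 else 0

# Construct an input classified as class 1 with logit exactly 2.0.
x = 2.0 * w / (w @ w)

# FGSM-style step: for this linear model the gradient of the logit
# w.r.t. the input is just w, so we subtract epsilon * sign(w) to
# lower the logit as efficiently as possible per feature.
epsilon = 0.01
x_adv = x - epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # the tiny per-feature nudge flips the class
```

The point mirrors the comment: each coordinate moves by only 0.01, yet in high dimensions those nudges add up along the gradient and flip the decision, which is why imperceptible image changes can produce "ridiculous" misclassifications.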
(Score: 2) by TheLink on Thursday November 03 2016, @07:21PM
People and other animals make mistakes, but the mistakes are generally different. They wouldn't mistake a school bus for an ostrich.
FWIW I personally prefer that we head down the path of augmenting humans rather than replacing humans. The tech involved is similar (there could still be a place for neural networks etc), but the results could be different - e.g. the proportion of enslaved humans might be different :).