The American Civil Liberties Union, in an effort to demonstrate the dangers of face recognition technology, ran photos of members of Congress against a database of mug shots using Amazon Rekognition software. The test incorrectly identified 28 legislators as criminals (cue the jokes; yes, the Congress members were confirmed to be elsewhere at the time). The ACLU hopes that demonstrating that this risk hits close to home will get Congress more interested in regulating the use of this technology.
The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.). These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance.
[...] If law enforcement is using Amazon Rekognition, it’s not hard to imagine a police officer getting a “match” indicating that a person has a previous concealed-weapon arrest, biasing the officer before an encounter even begins. Or an individual getting a knock on the door from law enforcement, and being questioned or having their home searched, based on a false identification.
(Score: 2) by The Shire on Monday July 30 2018, @01:24AM
When the confidence threshold of a neural net is set to 80%, I can promise you the false positive rate is MUCH higher than 5%. Again, they're using this tool incorrectly. It's a filter, not a pointer. It returns a large set of possible matches that a human must then sift through and decide which, if any, is an actual match. No one out there is using this for traffic stops expecting it to be a positive ID. The ACLU is making claims that a system is failing to do what it was never designed or intended to do.
If you give me a hammer and I then run around smashing car windows, you don't blame the hammer for being destructive; you blame the user for using the tool incorrectly. That's the case here as well. This software is not intended to positively ID anyone, nor is it offered as something that does so.
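The filter-versus-pointer distinction can be made concrete with a toy simulation. This is not Rekognition's actual API or its real score distributions, just a hypothetical sketch: similarity scores are drawn from two made-up distributions (one for genuine matches, one for strangers), and a threshold turns those scores into a candidate list. Lowering the threshold from the default inflates the list with false positives, which is exactly why a human must sift the results rather than treat any single "match" as an ID.

```python
import random

random.seed(42)

# Hypothetical score distributions (NOT measured from any real system):
# genuine match pairs tend to score high, non-matching pairs lower,
# but the two distributions overlap.
true_match_scores = [random.gauss(0.92, 0.05) for _ in range(100)]
non_match_scores = [random.gauss(0.70, 0.10) for _ in range(10_000)]

def candidates(scores, threshold):
    """The 'filter': return every pair whose score clears the threshold."""
    return [s for s in scores if s >= threshold]

for threshold in (0.80, 0.95, 0.99):
    true_pos = len(candidates(true_match_scores, threshold))
    false_pos = len(candidates(non_match_scores, threshold))
    print(f"threshold={threshold:.2f}  "
          f"genuine matches kept={true_pos}/100  "
          f"false positives={false_pos}/10000")
```

Under these assumed distributions, an 80% cutoff lets through a large slice of the non-matching population, while a stricter cutoff shrinks the candidate list at the cost of missing some genuine matches. The exact numbers are artifacts of the toy distributions; the qualitative trade-off is the commenter's point.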