
posted by Fnord666 on Tuesday June 30 2020, @08:02AM
from the misidentified? dept.

AWS Facial Recognition Platform Misidentified Over 100 Politicians As Criminals:

Comparitech's Paul Bischoff found that Amazon's facial recognition platform misidentified an alarming number of people, and was racially biased.

Facial recognition technology is still misidentifying people at an alarming rate – even as it's being used by police departments to make arrests. In fact, Paul Bischoff, consumer privacy expert with Comparitech, found that Amazon's face recognition platform misidentified more than 100 photos of US and UK lawmakers as criminals.

Rekognition, Amazon's cloud-based facial recognition platform that was first launched in 2016, has been sold to and used by a number of United States government agencies, including ICE and Orlando, Florida police, as well as private entities. In comparing photos of a total of 1,959 US and UK lawmakers to subjects in an arrest database, Bischoff found that Rekognition misidentified an average of 32 members of Congress. That's four more than in a similar experiment conducted by the American Civil Liberties Union (ACLU) two years ago. Bischoff also found that the platform was racially biased, misidentifying non-white people at a higher rate than white people.

These findings have disturbing real-life implications. Last week, the ACLU shed light on Detroit citizen Robert Julian-Borchak Williams, who was arrested after a facial recognition system falsely matched his photo with security footage of a shoplifter.

The incident sparked lawmakers last week to propose legislation that would indefinitely ban the use of facial recognition technology by law enforcement nationwide. Though Amazon previously had sold its technology to police departments, the tech giant recently placed a law enforcement moratorium on facial recognition (Microsoft and IBM did the same). But Bischoff says society still has a ways to go in figuring out how to correctly utilize facial recognition in a way that complies with privacy, consent and data security.

Previously:

(2020-06-28) Nationwide Facial Recognition Ban Proposed by Lawmakers
(2020-06-11) Amazon Bans Police From Using its Facial Recognition Software for One Year
(2020-06-10) Senator Fears Clearview AI Facial Recognition Use on Protesters
(2020-06-09) IBM Will No Longer Offer, Develop, or Research Facial Recognition Technology
(2020-05-08) Clearview AI to Stop Selling Controversial Facial Recognition App to Private Companies
(2020-05-08) How Well Can Algorithms Recognize Your Masked Face?
(2020-04-18) Some Shirts Hide You from Cameras
(2020-04-02) Microsoft Supports Some Facial Recognition Software
(2020-03-23) Here's What Facebook's Internal Facial Recognition App Looked Like
(2020-03-21) How China Built Facial Recognition for People Wearing Masks
(2020-03-13) Vermont Sues Clearview, Alleging 'Oppressive, Unscrupulous' Practices
(2020-02-28) Clearview AI's Facial Recognition Tech is Being Used by US Justice Department, ICE, and the FBI
(2020-02-24) Canadian Privacy Commissioners to Investigate "Creepy" Facial Recognition Firm Clearview AI
(2020-02-06) Clearview AI Hit with Cease-And-Desist from Google, Facebook Over Facial Recognition Collection
(2020-01-30) Facebook Pays $550M to Settle Facial Recognition Privacy Lawsuit
(2020-01-29) London to Deploy Live Facial Recognition to Find Wanted Faces in a Crowd
(2020-01-22) Clearview App Lets Strangers Find Your Name, Info with Snap of a Photo, Report Says
(2020-01-20) Google and Alphabet CEO Sundar Pichai Calls for AI Regulations
(2020-01-17) Facial Recognition: EU Considers Ban of up to Five Years
(2019-12-14) The US, Like China, Has About One Surveillance Camera for Every Four People, Says Report
(2019-12-11) Moscow Cops Sell Access to City CCTV, Facial Recognition Data
(2019-12-07) Proposal To Require Facial Recognition For US Citizens At Airports Dropped
(2019-12-03) Homeland Security Wants Airport Face Scans for US Citizens

Original Submission

 
  • (Score: 4, Interesting) by Anonymous Coward on Tuesday June 30 2020, @12:02PM (7 children)

    by Anonymous Coward on Tuesday June 30 2020, @12:02PM (#1014453)

    Let's refer to the actual study:
    https://www.comparitech.com/blog/vpn-privacy/facial-recognition-study/ [comparitech.com]

    The software doesn't output a simple yes or no answer. It outputs a confidence level. The errors with the politicians happened with the confidence level set to only 80%. With 530 American politicians tested, you'd expect up to 106 errors. Instead the software produced 32 errors. With the confidence level set to 95%, it produced no errors, even though, by chance, it should have produced up to 26. So the software is actually quite conservative with its matching and is significantly outperforming its claims. With UK politicians, it performed even better. However, for some reason, the study decided to consider multiple false matches of one person as a single error, instead of once for every photo it incorrectly matched against. This is a very strange decision which makes the system seem more accurate than it is, but the effect is probably small.

    Now onto the racial bias. It's true that the software misidentified nonwhite politicians at a higher rate than nonwhite ones. But does that mean that the software is racially biased? Well, maybe. Let's assume that skin color is a really major factor in identification, such that the software will almost never misidentify someone as someone of another race. Now the problem here is that the study isn't designed to be an honest assessment of the accuracy of the system, but rather is trying to prove that it can misidentify anyone as a criminal (even though Amazon recommends a 99% confidence threshold for this use, where the software made zero errors). But the software is effectively comparing white people against white mugshots, and black people against black mugshots. As everyone knows, the rate of arrests is not equal among races. Therefore, we know that although the nonwhite politicians were misidentified at a higher rate, they also had a disproportionately large number of opportunities to be misidentified, which is not accounted for in the results. This is doubly true when you consider that nonwhites are not only overrepresented in arrests, but underrepresented in politics.

    A fair assessment would compare all the pictures being tested against a database that has a similar distribution with respect to the attribute being measured, and it should also be tested where the test groups are broken down into the groups you want to check, and tested against a database containing only those kinds of people. It doesn't have to be race. It could be, for example, people wearing hats, or with beards, or whatever.

    Because the study does not break down the racial composition of the mugshots it uses, there's no way to know, other than going to the source and taking a sample, what the source material actually included. I don't have time to do this right now.

    Furthermore, the entire notion of "racially biased" implicitly assumes that the use of the software is for some racially sensitive purpose, as opposed to, say, monitoring which employees spend too much time on lunch break, or identifying frequent repeat customers to a business, or whatever other completely race-irrelevant task it could also be used for.

  • (Score: 2) by Aegis on Tuesday June 30 2020, @02:32PM

    by Aegis (6714) on Tuesday June 30 2020, @02:32PM (#1014502)

    Furthermore, the entire notion of "racially biased" implicitly assumes that the use of the software is for some racially sensitive purpose, as opposed to, say, monitoring which employees spend too much time on lunch break

    Being wrongly labeled lazy and then fired is exactly the type of bias people are worried about...

  • (Score: 4, Interesting) by sjames on Tuesday June 30 2020, @06:35PM (5 children)

    by sjames (2882) on Tuesday June 30 2020, @06:35PM (#1014631) Journal

    However, we know that in practice, police are setting the confidence lower and then blindly accepting the "matches" as fact. They are also using poor-quality input photos and will ignore a warning that the confidence level is insufficient to support probable cause. In short, if the software CAN be misused, not only will it be, it already has been. It is perfectly fair to test the software as it is actually used in the wild.

    Also, given your explanation, that still means that if you are black, through no fault of your own, you are more likely to have the police show up to arrest you out of the blue for a crime you had no involvement in. Attributing that to the input dataset is not much of a consolation.

    • (Score: 0) by Anonymous Coward on Wednesday July 01 2020, @01:43AM (3 children)

      by Anonymous Coward on Wednesday July 01 2020, @01:43AM (#1014832)

      Attributing that to the input dataset is not much of a consolation.

      True enough, but it's misleading to say "the algorithm is biased" if actually the problem is "the database is biased" or even "the algorithm didn't solve racism in policing but still did better than humans." It's certainly possible, at least in principle, to build a software system that's not biased. It's probably not possible to find humans with absolutely no bias. This is why I think it's a mistake to ban facial recognition. It has the potential to significantly reduce racial bias in policing while also helping catch more criminals.

      The study I linked contains a digression on the software used by police departments, but I ignored it because it wasn't the software being tested and it wasn't the topic of this article either. It seems that the Amazon software performs quite a bit better than the software used by the police. Whether that's because the police misuse the software, or their software just isn't as good, or what, I have no way of knowing.

      • (Score: 2) by sjames on Wednesday July 01 2020, @08:27AM

        by sjames (2882) on Wednesday July 01 2020, @08:27AM (#1014913) Journal

        But DOES it do better than humans? Does it still do better than humans as deployed and used in the field?

        In the situation in Detroit, the incorrect match made by the software was immediately apparent to any human that bothered to look (spoiler, the police didn't until the wrongly arrested man held the crime photo up to his face).

      • (Score: 0) by Anonymous Coward on Wednesday July 01 2020, @11:40AM (1 child)

        by Anonymous Coward on Wednesday July 01 2020, @11:40AM (#1014951)

        History has shown us this won't end well. Maybe you've never been in legal trouble, but when you get flagged as a shoplifter or something, I hope they listen to you, and don't just treat you as another lying criminal who was obviously bad or your face wouldn't have been flagged.
        Whenever you think about how people will use a technology, imagine the stupidest people you can. Your imagination is probably inadequate to imagine what idiocy they will actually practice.

        • (Score: 0) by Anonymous Coward on Wednesday July 01 2020, @05:06PM

          by Anonymous Coward on Wednesday July 01 2020, @05:06PM (#1015068)

          When fingerprinting became available, it helped catch more guilty criminals while exonerating the innocent. When DNA evidence became available, the same thing happened. This is true even though neither of those technologies was completely immune to problems or abuses. Now facial recognition is becoming available and... people think it's different this time, for some reason. It's almost never "different this time."

    • (Score: 0) by Anonymous Coward on Wednesday July 01 2020, @12:29PM

      by Anonymous Coward on Wednesday July 01 2020, @12:29PM (#1014972)
      OK, maybe the software is a bit more racially biased than the guns the cops are using to kill black people.

      Seriously speaking, I think the "racially biased facial recognition" thing is more a lighting and contrast issue.