
SoylentNews is people

posted by janrinok on Saturday July 28 2018, @06:50PM   Printer-friendly
from the they're-criminals dept.

The American Civil Liberties Union, in an effort to demonstrate the dangers of face recognition technology, ran photos of members of Congress against a database of mug shots using Amazon Rekognition software. That test incorrectly identified 28 legislators as criminals (cue the jokes - yes, the Congress members were confirmed to be elsewhere at the time). They hope that demonstrating that this risk hits close to home will get Congress more interested in regulating the use of this technology.

The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.). These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance.

[...] If law enforcement is using Amazon Rekognition, it’s not hard to imagine a police officer getting a “match” indicating that a person has a previous concealed-weapon arrest, biasing the officer before an encounter even begins. Or an individual getting a knock on the door from law enforcement, and being questioned or having their home searched, based on a false identification.


Original Submission

  • (Score: 2) by urza9814 (3954) on Monday July 30 2018, @06:12PM (#714855) Journal

    One of the major issues here is that computer analysis can be used to conceal bias.

    Look at the criminal population with the unbiased eye of an AI

    That's not possible -- there's a bias inherent in the dataset, and if you use a biased dataset to train an AI you're going to end up with a biased AI. "The prison population" doesn't directly tell you anything about who commits crime, it only tells you about who gets caught.

    If a certain population is over-represented there, it COULD be because they inherently commit more crimes...or because they're targeted more by police, or because they're less able to earn gainful employment and therefore more often forced to resort to crime, or because they're less likely to find competent legal representation, or because they're less competent at the crimes they do commit, or any number of other reasons.

    If you're only training the AI based on what humans have already done, then it can only learn to mimic humans -- including mimicking our mistakes. So saying that the AI is inherently unbiased is no different from saying the original humans are inherently unbiased. Do you really think the US justice system never makes a mistake and has zero bias in its activities?
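    The "biased dataset in, biased model out" point can be sketched with a toy simulation (everything here is invented for illustration: the group names, offense rate, stop rates, and population size are hypothetical). Even when two groups offend at exactly the same rate, unequal policing makes one group look "riskier" in the arrest records that any model would be trained on:

```python
# Hypothetical toy simulation: two groups with the SAME underlying
# offense rate, but group B is stopped by police twice as often.
# A "risk score" learned from arrest records alone inherits that bias.
import random

random.seed(0)

OFFENSE_RATE = 0.05                  # identical for both groups
STOP_RATE = {"A": 0.10, "B": 0.20}   # group B is policed 2x as heavily

population = {"A": 0, "B": 0}
arrests = {"A": 0, "B": 0}

for _ in range(100_000):
    group = random.choice(["A", "B"])
    population[group] += 1
    offended = random.random() < OFFENSE_RATE
    stopped = random.random() < STOP_RATE[group]
    if offended and stopped:         # only *caught* offenses enter the dataset
        arrests[group] += 1

# The arrest rate per capita is what a naive model would learn as "risk":
risk = {g: arrests[g] / population[g] for g in arrests}
print(risk)  # group B appears roughly twice as "risky" despite equal offense rates
```

    The model never sees who actually offended, only who was caught, so the policing rate leaks straight into the learned score -- exactly the point about arrest data measuring enforcement, not crime.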
