posted by Fnord666 on Wednesday April 14 2021, @02:15PM   Printer-friendly
from the pattern-recognition dept.

Verge:

A man who was falsely accused of shoplifting has sued the Detroit Police Department for arresting him based on an incorrect facial recognition match. The American Civil Liberties Union filed suit on behalf of Robert Williams, whom it calls the first US person wrongfully arrested based on facial recognition.

[...] The ACLU claims Detroit police used facial recognition under circumstances that they should have known would produce unreliable results, then dishonestly failed to mention the system's shortcomings — including a "woefully substandard" image and the known racial bias of recognition systems.

[...] Thousands of law enforcement agencies have allegedly used facial recognition tech to identify suspects. But a backlash has led several cities to ban the practice, while Microsoft, IBM, and Amazon have pledged to keep their systems out of police hands.

Also at The Washington Post, The Detroit News, and others.


Original Submission

Related Stories

Producing More but Understanding Less: The Risks of AI for Scientific Research 28 comments

https://arstechnica.com/science/2024/03/producing-more-but-understanding-less-the-risks-of-ai-for-scientific-research/

Last month, we witnessed the viral sensation of several egregiously bad AI-generated figures published in a peer-reviewed article in Frontiers, a reputable scientific journal. Scientists on social media expressed equal parts shock and ridicule at the images, one of which featured a rat with grotesquely large and bizarre genitals.

As Ars Senior Health Reporter Beth Mole reported, looking closer only revealed more flaws, including the labels "dissilced," "Stemm cells," "iollotte sserotgomar," and "dck." Figure 2 was less graphic but equally mangled, rife with nonsense text and baffling images. Ditto for Figure 3, a collage of small circular images densely annotated with gibberish.

[...] While the proliferation of errors is a valid concern, especially in the early days of AI tools like ChatGPT, two researchers argue in a new perspective published in the journal Nature that AI also poses potential long-term epistemic risks to the practice of science.

Molly Crockett is a psychologist at Princeton University who routinely collaborates with researchers from other disciplines in her research into how people learn and make decisions in social situations. Her co-author, Lisa Messeri, is an anthropologist at Yale University whose research focuses on science and technology studies (STS), analyzing the norms and consequences of scientific and technological communities as they forge new fields of knowledge and invention—like AI.

[...] The paper's tagline is "producing more while understanding less," and that is the central message the pair hopes to convey. "The goal of scientific knowledge is to understand the world and all of its complexity, diversity, and expansiveness," Messeri told Ars. "Our concern is that even though we might be writing more and more papers, because they are constrained by what AI can and can't do, in the end, we're really only asking questions and producing a lot of papers that are within AI's capabilities."

This discussion has been archived. No new comments can be posted.
  • (Score: 2, Insightful) by Anonymous Coward on Friday May 21 2021, @06:05AM (#1137462)

    This is good, and I hope he wins too.

    I'm not against facial recognition per se. I'm not against new tools for the police. I'm not against making their work easier through technology.

    But the very moment they stop using their human brains, start ignoring the (huge!) fallibility of the computer systems, and turn themselves from "the people protecting society" into "the unthinking enforcers of the machine" is also the moment their power stops serving the good and becomes an oppressive regime, and thus the downfall of society as we know it.

    Yes, that's a bit exaggerated, our society is more stable than that, but you get my gist.

    So let them win this lawsuit and remind every police (and other) officer in this country that they are never, never, ever allowed to switch off their brain and compassion, no matter how convenient that may be for the lazier ones among them!
    (it is acknowledged that there are many, many non-lazy ones that you never hear about ...)
