
posted by Fnord666 on Wednesday February 19 2020, @02:19PM   Printer-friendly
from the revolving-door dept.

Algorithms 'consistently' more accurate than people in predicting recidivism, study says:

In a study with potentially far-reaching implications for criminal justice in the United States, a team of California researchers has found that algorithms are significantly more accurate than humans in predicting which defendants will later be arrested for a new crime.

[...] "Risk assessment has long been a part of decision-making in the criminal justice system," said Jennifer Skeem, a psychologist who specializes in criminal justice at UC Berkeley. "Although recent debate has raised important questions about algorithm-based tools, our research shows that in contexts resembling real criminal justice settings, risk assessments are often more accurate than human judgment in predicting recidivism. That's consistent with a long line of research comparing humans to statistical tools."

"Validated risk-assessment instruments can help justice professionals make more informed decisions," said Sharad Goel, a computational social scientist at Stanford University. "For example, these tools can help judges identify and potentially release people who pose little risk to public safety. But, like any tools, risk assessment instruments must be coupled with sound policy and human oversight to support fair and effective criminal justice reform."

The paper—"The limits of human predictions of recidivism"—was slated for publication Feb. 14, 2020, in Science Advances. Skeem presented the research on Feb. 13 in a news briefing at the annual meeting of the American Association for the Advancement of Science (AAAS) in Seattle, Wash. Joining her were two co-authors: Ph.D. graduate Jongbin Jung and Ph.D. candidate Zhiyuan "Jerry" Lin, who both studied computational social science at Stanford.

More information:
Z. Lin, et al. The limits of human predictions of recidivism [open], Science Advances (DOI: 10.1126/sciadv.aaz0652)


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Thursday February 20 2020, @12:08AM (#960089) (1 child)

    Let me delve into this a little bit more.

    An example of a heuristic might be an antivirus that uses checksums to decide whether something is a computer virus. Such heuristics may not always be right, but they are generally right. When the antivirus is wrong, though, it will be wrong every time (unless it is updated): it has no way to evaluate the outcome and search for a better solution next time.
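    To make the point concrete, here is a minimal, hypothetical sketch of such a fixed checksum heuristic (the signature value and function names are made up for illustration). The scanner flags only content whose hash appears in its signature set, so a new variant is missed on every scan until the set itself is updated:

    ```python
    import hashlib

    # Hypothetical signature database: hashes of known-bad payloads.
    SIGNATURES = {hashlib.sha256(b"malicious payload v1").hexdigest()}

    def looks_like_virus(content: bytes) -> bool:
        # Fixed rule: flag only exact matches against the signature set.
        return hashlib.sha256(content).hexdigest() in SIGNATURES

    # A known sample is caught; a slightly altered variant is missed,
    # and keeps being missed on every scan until SIGNATURES changes.
    assert looks_like_virus(b"malicious payload v1") is True
    assert looks_like_virus(b"malicious payload v2") is False
    ```

    Nothing in the loop feeds the miss back into SIGNATURES — that missing feedback step is exactly the contrast with the chess example below.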

    A chess AI, by contrast, can evaluate the outcome: did it win the game or lose? If it lost, it can do more computation and store information to help it win next time. For something to be intelligent, it needs to be able to evaluate the outcome (i.e., determine whether it is desired) and, if the outcome is undesired, seek a different set of actions so that it can reach a desired outcome next time.
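    That feedback loop can be sketched in a few lines. This is a hypothetical toy (the class, move names, and scoring rule are illustrative, not any real chess engine): the player keeps a score per opening move, evaluates each game's outcome, and shifts future choices toward moves that won.

    ```python
    class OutcomeLearner:
        """Toy outcome-driven learner: adjusts move preferences from wins/losses."""

        def __init__(self, moves):
            self.scores = {m: 0 for m in moves}

        def choose(self):
            # Prefer the move with the best record so far.
            return max(self.scores, key=self.scores.get)

        def evaluate(self, move, won):
            # The step the fixed antivirus heuristic lacks:
            # feed the outcome back to change future behavior.
            self.scores[move] += 1 if won else -1

    learner = OutcomeLearner(["e4", "d4"])
    learner.evaluate("e4", won=False)  # lost a game opened with e4 ...
    learner.evaluate("d4", won=True)   # ... won one opened with d4
    assert learner.choose() == "d4"    # so next time it tries d4
    ```

    Real engines evaluate far more than the final result, of course, but the structure — act, observe the outcome, adjust — is the same.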

  • (Score: 0) by Anonymous Coward on Thursday February 20 2020, @04:12PM (#960323)

    Can't ya read the signs, boy? No loitering. No littering. No diving. No delving. Move along now.