
posted by Fnord666 on Wednesday February 19 2020, @02:19PM
from the revolving-door dept.

Algorithms 'consistently' more accurate than people in predicting recidivism, study says:

In a study with potentially far-reaching implications for criminal justice in the United States, a team of California researchers has found that algorithms are significantly more accurate than humans in predicting which defendants will later be arrested for a new crime.

[...] "Risk assessment has long been a part of decision-making in the criminal justice system," said Jennifer Skeem, a psychologist who specializes in criminal justice at UC Berkeley. "Although recent debate has raised important questions about algorithm-based tools, our research shows that in contexts resembling real criminal justice settings, risk assessments are often more accurate than human judgment in predicting recidivism. That's consistent with a long line of research comparing humans to statistical tools."

"Validated risk-assessment instruments can help justice professionals make more informed decisions," said Sharad Goel, a computational social scientist at Stanford University. "For example, these tools can help judges identify and potentially release people who pose little risk to public safety. But, like any tools, risk assessment instruments must be coupled with sound policy and human oversight to support fair and effective criminal justice reform."

The paper—"The limits of human predictions of recidivism"—was slated for publication Feb. 14, 2020, in Science Advances. Skeem presented the research on Feb. 13 in a news briefing at the annual meeting of the American Association for the Advancement of Science (AAAS) in Seattle, Wash. Joining her were two co-authors: Ph.D. graduate Jongbin Jung and Ph.D. candidate Zhiyuan "Jerry" Lin, who both studied computational social science at Stanford.

More information:
Z. Lin, et al. The limits of human predictions of recidivism [open], Science Advances (DOI: 10.1126/sciadv.aaz0652)


Original Submission

 
  • (Score: 0) by Anonymous Coward on Wednesday February 19 2020, @08:19PM (6 children)

    by Anonymous Coward on Wednesday February 19 2020, @08:19PM (#959988)

    I fully agree with you that any moral system with longevity is based on something more than just feels. In my opinion that "something" is indeed the exact opposite of feels - objective truth. Everybody claims to want to know the truth, but we don't really - because sometimes the truth sucks. 'So what'd you think of my speech?' 'How do I look in this dress?' Those are softball examples; the ones that matter genuinely hurt, to the point that we try to push them out of our minds.

    So in this topic the goal is straightforward. You have an algorithm. You want to improve the accuracy of this algorithm. Is the algorithm going to be more, or less, accurate by choosing to exclude race? Why not also start sentencing women the same as we do men for "fairness"? Few would argue against the fact that men are more naturally predisposed to criminality than women. Somehow put a billion babies on a new planet with no knowledge of anything off that planet, and you're going to see the men - both the weak and the strong - as the aggressors.

    It's a matter of genetics. I do not believe such inherent proclivities end at gender. And I think if you look at the data, worldwide, there is no logical case for such a belief. It's something we want to believe because it makes us feel better. I mean, I'd love to not believe what I do. It's the exact same way I felt as I turned from a Christian into some sort of agnostic, if not atheist. I didn't want to believe that, but I believe that as a thinking human I should always be guided, first and foremost, by the truth and my mind - even if I don't like where it takes me.

  • (Score: 4, Insightful) by meustrus on Wednesday February 19 2020, @09:58PM (4 children)

    by meustrus (4961) on Wednesday February 19 2020, @09:58PM (#960031)

    You want to improve the accuracy of this algorithm.

    No, I want to reduce recidivism. A biased algorithm is counter to that purpose.

    Why not also start sentencing women the same as we do men for "fairness"? Few would argue against the fact that men are more naturally predisposed to criminality than women.

    I would argue for equal time for an equal crime. Under your framework, that would (fairly) result in lesser sentencing for women. We don't need to "correct" this imbalance any more than Harvard needs to "correct" for too many Asians scoring well on standardized tests.

    But under what circumstances is recidivism likely? If you allow race to dominate the algorithm, it will detect the correlation and stop there. Remove race from the calculations, and I would expect it to pick up on more subtle differences in social situation, criminal connections, economic opportunity outside of crime, and willingness to admit wrongdoing and pay the price for those mistakes.

    That's just what I'd expect to see. I am a mere human, though, and apparently this "algorithm" is better at interpreting those variables than I am.

    What if the reason the algorithm is more accurate is because it disqualifies race as an input? Consider that when you or I see the mugshot of a black man, we might be more likely to instantly associate him with a bunch of the negatives I mentioned earlier: lack of a social support structure, links to gang members, little possibility of finding honest work, stubborn defiance to accept the judgement of the court. If the black man was actually an active churchgoer going to community college who got picked up on a drug bust when he went home over spring break, those associations would be dead wrong.

    And in this situation specifically, the system would punish exactly the kind of man we need influencing his old neighborhood, bringing home wealth, economic opportunity, and hope. One such man can have an impact on the crime rates of an entire community.

    As humans, our first impression of him as a typical thug would be hard to shake. Who's to say AI doesn't have the same first-impression bias? Wouldn't it be harsher on the man just because he's black, despite all the evidence he's trying to be a productive member of society?

    If a white man and a black man are living exactly the same life, the AI should be able to pick up on those details and make the same decision. Allowing it to know that the black man is black, and therefore associated with a bunch of gangbangers in a completely different situation, is just going to confuse things.

    --
    If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
    • (Score: 0) by Anonymous Coward on Thursday February 20 2020, @07:16AM (3 children)

      by Anonymous Coward on Thursday February 20 2020, @07:16AM (#960226)

      Ah! I think you're not seeing both sides of the coin here. Algorithms don't just "detect a correlation and stop there"; everything is given an appropriate *RELATIVE* weighting. By considering race you can actually reduce unfair sentencing for certain groups. Imagine, for instance, that being associated with individuals who have engaged in crime is, *in the aggregate*, a strong predictor of future recidivism. Now imagine that when you look at only the black population, they tend to have a disproportionately high rate of being associated with individuals who have engaged in crime. If that trait is not as predictive for blacks, then the algorithm will reduce its weighting for it when dealing with a black individual - which can result in fairer sentencing.

      Yet if you exclude race, then it would result in harsher sentences for blacks due to the disproportionate (but not necessarily predictive) prevalence of this trait among blacks. Do you understand what I'm saying? This is a big part of the reason the whole "biased algorithms" panic was ironic nonsense, mostly spread by people who had no clue how things such as neural nets work. Algorithms are not biased; they simply take data and derive conclusions. Trying to limit that data to achieve desired outcomes is exactly how you bias algorithms, and it's a good way to have everything blow up in your face and actually become biased.

      In other words, Accuracy = Accuracy.
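
      To put the weighting point in concrete terms, here is a toy sketch (entirely made-up numbers; a plain logistic regression stands in for whatever model a real risk tool would use, and the feature names are placeholders, not anything from the study):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Invented setup: a "ties to offenders" trait is common in group B but only
      # weakly predictive there, while it is rarer and strongly predictive in group A.
      rng = np.random.default_rng(0)
      n = 20_000
      group = rng.integers(0, 2, n)                                   # 0 = A, 1 = B
      ties = np.where(group == 1, rng.random(n) < 0.7, rng.random(n) < 0.2).astype(float)
      logit = -2 + ties * np.where(group == 1, 0.3, 2.0)              # weak effect in B, strong in A
      y = rng.random(n) < 1 / (1 + np.exp(-logit))                    # "reoffended"

      aware = LogisticRegression().fit(np.column_stack([ties, group, ties * group]), y)
      blind = LogisticRegression().fit(ties.reshape(-1, 1), y)

      # The group-aware model recovers roughly +2.0 for A and +0.3 for B (2.0 - 1.7);
      # the group-blind model fits one pooled weight (~0.8), so B members with the
      # trait inherit risk that really comes from group A, and A members get a discount.
      print("aware coefficients (ties, group, ties*group):", aware.coef_.round(2))
      print("blind coefficient (ties):", blind.coef_.round(2))

      Whether real recidivism data behaves like this is exactly the empirical question, but it shows the mechanism being described: drop the group variable and the pooled weight lands on everyone.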

      As for sentencing, equal time for equal crime sounds like the ideal until you consider reality. Two people are arrested for the same crime. Let's imagine that somehow you know every statement they make is genuine. One is repentant and expresses a desire to reform and make amends. The other is remorseless and expresses a desire to get out so he can "off that bitch" who "got him arrested." This is why women receive much lighter penalties than men do for the exact same crimes: they tend to fall disproportionately more into the former group than the latter. This is also why sentencing takes into account past history. There's a difference between somebody who made a mistake and somebody who is clearly living a life of crime. The latter poses a much bigger threat to society, and *not* keeping them locked away as long as possible is doing little more than punishing whoever their next victim will end up being.

      • (Score: 2) by meustrus on Thursday February 20 2020, @11:05PM (2 children)

        by meustrus (4961) on Thursday February 20 2020, @11:05PM (#960482)

        Neural networks are not perfect. If you allow them to find the cheap correlation, they will use the cheap correlation. They will see the rare black man whose gang associations are completely behind him as an outlier that can be safely ignored. Just like humans do.

        More importantly, the AI needs to be able to handle changing conditions in the future. What if its trainers hadn't provided it with any examples of upstanding black citizens who were in the wrong place at the wrong time? What if there are certain crimes where those outliers just haven't existed yet?

        If allowed to use racial bias, the AI will assume that he's more guilty because he's black, not that he's less guilty because the crime suddenly looks more like a completely different pattern.

        The practical effect could easily be to insert racial bias into areas that haven't had significant case studies. I'd hardly consider it correct to say that black men who committed financial crimes are more likely than similar white men to do so again, but an AI trained on race might well make that leap.

        Remember my original medical example. Just because the AI can predict outcomes really well based on the watermark from the lab that did the test does not mean that's a useful answer. We were trying to look at the medical facts, not the statistical context of those facts.

        --
        If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
        • (Score: 0) by Anonymous Coward on Friday February 21 2020, @07:13AM (1 child)

          by Anonymous Coward on Friday February 21 2020, @07:13AM (#960622)

          "AI" is such a peculiar field in that people who clearly have 0 experience in it, constantly think their let's say 'intuitive' understanding is somehow relevant. It's not. Most of everything you just said above is plainly wrong. The entire "magic" behind neural nets is precisely because of their ability to efficiently produce RELATIVE correlations that quickly fall outside our ability to mentally retain (let alone visualize or see at a glance). For instance there's only one arabic numeral that has a "/" in it. And that's a 7. However, it's not like the network will pick up on pixels 3,5,7 (if we're looking at a simplified 3x3 convolution of an entire image) and go 'wellp, must be a 7!' No, it instead looks at correlations between that specific datum and how it relates to its mapping of other numbers. For instance it could a poorly written 1 or part of an 8, etc.

          Your 'don't let the "AI" know he's black' approach is exactly how you get incorrect and heavily biased outcomes. Biased input = biased output. And the fun thing is that the bias doesn't work the way you might think. It could result in less severe sentencing, but it could also easily result in more severe sentencing. In my opinion the latter is probably more likely, because even the blacks less inclined towards recidivism are going to be more likely to be surrounded by environmental factors that correlate with recidivism in other races.

          So for instance, let's imagine you want the algorithm to stop telling you things are 7s (even when they are). So you force it to stop considering pixels 5 and 7, which happen to be strongly correlated with 7s. You're not going to get that result. You're just going to get some pretty stupid results, with a very good chance that it's going to start showing you even more sevens, since factors that were not strongly correlated with 7 previously suddenly become so. For instance, anything with pixels 1, 2, and 3 set is now very likely a 7, because of how you've biased your input.

          If it was not clear, in the above examples I was referencing a grid of this style:

          123
          456
          789
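
          To make that concrete, here is a rough toy version of the pixel experiment (cooked-up noisy digits and a one-layer logistic model, nothing like a real conv net; in this sketch a "7" uses pixels 1,2,3,5,7 and a "1" uses pixels 2,5,8):

          import numpy as np
          from sklearn.linear_model import LogisticRegression

          rng = np.random.default_rng(0)
          SEVEN = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0], dtype=float)   # pixels 1,2,3,5,7
          ONE = np.array([0, 1, 0, 0, 1, 0, 0, 1, 0], dtype=float)     # pixels 2,5,8

          def noisy(proto, n):
              flips = rng.random((n, 9)) < 0.15                        # flip ~15% of pixels
              return np.abs(proto - flips)

          X = np.vstack([noisy(SEVEN, 5000), noisy(ONE, 5000)])
          y = np.array([1] * 5000 + [0] * 5000)                        # 1 = "it's a seven"

          masked = X.copy()
          masked[:, [4, 6]] = 0                                        # hide pixels 5 and 7

          full = LogisticRegression().fit(X, y)
          blind = LogisticRegression().fit(masked, y)

          # Probe: top row plus the middle column, i.e. a heavy-handed "1", not a clean 7.
          probe = np.array([[1, 1, 1, 0, 1, 0, 0, 1, 0]], dtype=float)
          probe_masked = probe.copy()
          probe_masked[:, [4, 6]] = 0
          # The full model is roughly on the fence (the missing pixel 7 and the set pixel 8
          # both argue against "seven"); the masked model can no longer see that the
          # diagonal stroke is absent, so it is noticeably more willing to call it a seven.
          print("P(seven), all pixels:", full.predict_proba(probe)[0, 1].round(2))
          print("P(seven), 5/7 hidden:", blind.predict_proba(probe_masked)[0, 1].round(2))

          On this cooked-up data, hiding pixels 5 and 7 does push ambiguous, top-heavy shapes toward "seven", which is the effect described above; how closely that maps onto hiding race in a recidivism model is the thing actually in dispute.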

          This is, once again, why the pursuit of "truth" is the one and only correct ideology. You don't "fix" the truth by hiding parts of it. You just end up with even more broken results which you then flail about uselessly trying to fix with even more lies.

          • (Score: 2) by meustrus on Monday February 24 2020, @02:35PM

            by meustrus (4961) on Monday February 24 2020, @02:35PM (#961833)

            "AI" is such a peculiar field in that people who clearly have 0 experience in it, constantly think their let's say 'intuitive' understanding is somehow relevant. It's not.

            That cuts both ways, AC. For what it's worth, I am a software engineer with limited but direct experience with the simpler AI models.

            The thing is, nobody really understands how a trained deep learning AI comes to its conclusions. Some people understand the theory and the implementation. But it's nothing but hubris to insist that the AI is actually always correct.

            Take the recent instance of Tesla Autopilot being fooled into thinking a 35 speed limit sign is actually 85. That problem is exactly in line with the common problem you brought up: the AI only knows what it's been trained on, and nothing more. Its trainers didn't think to include a 3 with a super long middle line, so it had to guess. It guessed wrong.

            In this case, race is demonstrably not a deciding factor in recidivism. It is not any of pixels 1-9. It is a correlating factor. It's as if there were a pixel 10, and whenever pixel 10 is set, the number is a 7 50% of the time. If you include the correlating factor, the AI will misinterpret poorly written 1s as 7s whenever pixel 10 is set.

            Is that right? Statistically, yes. It is much more likely to be a 7 if you consider the correlating factor. But that doesn't mean it's factually correct or useful.
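
            To put rough numbers on that (all invented, just to show how a correlating factor can swing an ambiguous read):

            # Say 7s are 10% of digits overall, but among images with pixel 10 set
            # they are 50%. That makes pixel 10 roughly a 9:1 likelihood boost toward "7".
            prior_seven = 0.10
            p_seven_given_10 = 0.50
            lr_pixel10 = (p_seven_given_10 / (1 - p_seven_given_10)) / (prior_seven / (1 - prior_seven))  # = 9.0

            # A sloppy scribble whose shape alone reads 70% "1", 30% "7":
            odds_seven = 0.30 / 0.70
            # Folding in pixel 10 (and treating it as independent of the shape evidence,
            # itself a big assumption) flips the call:
            odds_with_10 = odds_seven * lr_pixel10
            print("P(seven | shape) = 0.30")
            print("P(seven | shape and pixel 10) =", round(odds_with_10 / (1 + odds_with_10), 2))  # ~0.79

            Given those numbers the flip is the statistically correct move; whether it is the call you actually want from a sentencing tool is the point being argued here.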

            Under your dogma, we should throw in all the correlations we can. Sexual orientation. Blood type. Astrological sign. What's the harm? It's all just more facts, right?

            It isn't. These are all irrelevant categories that will probably produce some statistically significant correlation, but which are nothing but dangerous distractions.

            The information they add is inextricably linked to the human bias that developed the system. We predetermined that these categories are statistically significant, and they cut people into such broad groups with somewhat similar circumstances that they are sure to produce some correlation.

            But it's a lie to tell the AI that this correlation is more significant than all the others that were left out. The only correct realization of your information dogma would be a quark-by-quark full body subatomic scan. Anything lower resolution is pre-filtered.

            Honestly, it's questionable to use an AI with reduced information at all. That's why Tesla Autopilot is worse at reading 3s than humans.

            But to the extent we can make do with limited information, we need to be careful about what information we predetermined is significant. Nonessential correlating factors, especially those based on 19th century pseudoscience like race, must be limited as much as possible.

            --
            If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
  • (Score: 0) by Anonymous Coward on Thursday February 20 2020, @07:01AM

    by Anonymous Coward on Thursday February 20 2020, @07:01AM (#960220)

    I fully agree with you that any moral system with longevity is based on something more than just feels.

    And yet that's what all of them are ultimately based on anyway, even if they try to claim otherwise: Subjective values.