
posted by Fnord666 on Sunday January 21 2018, @06:41PM   Printer-friendly
from the crowdsourced-sentencing dept.

Submitted via IRC for AndyTheAbsurd

In February 2013, Eric Loomis was found driving a car that had been used in a shooting. He was arrested, and pleaded guilty to eluding an officer. In determining his sentence, a judge looked not just to his criminal record, but also to a score assigned by a tool called COMPAS.

Developed by a private company called Equivant (formerly Northpointe), COMPAS—or the Correctional Offender Management Profiling for Alternative Sanctions—purports to predict a defendant's risk of committing another crime. It works through a proprietary algorithm that considers some of the answers to a 137-item questionnaire.

COMPAS is one of several such risk-assessment algorithms being used around the country to predict hot spots of violent crime, determine the types of supervision that inmates might need, or—as in Loomis's case—provide information that might be useful in sentencing. COMPAS classified him as high-risk of re-offending, and Loomis was sentenced to six years.

He appealed the ruling on the grounds that the judge, in considering the outcome of an algorithm whose inner workings were secretive and could not be examined, violated due process. The appeal went up to the Wisconsin Supreme Court, who ruled against Loomis, noting that the sentence would have been the same had COMPAS never been consulted. Their ruling, however, urged caution and skepticism in the algorithm's use.

Source: https://www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/

Also at Wired and Gizmodo


Original Submission

  • (Score: 4, Insightful) by wonkey_monkey on Sunday January 21 2018, @07:57PM (7 children)

    by wonkey_monkey (279) on Sunday January 21 2018, @07:57PM (#625762) Homepage

    Popular Algorithm is No Better at Predicting Crimes than Random People

    If you could say the same thing about a self-driving car algorithm, it'd be a compliment (unless by "no better" you meant "actually worse", and not the colloquial sense of "only as good as").

    Being "as good as random people" is actually quite a good thing for a lot of algorithms. It means you can get the job done without hiring people, random or otherwise.

    That said, what they have shown in this case is that the 137-question questionnaire this particular algorithm is based on is almost entirely superfluous. You can get results just as good by looking only at age and previous convictions.
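To make the point concrete, here is a toy sketch (entirely hypothetical thresholds and data, not the actual COMPAS model or the study's baseline) of what a two-feature "age and priors" rule looks like:

```python
# Hypothetical two-feature rule of thumb: the kind of trivial baseline
# that the 137-item questionnaire reportedly fails to beat.
def predicts_reoffense(age, priors):
    """Flag as high-risk if young or with several priors (illustrative thresholds)."""
    return age < 25 or priors >= 3

# Hypothetical defendants: (age, prior convictions, actually reoffended)
cases = [(22, 0, True), (45, 1, False), (31, 4, True), (60, 0, False)]

# Fraction of cases where the rule's prediction matches the outcome.
accuracy = sum(predicts_reoffense(a, p) == y for a, p, y in cases) / len(cases)
print(accuracy)
```

The thresholds and cases above are invented for illustration; the interesting empirical claim is that a rule this simple matches the commercial tool's accuracy on real data.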

    --
    systemd is Roko's Basilisk
  • (Score: 0) by Anonymous Coward on Sunday January 21 2018, @08:26PM

    by Anonymous Coward on Sunday January 21 2018, @08:26PM (#625780)

    You can get as good results just looking at age and previous convictions
    So is the questionnaire just hiding that very algorithm?

    I have seen a box of matchboxes play tic-tac-toe. No electronic computer involved. Computers just run programs. Programs do not necessarily need to be electronic.

  • (Score: 3, Insightful) by sjames on Sunday January 21 2018, @09:35PM

    by sjames (2882) on Sunday January 21 2018, @09:35PM (#625813) Journal

    That depends. You would not let a self driving semi on the road if it was no better than random people off the street. It would at least have to be not worse than the holder of a CDL.

    According to TFA, COMPAS is slightly worse than a pool of random unqualified people.

  • (Score: 2) by c0lo on Sunday January 21 2018, @11:33PM

    by c0lo (156) Subscriber Badge on Sunday January 21 2018, @11:33PM (#625870) Journal

    Popular Algorithm is No Better at Predicting Crimes than Random People

    ...
    Being "as good as random people" is actually quite a good thing for a lot of algorithms. It means you can get the job done without hiring people, random or otherwise.

    Context, mate, context... No, scratch that, the relevant link is formal languages [xkcd.com]
    More to the point:

    Mmmm... forget self-driving cars; would you like your sentence length to be decided by a jury chosen at random on the day of sentencing, so that they had no opportunity to hear the evidence?
    Because that analogy is even closer to the situation at hand: an algorithm weighs the answers to questions that have no relation to the case, and it has worse predictive power than random humans.

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 2) by AthanasiusKircher on Monday January 22 2018, @01:35AM (3 children)

    by AthanasiusKircher (5291) on Monday January 22 2018, @01:35AM (#625910) Journal

    The thing is -- it still depends. TFA only cites broad accuracy statistics, which are all in the ballpark of 63-67% in terms of predicting recidivism.

    But that's not the only thing that matters for an algorithm like this. What about the >1/3 of cases where it predicts WRONG? There are errors that are "reasonable" and then there are errors that are bizarre or inexplicable. We've all seen examples of machine learning algorithms where it sometimes gets things SPECTACULARLY WRONG in some cases, where it's inexplicable how the algorithm could make such an error.

    And that should also be a strong consideration here. It's one thing to say it got things "right" 2/3 of the time, which is on par with other metrics or algorithms or informed opinions that validate said "right answers." But let's say the algorithm also predicted things SPECTACULARLY WRONG 10% of the time, i.e., it flagged people as "high risk" for absolutely no apparent reason that any human can understand. Or, conversely, maybe it wanted to let Charles Manson out on parole after 10 days. Those would be serious concerns about whether we'd actually want to use said algorithm, even if overall it still gets the same ~2/3 correct that other metrics do. (By the way, it's also distinctly possible for it to get a DIFFERENT SET of 2/3 correct from human judges or the other simpler algorithms, which could also be a concern in some cases, e.g., it might be making some obvious errors.)

    I don't get the sense from TFA that this is an issue in the case here, but it's important to consider not only overall "success rate" of an algorithm, but also whether it behaves in a way that makes some sense. Sure, there's the possibility that an apparently "irrational" algorithm could be finding a pattern that humans haven't discerned yet. But given how many algorithms have been known to fail in bizarre ways, we should probably exercise caution when dealing with people's lives.

    • (Score: 2) by mhajicek on Monday January 22 2018, @03:13AM (2 children)

      by mhajicek (51) on Monday January 22 2018, @03:13AM (#625931)

      What is the actual recidivism rate? If it's around 65% the algorithm could just return "Yes" every time and be right 65% of the time.
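The base-rate point is easy to check with a few lines (the 65% figure here is hypothetical, taken from the comment's premise rather than from any published recidivism statistic):

```python
# Base-rate sketch: if 65% of a sample reoffends, a "predictor" that
# always answers yes is right 65% of the time without learning anything.
outcomes = [True] * 65 + [False] * 35  # hypothetical sample: 65% reoffend

always_yes = [True] * len(outcomes)    # the trivial constant classifier
accuracy = sum(p == o for p, o in zip(always_yes, outcomes)) / len(outcomes)
print(accuracy)
```

This is why raw accuracy is meaningless without knowing the base rate it is being compared against.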

      --
      The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
      • (Score: 2) by AthanasiusKircher on Monday January 22 2018, @04:05AM

        by AthanasiusKircher (5291) on Monday January 22 2018, @04:05AM (#625941) Journal

        Yep -- that's another obvious stat not discussed here. Though my impression from reading other coverage is that this algorithm generates a number on a scale rather than just "yes" or "no," so I'm actually not sure what the 65% or whatever accuracy means.

      • (Score: 3, Informative) by maxwell demon on Monday January 22 2018, @09:31AM

        by maxwell demon (1608) on Monday January 22 2018, @09:31AM (#626010) Journal

        Indeed, the only meaningful information is to give both the false positive and the false negative rate, separately.
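As a minimal sketch of why those two rates matter separately, here is a confusion matrix with invented counts (not figures from the study) whose overall accuracy looks fine while the false positive rate is alarming:

```python
# Hypothetical confusion-matrix counts for a risk classifier.
tp, fp, fn, tn = 50, 20, 15, 15

# Flagged high-risk but did not reoffend, as a share of non-reoffenders.
false_positive_rate = fp / (fp + tn)
# Flagged low-risk but did reoffend, as a share of reoffenders.
false_negative_rate = fn / (fn + tp)
# The single number usually reported, which hides both of the above.
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(false_positive_rate, false_negative_rate, accuracy)
```

With these invented counts the accuracy is 65%, yet over half of the people who never reoffend are flagged high-risk, which is exactly the kind of harm a single accuracy figure conceals.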

        --
        The Tao of math: The numbers you can count are not the real numbers.