Submitted via IRC for AndyTheAbsurd
In February 2013, Eric Loomis was found driving a car that had been used in a shooting. He was arrested, and pleaded guilty to eluding an officer. In determining his sentence, a judge looked not just to his criminal record, but also to a score assigned by a tool called COMPAS.
Developed by a private company called Equivant (formerly Northpointe), COMPAS—or the Correctional Offender Management Profiling for Alternative Sanctions—purports to predict a defendant's risk of committing another crime. It works through a proprietary algorithm that considers some of the answers to a 137-item questionnaire.
COMPAS is one of several such risk-assessment algorithms being used around the country to predict hot spots of violent crime, determine the types of supervision that inmates might need, or—as in Loomis's case—provide information that might be useful in sentencing. COMPAS classified him as at high risk of re-offending, and Loomis was sentenced to six years.
He appealed the ruling on the grounds that the judge, in considering the outcome of an algorithm whose inner workings were secret and could not be examined, had violated due process. The appeal went up to the Wisconsin Supreme Court, which ruled against Loomis, noting that the sentence would have been the same had COMPAS never been consulted. Its ruling, however, urged caution and skepticism in the algorithm's use.
Source: https://www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/
(Score: 2) by AthanasiusKircher on Monday January 22 2018, @01:35AM (3 children)
The thing is -- it still depends. TFA only cites broad accuracy statistics, which are all in the ballpark of 63-67% in terms of predicting recidivism.
But that's not the only thing that matters for an algorithm like this. What about the >1/3 of cases where it predicts WRONG? There are errors that are "reasonable" and then there are errors that are bizarre or inexplicable. We've all seen examples of machine learning algorithms that get things SPECTACULARLY WRONG in some cases, where it's inexplicable how the algorithm could make such an error.
And that should also be a strong consideration here. It's one thing to say it got things "right" 2/3 of the time, which is on par with other metrics or algorithms or informed opinions that validate said "right answers." But let's say the algorithm also predicted things SPECTACULARLY WRONG 10% of the time, i.e., it flagged people as "high risk" for absolutely no apparent reason that any human can understand. Or, conversely, maybe it wanted to let Charles Manson out on parole after 10 days. Those would be serious concerns about whether we'd actually want to use said algorithm, even if overall it still gets ~2/3 correct, like the other metrics do. (By the way, it's also distinctly possible for it to get a DIFFERENT SET of 2/3 correct than human judges or the other, simpler algorithms do, which could also be a concern in some cases... e.g., it might be making some obvious errors that a human never would.)
I don't get the sense from TFA that this is an issue in the case here, but it's important to consider not only overall "success rate" of an algorithm, but also whether it behaves in a way that makes some sense. Sure, there's the possibility that an apparently "irrational" algorithm could be finding a pattern that humans haven't discerned yet. But given how many algorithms have been known to fail in bizarre ways, we should probably exercise caution when dealing with people's lives.
(Score: 2) by mhajicek on Monday January 22 2018, @03:13AM (2 children)
What is the actual recidivism rate? If it's around 65% the algorithm could just return "Yes" every time and be right 65% of the time.
The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
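To make the point concrete, here's a minimal sketch (the 65% base rate is assumed for illustration; TFA doesn't give the actual recidivism rate): a degenerate "classifier" that just answers "Yes" for everyone achieves accuracy equal to the base rate, with zero actual predictive skill.

```python
import random

random.seed(0)

# Assumption for illustration: 65% of defendants re-offend.
BASE_RATE = 0.65
outcomes = [random.random() < BASE_RATE for _ in range(100_000)]

# A "classifier" that predicts "will re-offend" for everyone.
predictions = [True] * len(outcomes)

# Its accuracy converges to the base rate, despite being useless.
accuracy = sum(p == o for p, o in zip(predictions, outcomes)) / len(outcomes)
print(f"Accuracy of always-'Yes': {accuracy:.2%}")
```

Which is why a raw "65% accurate" figure, on its own, tells you almost nothing.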
(Score: 2) by AthanasiusKircher on Monday January 22 2018, @04:05AM
Yep -- that's another obvious stat not discussed here. Though my impression from reading other stuff is that this algorithm generates a number on a scale rather than just "yes" or "no," so I'm actually not sure what the 65% or whatever accuracy means.
(Score: 3, Informative) by maxwell demon on Monday January 22 2018, @09:31AM
Indeed, the only meaningful information is to give both the false positive and the false negative rate, separately.
The Tao of math: The numbers you can count are not the real numbers.
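A quick sketch of why reporting the two rates separately matters (all counts here are invented for illustration, not from TFA): two classifiers can share the same overall accuracy while having completely different error profiles.

```python
def error_rates(tp, fp, tn, fn):
    """Compute false positive rate, false negative rate, and accuracy
    from confusion-matrix counts."""
    fpr = fp / (fp + tn)  # fraction of non-re-offenders flagged high-risk
    fnr = fn / (fn + tp)  # fraction of re-offenders rated low-risk
    acc = (tp + tn) / (tp + fp + tn + fn)
    return fpr, fnr, acc

# 1000 hypothetical defendants, 650 of whom re-offend (65% base rate).

# Classifier A: always says "high risk" -- 65% accurate, but every
# non-re-offender is wrongly flagged (FPR = 1.0).
print(error_rates(tp=650, fp=350, tn=0, fn=0))

# Classifier B: also 65% accurate, but its errors are split across
# both classes (FPR ~0.57, FNR ~0.23).
print(error_rates(tp=500, fp=200, tn=150, fn=150))
```

Same headline accuracy, very different consequences for the people being scored.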