
posted by Fnord666 on Wednesday February 19 2020, @02:19PM   Printer-friendly
from the revolving-door dept.

Algorithms 'consistently' more accurate than people in predicting recidivism, study says:

In a study with potentially far-reaching implications for criminal justice in the United States, a team of California researchers has found that algorithms are significantly more accurate than humans in predicting which defendants will later be arrested for a new crime.

[...] "Risk assessment has long been a part of decision-making in the criminal justice system," said Jennifer Skeem, a psychologist who specializes in criminal justice at UC Berkeley. "Although recent debate has raised important questions about algorithm-based tools, our research shows that in contexts resembling real criminal justice settings, risk assessments are often more accurate than human judgment in predicting recidivism. That's consistent with a long line of research comparing humans to statistical tools."

"Validated risk-assessment instruments can help justice professionals make more informed decisions," said Sharad Goel, a computational social scientist at Stanford University. "For example, these tools can help judges identify and potentially release people who pose little risk to public safety. But, like any tools, risk assessment instruments must be coupled with sound policy and human oversight to support fair and effective criminal justice reform."

The paper—"The limits of human predictions of recidivism"—was slated for publication Feb. 14, 2020, in Science Advances. Skeem presented the research on Feb. 13 in a news briefing at the annual meeting of the American Association for the Advancement of Science (AAAS) in Seattle, Wash. Joining her were two co-authors: Ph.D. graduate Jongbin Jung and Ph.D. candidate Zhiyuan "Jerry" Lin, who both studied computational social science at Stanford.

More information:
Z. Lin, et al. The limits of human predictions of recidivism [open], Science Advances (DOI: 10.1126/sciadv.aaz0652)


Original Submission

  • (Score: 5, Insightful) by ikanreed on Wednesday February 19 2020, @02:54PM (32 children)

    by ikanreed (3164) Subscriber Badge on Wednesday February 19 2020, @02:54PM (#959861) Journal

    They're attempting to combat previous research that found that lay people with no expertise were better than the de jure standards used by parole boards.

    By comparing the algorithm against lay people with no expertise recruited from Mechanical Turk, they show that, only under certain datasets, the algorithm outperforms random people from the internet.

    Whoop. De. Fucking. Doo.

    • (Score: 1, Informative) by Anonymous Coward on Wednesday February 19 2020, @04:06PM (22 children)

      by Anonymous Coward on Wednesday February 19 2020, @04:06PM (#959890)

      Quite the opposite. The paper showed that humans outperformed the algorithm only under certain unrealistic conditions, such as instantaneous feedback on results and limiting 'input' criteria to only the most predictive variables. When presented with something resembling more realistic conditions, the humans' predictions fell off substantially, while the algorithm's performance skyrocketed.

      Most bemusingly, none of the criteria seemed to include race. Yes, discrimination is a thing, but so is the fact that a group that makes up 12% of the population is responsible for about 50% of homicides and other violent crimes. It's one of the most statistically significant predictors there is. But because of political correctness, you can't use it. I'm curious what the accuracy (both man and machine) would be if one of the input data points was a photograph of the suspect. That itself is a measure of discrimination vs. reality: if the accuracy goes down, we can say that discrimination is a bigger factor than reality. If it goes up, then we can say that reality is a bigger factor than discrimination.

      • (Score: 4, Touché) by Anonymous Coward on Wednesday February 19 2020, @04:17PM (7 children)

        by Anonymous Coward on Wednesday February 19 2020, @04:17PM (#959897)

        Race is a correlate of other factors, many of them inherited from having a poor start in life. The kid has a poor start in life because his/her parents had a poor start in life. And so on back to... oh silly me, I forgot racism is over, slavery is over. It must be just the skin color that causes recidivism after all. Well done, you solved it.

        • (Score: 1, Insightful) by Anonymous Coward on Wednesday February 19 2020, @07:42PM (6 children)

          by Anonymous Coward on Wednesday February 19 2020, @07:42PM (#959976)

          If your hypothesis were accurate, we'd expect to see comparable homicide/murder rates across other groups after adjusting for socioeconomic status. We do not, and it's not even remotely close. You also would not see things such as this [wikipedia.org]. That was a long-term major study that was supposed to prove once and for all that all this genetics stuff was a distant second to environmental factors. They took a large number of well-educated, wealthier whites who were interested in adopting black children. The problem is that the study showed the exact opposite of what it was supposed to show.

          Adopted children who came from two black parents had an average IQ at age 17 of 83.7. Adopted children who came from two white parents had an average IQ at age 17 of 101.5. It's easy to still blame environmental factors, and they no doubt do play some role. But there's one really interesting "accidental" control in that experiment. Some of the adopted children were accidentally classified incorrectly. They were supposed to have come from two black parents, when in fact they came from one black and one white parent. These children, in spite of living their lives believing their parents were black, in spite of the adopters believing the same, and so on - ended up scoring about 10 points higher than the 'real' black children.

          In the US I think we are kind of afraid to talk about things like this, because the concern is we'll just go full Hitler. That's not an unreasonable concern. However there's a pretty large gulf of possibilities for advancement and development of society between the two absurd extremes of 'there is no such thing as genetics or race or anything - everybody is absolutely identical' and 'down with everybody except the master race.' I mean consider for a second that you are wrong. Think about what we are doing to people, and to ourselves. "Oh you can do better! You just need a bit more encouragement. Here let me pass some special laws, just for you. Let me make it much easier for you to get into a university than anybody else. I know you can do it!" And then they don't do it. How is this going to make them feel? Encouraged? Or frustrated, angry, self loathing, and spiteful? Let alone our Ahabian search for some form of bias or discrimination we're engaging in that's perhaps holding them back. What if no such thing actually exists?

          • (Score: 0) by Anonymous Coward on Wednesday February 19 2020, @11:11PM

            by Anonymous Coward on Wednesday February 19 2020, @11:11PM (#960069)

            They took a large number of well educated, wealthier whites who were interested in adopting black children.

            I'd be worried that when they get to be adolescents and hang around people of their own race, their friends will convince them to rob and kill their adoptive parents.

          • (Score: 1, Insightful) by Anonymous Coward on Wednesday February 19 2020, @11:30PM

            by Anonymous Coward on Wednesday February 19 2020, @11:30PM (#960075)

            ^ is why you don't touch the subject. That AC is one step away from eugenics-based genocide.

            Also, your "taboo" topic is scientifically questionable and doesn't take into account multi-generational effects, which we are recently finding out do occur. Environmental stressors affect genetic expression, and no amount of racism-lite will make your garbage more palatable.

            I won't call you a racist since you're trying so hard to not be one, but the topic you are quivering about is simply not that important, and your whinging about affirmative action measures totally ignores the reality of systemic racism.

            I think it is fair, if you want to ignore systemic racism then we're gonna ignore your whinging about affirmative action taking yer jerrrrbs.

          • (Score: 2, Informative) by edIII on Wednesday February 19 2020, @11:49PM (1 child)

            by edIII (791) on Wednesday February 19 2020, @11:49PM (#960082)

            Wow. You've changed my mind. I don't know what to say. Clearly, the Negro is genetically inferior and we just don't want to accept it because of our feelz. You've proved that all the Negro needs is some good ol' white cock in them, not that we haven't been trying real hard for a couple hundred years mind you, but we could try harder.

            LOL, nice try Russian troll.

            Unfortunately, I'm worried your bullshit works on the white trash and white supremacists. Normal people like myself though remember plenty of black people that are highly intelligent good folks. Like an IT guy, teacher, businessman/politician, doctor, market analyst, and I'm probably forgetting other fine folks I've met.

            I've come across a few men, who happened to be black, who also happened to be quite thug-like. Certainly fits the description of violent and stupid. While one might be tempted to draw conclusions from that, I tend to remember the systemic racism and oppression from the war on drugs that was applied unequally to communities of color. So if we're going to be honest and get past our feelz, we need to accept that there was some engineering of those apparent failures. It's absolutely incorrect to attribute that to genetics, and not to sociopolitical pressures creating environments that breed crime, despair, material deprivation, and recidivism. Especially when said recidivism revolves around the production and consumption of marijuana, where enforcement has been applied very unequally relative to consumption. I'm leaving stuff out, because it's still even more complicated than that.

            Then finally, I've met my share of black children. Some perhaps not so smart, and some quite sharp and quick-witted. The only race to which I could possibly attribute a greater overall IQ is Asian, and I still know that probably has more to do with culture and home environment than genetics.

            Your attempts to convince people that genetics play a role, and that the average black man is barely above Forrest Gump, fall flat on their face for anyone living on the coasts or in major cities. When you leave those for rural areas, yeah, you can find some poor black communities, and probably some unsophisticated people. On the whole, though, hospitable and nice. Just like the poor white communities we all know exist, which have some bad apples in them too.

            For every low IQ black person you can find, we can match it with a low IQ individual of a different race. There's a fact for you.

            --
            Technically, lunchtime is at any moment. It's just a wave function.
            • (Score: 0) by Anonymous Coward on Thursday February 20 2020, @06:23AM

              by Anonymous Coward on Thursday February 20 2020, @06:23AM (#960214)

              Everything is on a curve. Brilliant people can have idiotic children, but it's unlikely. And stupid people can have brilliant children, but it's also just unlikely. So for instance IQ is just one measure of intelligence, but it's at least a reasonable proxy. And adult IQ, in the latest studies, is looking to be upwards of 80% [wikipedia.org] heritable. Heritability does not speak of the measured value itself, but of the difference between two samples. If sample 1 has an IQ of 100, and sample 2 has an IQ of 140 - we'd expect that 32+ of the points of difference there are attributable to genetics, on average. I'll get into the children thing in a minute. It's really, really interesting, and quite weird!

              So nothing I've suggested is saying that all blacks are dumb or that all East Asians (who tend to have the highest IQs) are smart. It simply says that if you take 1000 of each group, the percent that do end up smart or dumb, is going to be radically different. This is another thing that makes our Quixotic battle against everything that might be causing the discrepancy (beyond plain old genetics) all the more destructive. Because you can be led to believe, by pure randomness, that that windmill you just knocked down has you finally on the right path.

              When you get into socioeconomic factors things get even nastier, because it's so easy to reverse the order of causation. IQ is strongly correlated with wealth. And so it's easy to say 'wealth causes IQ', but that's pretty easy to disprove in a large number of ways. One is a simple logical problem. Wealth didn't always exist. It was created in some countries, but not in others. And the less naturally hospitable a country's geography is, the greater its trend towards wealth. Why might that be? I currently live right off the equator. It is absolutely beautiful. Great weather, bountiful lands, and an incredibly relaxed (to a fault) and friendly people. Even as a fool you could live off the land here without a concern in the world. The equator, in terms of basic human needs, is practically a utopia. By contrast, in less hospitable areas, particularly those further off the equator, if you don't build, prepare, and maintain the rather complex systems required for just basic survival - you die. One group had a selector for high IQ, the other did not. One group is now disproportionately prosperous, the other group remains disproportionately in poverty.

              ---

              Back to IQ and children. When you're young, environment plays quite a large role in IQ and general intelligence. However, as you age, your IQ trends towards that 80% heritability regardless of gains or losses during childhood. So for instance, in the Minnesota adoption study you can see that every single adopted child's IQ dropped from age 7 to age 17. The reason for that is that the children were raised in privileged households where the quality of the environment was much higher than normal, and this bumped up their measured results. But as the children aged into adolescence and adulthood, their IQs regressed mostly down to their genetic component. By contrast, you'd see the exact opposite for those who came from poor upbringings: environmental factors would result in a disproportionately low IQ when they were young, but this would gradually increase as they aged.

          • (Score: 2) by FatPhil on Thursday February 20 2020, @03:04AM (1 child)

            by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Thursday February 20 2020, @03:04AM (#960155) Homepage
            > They took a large number of well educated, wealthier whites who were interested in adopting black children.

            > Adopted children who came from two black parents had an average IQ at age 17 of 83.7. Adopted children who came from two white parents had an average IQ at age 17 of 101.5.

            Depending on what you mean by "came from", one of those sets wasn't even in the study, according to your description of it:
            If you meant "genetically from", then there were no adopted "black children" who "came from two white parents";
            If you meant "raised by", then there were no adopted "black children" who "came from two black parents".

            Clarity is paramount when communicating science. You do not display clarity, so you're not communicating science with your above post; one even has to question whether you've understood the science, if you're so unable to communicate it.
            --
            Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
            • (Score: 0) by Anonymous Coward on Thursday February 20 2020, @05:26AM

              by Anonymous Coward on Thursday February 20 2020, @05:26AM (#960205)

              I think it was obvious I meant genetically from. The distinction was not tautological. It was to draw the distinction between those genetically from 2 black parents, or 1 black parent and 1 white parent. The "accidental" control in that study, makes this an even more critical point of distinction.

      • (Score: 2) by JoeMerchant on Wednesday February 19 2020, @05:46PM

        by JoeMerchant (3937) on Wednesday February 19 2020, @05:46PM (#959928)

        But because of political correctness, you can't use it.

        It's not just political correctness, it's the law.

        --
        🌻🌻 [google.com]
      • (Score: 1, Interesting) by Anonymous Coward on Wednesday February 19 2020, @07:10PM

        by Anonymous Coward on Wednesday February 19 2020, @07:10PM (#959967)

        the fact that a people who make up 12% of the population are responsible for about 50% of homicides and other violent crimes.

        Those fucking violent bastards of Aryan/Teutonic descent! I say its time to lock them up and turn on the gas! Yes, the Final Solution is White Genocide. They will only do it again, if we let them live. And algorithm said so.

      • (Score: 5, Insightful) by meustrus on Wednesday February 19 2020, @07:23PM (10 children)

        by meustrus (4961) on Wednesday February 19 2020, @07:23PM (#959970)

        There's a medical diagnostic algorithm that predicts the likelihood of a hospital-acquired infection based on imaging (I don't remember the specifics - don't mod this Informative).

        It was performing better than expert doctors interpreting the same images. Then, researchers took a look at where in the images the AI was looking.

        It turns out the AI was looking down in the bottom right corner, where there was a watermark showing which facility produced the image.

        There is a high correlation between the hospital and the presence of hospital-acquired infections. But even though this is a valid criterion in the global sense, it makes for a very poor AI when looking at the population of a single hospital.
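
        That watermark shortcut can be sketched in a few lines (toy, invented data; a deliberately naive majority-vote "model", not the actual diagnostic system): pooled accuracy looks good, but within one hospital the model says nothing about any individual patient.

        ```python
        import random

        random.seed(0)

        # Toy data: each record is (site, infected). Site "A" has a high base
        # rate of hospital-acquired infection, site "B" a low one. These
        # numbers are invented purely for illustration.
        records = [("A", random.random() < 0.8) for _ in range(500)] + \
                  [("B", random.random() < 0.1) for _ in range(500)]

        # A deliberately naive "model" that looks only at the site watermark:
        # predict the majority outcome observed at each site.
        site_rate = {}
        for site in ("A", "B"):
            outcomes = [inf for s, inf in records if s == site]
            site_rate[site] = sum(outcomes) / len(outcomes)

        def predict(site):
            return site_rate[site] >= 0.5

        # Pooled accuracy looks impressive...
        pooled_acc = sum(predict(s) == inf for s, inf in records) / len(records)
        print(f"pooled accuracy: {pooled_acc:.2f}")

        # ...but within a single site the model gives every patient the same
        # answer, so it carries no information about WHICH patient is at risk.
        site_a = [(s, inf) for s, inf in records if s == "A"]
        preds = {predict(s) for s, _ in site_a}
        print(f"distinct predictions within site A: {len(preds)}")  # always 1
        ```

        The same pattern applies to any proxy feature: it predicts well across sites precisely because it encodes the site, and for exactly that reason it cannot rank individuals within a site.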

        Allowing racial discrimination in criminal justice has the same practical problem. It might produce a high correlation when used globally. But how is it supposed to determine anything about criminals in a racially-uniform population? What if you ported the AI to another state where different racial politics produce different correlations?

        These are some of the real, practical, scientific reasons to disallow racial discrimination in AIs. We want to know the underlying factors so we can generalize them, not just draw surface-level conclusions based on factors we would like to ultimately fix.

        Consider this: If the sentencing algorithm absolutely sentences blacks more punitively than it sentences whites, what does that do to the overall system? Does it help or hurt the goal of reducing crime generally? Have African-Americans historically responded to increased penalties by committing fewer crimes, or have they responded by losing faith in the criminal justice system generally and creating subcultures insulated from the wider system?

        ---

        I can't prove this, but I suspect the reason you would like to use racial discrimination is because it is politically incorrect. There is an idea going around that certain methods which are avoided on moral grounds, like racial discrimination, torture, and eugenics, are only avoided on moral grounds. That these methods would technically be effective, and we are just too soft or weak to use them.

        This is false. Morality is a complex thing, and any moral system with any longevity is based on something more than just feels.

        Consider the ubiquitous immorality of murdering strangers and stealing their belongings. Are we simply too weak to allow this clearly superior method of getting ahead?

        No. Murdering strangers and stealing their belongings makes it nearly impossible to build an advanced society, because accumulating the resources to do so will make you a target for murder and theft.

        Similarly, racial discrimination perpetuates a criminal underclass, torture generates far more lies than truths, and eugenics breeds genetic diseases.

        These things aren't just immoral because we can't stomach them. They're immoral because they do real harm to society.

        --
        If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
        • (Score: 0) by Anonymous Coward on Wednesday February 19 2020, @08:19PM (6 children)

          by Anonymous Coward on Wednesday February 19 2020, @08:19PM (#959988)

          I fully agree with you that any moral system with longevity is based on something more than just feels. In my opinion that "something" is indeed the exact opposite of feels - objective truth. Everybody claims to want to know the truth, but we don't really - because sometimes the truth sucks. 'So what'd you think of my speech?' 'How do I look in this dress?' Softball examples, because the ones that matter genuinely hurt, to the point that we try to push them out of our minds.

          So in this topic the goal is straightforward. You have an algorithm. You want to improve the accuracy of this algorithm. Is the algorithm going to be more, or less, accurate if you choose to exclude race? Why not also start sentencing women the same as we do men, for "fairness"? Few would argue against the fact that men are more naturally predisposed to criminality than women. Somehow put a billion babies on a new planet with no knowledge of anything off that planet, and you're going to see the men - both the weak and the strong - as the aggressors.

          It's a matter of genetics. I do not believe such inherent proclivities end at gender. And I think if you look at the data, worldwide, there is no logical case for such a belief. It's something we want to believe because it makes us feel better. I mean, I'd love to not believe what I do. It's the exact same way I felt as I turned from a Christian into some sort of agnostic, if not atheist. I didn't want to believe that, but I believe that as a thinking human I should always be guided, first and foremost, by the truth and my mind - even if I don't like where it takes me.

          • (Score: 4, Insightful) by meustrus on Wednesday February 19 2020, @09:58PM (4 children)

            by meustrus (4961) on Wednesday February 19 2020, @09:58PM (#960031)

            You want to improve the accuracy of this algorithm.

            No, I want to reduce recidivism. A biased algorithm is counter to that purpose.

            Why not also start sentencing women the same as we do men for "fairness"? Few would argue against the fact that men are more naturally predisposed to criminality than women.

            I would argue for equal time for an equal crime. Under your framework, that would (fairly) result in lesser sentencing for women. We don't need to "correct" this imbalance any more than Harvard needs to "correct" for too many Asians scoring well on standardized tests.

            But under what circumstances is recidivism likely? If you allow race to dominate the algorithm, it will detect the correlation and stop there. Remove race from the calculations, and I would expect it to pick up on more subtle differences in social situation, criminal connections, economic opportunity outside of crime, and willingness to admit wrongdoing and pay the price for those mistakes.

            That's just what I'd expect to see. I am a mere human, though, and apparently this "algorithm" is better at interpreting those variables than I am.

            What if the reason the algorithm is more accurate is because it disqualifies race as an input? Consider that when you or I see the mugshot of a black man, we might be more likely to instantly associate him with a bunch of the negatives I mentioned earlier: lack of a social support structure, links to gang members, little possibility of finding honest work, stubborn defiance to accept the judgement of the court. If the black man was actually an active churchgoer going to community college who got picked up on a drug bust when he went home over spring break, those associations would be dead wrong.

            And in this situation specifically, the system would punish exactly the kind of man we need influencing his old neighborhood, bringing home wealth, economic opportunity, and hope. It can have an impact on the crime rates of an entire community.

            As humans, our first impression of him as a typical thug would be hard to shake. Who's to say AI doesn't have the same first-impression bias? Wouldn't it be harsher on the man just because he's black, despite all the evidence he's trying to be a productive member of society?

            If a white man and a black man are living exactly the same life, the AI should be able to pick up on those details and make the same decision. Allowing it to know that the black man is black, and therefore associated with a bunch of gangbangers in a completely different situation, is just going to confuse things.

            --
            If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
            • (Score: 0) by Anonymous Coward on Thursday February 20 2020, @07:16AM (3 children)

              by Anonymous Coward on Thursday February 20 2020, @07:16AM (#960226)

              Ah! I think you're not seeing both sides of the coin here. Algorithms don't just "detect a correlation and stop there"; everything is given an appropriate *RELATIVE* weighting. By considering race you can actually reduce unfair sentencing for certain groups. Imagine, for instance, that being associated with individuals who have engaged in crime is, *in aggregate*, a strong predictor of future recidivism. Yet imagine that when you look at only the black population, they tend to have a disproportionately high rate of being associated with individuals who have engaged in crime. If that trait is not as predictive for blacks, then the algorithm will reduce its weighting for it when dealing with a black individual - which can result in fairer sentencing.

              Yet if you exclude race, then it would result in harsher sentences for blacks due to the disproportionate (but not necessarily predictive) dominance of this trait among blacks. Do you understand what I'm saying? This is a big part of the reason the whole "biased algorithms" panic was ironic nonsense, mostly spread by people who had no clue how things such as neural nets work. Algorithms are not biased; they simply take data and derive conclusions. Trying to limit that data to achieve desired outcomes is exactly how you bias algorithms, and it's a good way to have everything blow up in your face and actually become biased.

              In other words, Accuracy = Accuracy.
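
              The relative-weighting point above is essentially the omitted-variable effect, and it can be made concrete with invented counts (the groups, the trait, and every number below are purely hypothetical):

              ```python
              from fractions import Fraction

              # Toy counts: (cases, recidivists) per (group, trait-present).
              # Within group X the trait barely adds risk; within group Y it
              # is strongly predictive; but group X carries the trait far
              # more often.
              data = {
                  ("X", True):  (200, 60),
                  ("X", False): (50, 14),
                  ("Y", True):  (50, 30),
                  ("Y", False): (200, 40),
              }

              # Group-aware model: condition on (group, trait).
              aware = {key: Fraction(k, n) for key, (n, k) in data.items()}

              # Group-blind model: condition on the trait alone (pool groups).
              blind = {}
              for trait in (True, False):
                  n = sum(data[(g, trait)][0] for g in "XY")
                  k = sum(data[(g, trait)][1] for g in "XY")
                  blind[trait] = Fraction(k, n)

              # Within group X the trait adds only ~2 points of risk...
              print(float(aware[("X", True)]))   # 0.30
              print(float(aware[("X", False)]))  # 0.28

              # ...but the pooled model charges every trait-holder the blended
              # rate, so group-X trait-holders are scored as riskier (0.36)
              # than their actual rate (0.30).
              print(float(blind[True]))          # 0.36
              ```

              Whether dropping the group variable helps or hurts any given group depends entirely on how the counts fall; the point is only that pooling can move scores away from the group-conditional rates.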

              As for sentencing, equal time for equal crime sounds like the ideal until you consider reality. Two people are arrested for the same crime. Let's imagine that somehow you know every statement they make is genuine. One is repentant and expresses desire to reform and make amends. The other is remorseless and expresses a desire to get out so he can "off that bitch" who "got him arrested." This is why women receive much lesser penalties for the exact same crimes than men do. They tend to fall disproportionately more into the former group than the latter. This is also why sentencing takes into account past history. There's a difference between somebody who made a mistake and somebody who is clearly living a life of crime. The latter poses a much bigger threat to society and *not* keeping them locked away as long as possible is doing little more than punishing whoever their next victim will end up being.

              • (Score: 2) by meustrus on Thursday February 20 2020, @11:05PM (2 children)

                by meustrus (4961) on Thursday February 20 2020, @11:05PM (#960482)

                Neural networks are not perfect. If you allow them to find the cheap correlation, they will use the cheap correlation. They will see the rare black man whose gang associations are completely behind him as an outlier that can be safely ignored. Just like humans do.

                More importantly, the AI needs to be able to handle changing conditions in the future. What if its trainers hadn't provided it with any examples of upstanding black citizens who were in the wrong place at the wrong time? What if there are certain crimes where those outliers just haven't existed yet?

                If allowed to use racial bias, the AI will assume that he's more guilty because he's black, not that he's less guilty because the crime suddenly looks more like a completely different pattern.

                The practical effect could easily be to insert racial bias into areas that haven't had significant case studies. I'd hardly consider it correct to say that black men who committed financial crimes are more likely than similar white men to do so again, but an AI trained on race might well make that leap.

                Remember my original medical example. Just because the AI can predict outcomes really well based on the watermark from the lab that did the test does not mean that's a useful answer. We were trying to look at the medical facts, not the statistical context of those facts.

                --
                If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
                • (Score: 0) by Anonymous Coward on Friday February 21 2020, @07:13AM (1 child)

                  by Anonymous Coward on Friday February 21 2020, @07:13AM (#960622)

                  "AI" is such a peculiar field in that people who clearly have 0 experience in it, constantly think their let's say 'intuitive' understanding is somehow relevant. It's not. Most of everything you just said above is plainly wrong. The entire "magic" behind neural nets is precisely because of their ability to efficiently produce RELATIVE correlations that quickly fall outside our ability to mentally retain (let alone visualize or see at a glance). For instance there's only one arabic numeral that has a "/" in it. And that's a 7. However, it's not like the network will pick up on pixels 3,5,7 (if we're looking at a simplified 3x3 convolution of an entire image) and go 'wellp, must be a 7!' No, it instead looks at correlations between that specific datum and how it relates to its mapping of other numbers. For instance it could a poorly written 1 or part of an 8, etc.

                  Your 'don't let the "AI" know he's black' is exactly how you get incorrect and heavily biased outcomes. Biased input = biased output. And the fun thing is that the bias doesn't work like you might think. It could result in less severe sentencing, but it could also easily result in more severe sentencing. In my opinion the latter is probably more likely, due to the fact that even the blacks less inclined towards recidivism are going to be more likely to be surrounded by the environmental factors correlated with recidivism in other races.

                  So for instance, let's imagine you want the algorithm to stop telling you things are 7s (even when they are). So you force it to stop considering pixels 5 and 7, which happen to be strongly correlated with 7s. You're not going to get that result. You're just going to get some pretty stupid results with a very good chance that it's going to start showing you even more sevens since now factors that were not strongly correlated with 7 previously, suddenly become so. For instance anything with pixels 123 set is now very likely a 7, because of how you've biased your input.

                  If it was not clear in the above examples I was referencing a grid of the style:

                  123
                  456
                  789

                  This is, once again, why the pursuit of "truth" is the one and only correct ideology. You don't "fix" the truth by hiding parts of it. You just end up with even more broken results which you then flail about uselessly trying to fix with even more lies.
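
                  A toy sketch of the blinding effect (all data and probabilities here are invented, and a bare-bones logistic regression stands in for the network - this is an illustration, not anyone's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, is_seven):
    # 9-pixel "images" on the 123/456/789 grid; all probabilities invented.
    x = (rng.random((n, 9)) < 0.2).astype(float)   # sparse background noise
    if is_seven:
        x[:, [4, 6]] = rng.random((n, 2)) < 0.9    # pixels 5 and 7: strong cue
        x[:, :3] = rng.random((n, 3)) < 0.6        # pixels 1-3: weaker cue
    return x

X = np.vstack([sample(500, True), sample(500, False)])
y = np.array([1.0] * 500 + [0.0] * 500)

def train(X, y, steps=2000, lr=0.5):
    # Plain logistic regression trained by gradient descent.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

w_full, _ = train(X, y)

# "Stop considering pixels 5 and 7": blind the model to the strong cue.
X_cens = X.copy()
X_cens[:, [4, 6]] = 0.0
w_cens, _ = train(X_cens, y)

print("full:    ", w_full.round(2))
print("censored:", w_cens.round(2))
```

                  The blinded pixels end up with exactly zero weight, so the decision now rides entirely on the weaker top-row correlates: the censoring doesn't remove the signal, it just relocates it.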

                  • (Score: 2) by meustrus on Monday February 24 2020, @02:35PM

                    by meustrus (4961) on Monday February 24 2020, @02:35PM (#961833)

                    "AI" is such a peculiar field in that people who clearly have 0 experience in it, constantly think their let's say 'intuitive' understanding is somehow relevant. It's not.

                    That cuts both ways, AC. For what it's worth, I am a software engineer with limited but direct experience with the simpler AI models.

                    The thing is, nobody really understands how a trained deep learning AI comes to its conclusions. Some people understand the theory and the implementation. But it's nothing but hubris to insist that the AI is actually always correct.

                    Take the recent instance of Tesla Autopilot being fooled into thinking a 35 mph speed limit is actually 85. That problem is exactly in line with the common problem you brought up. The AI only knows what it's been trained on, and nothing more. Its trainers didn't think to include a 3 with a super long middle line, so it had to guess. It guessed wrong.

                    In this case, race is demonstrably not a deciding factor in recidivism. It is not any of pixels 1-9. It is a correlating factor. It's like there was a pixel 10, and whenever there is a pixel 10, the number is a 7 50% of the time. If you include the correlating factor, the AI will misinterpret poorly written 1s as 7s whenever pixel 10 is set.

                    Is that right? Statistically, yes. It is much more likely to be a 7 if you consider the correlating factor. But that doesn't mean it's factually correct or useful.
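
                    The base-rate arithmetic behind the "pixel 10" argument can be made concrete with a toy Bayes update (all numbers invented): a glyph that's a coin flip between 1 and 7 on its shape alone gets called a 7 purely because the broad group feature is set.

```python
# Toy Bayes update for the "pixel 10" argument (all numbers invented).
# Shape evidence alone says the ambiguous glyph is a 1 or a 7 with equal odds;
# "pixel 10" is a broad group feature that 7s carry twice as often as 1s.
prior_7, prior_1 = 0.5, 0.5      # from the glyph's shape alone
lik_7, lik_1 = 2 / 3, 1 / 3      # P(pixel 10 set | class), invented

post_7 = prior_7 * lik_7 / (prior_7 * lik_7 + prior_1 * lik_1)
print(post_7)  # 0.666...: every ambiguous glyph in the group gets called a 7
```

                    Statistically the call beats a coin flip; for the individual writer who actually wrote a sloppy 1, it's simply wrong - which is the distinction being drawn.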

                    Under your dogma, we should throw in all the correlations we can. Sexual orientation. Blood type. Astrological sign. What's the harm? It's all just more facts, right?

                    It isn't. These are all irrelevant categories that will probably produce some statistically significant correlation, but which are nothing but dangerous distractions.

                    The information they add is inextricably linked to the human bias that developed the system. We predetermined that these categories are statistically significant, and they cut people into such broad groups with somewhat similar circumstances that they are sure to produce some correlation.

                    But it's a lie to tell the AI that this correlation is more significant than all the others that were left out. The only correct realization of your information dogma would be a quark-by-quark full body subatomic scan. Anything lower resolution is pre-filtered.

                    Honestly, it's questionable to use an AI with reduced information at all. That's why Tesla Autopilot is worse at reading 3s than humans.

                    But to the extent we can make do with limited information, we need to be careful about what information we predetermined is significant. Nonessential correlating factors, especially those based on 19th century pseudoscience like race, must be limited as much as possible.

                    --
                    If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
          • (Score: 0) by Anonymous Coward on Thursday February 20 2020, @07:01AM

            by Anonymous Coward on Thursday February 20 2020, @07:01AM (#960220)

            I fully agree with you that any moral system with longevity is based on something more than just feels.

            And yet that's what all of them are ultimately based on anyway, even if they try to claim otherwise: Subjective values.

        • (Score: 2) by HiThere on Wednesday February 19 2020, @08:32PM (2 children)

          by HiThere (866) Subscriber Badge on Wednesday February 19 2020, @08:32PM (#959992) Journal

          FWIW, eugenics doesn't necessarily breed genetic diseases. It depends drastically on implementation. (More usually it just forwards some group's political agenda.)

          OTOH, rationally these days one would propose extensive use of CRISPR over eugenics to eliminate, say, color blindness.

          --
          Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
          • (Score: 2) by dry on Thursday February 20 2020, @06:32AM (1 child)

            by dry (223) on Thursday February 20 2020, @06:32AM (#960217) Journal

            Strictly speaking, the incest laws could be labeled a type of eugenics.

            • (Score: 0) by Anonymous Coward on Thursday February 20 2020, @07:04AM

              by Anonymous Coward on Thursday February 20 2020, @07:04AM (#960221)

              As well as violations of bodily autonomy and individual liberty, assuming the relationships are consensual.

      • (Score: 4, Insightful) by driverless on Thursday February 20 2020, @12:36AM

        by driverless (4770) on Thursday February 20 2020, @12:36AM (#960094)

        Another issue with the paper, not necessarily a problem but something to bear in mind, is that it was conducted in the US where the focus is on incarceration. In a large number of non-US countries the focus is on rehabilitation, so recidivism rates are much, much lower than in the US. This means that, regardless of the merits or otherwise of the study, it's only applicable to the US (and, presumably, other countries that focus on mass incarceration).

    • (Score: 3, Interesting) by JoeMerchant on Wednesday February 19 2020, @05:42PM (8 children)

      by JoeMerchant (3937) on Wednesday February 19 2020, @05:42PM (#959926)

      Who serves on parole boards? Why do they serve on parole boards? Do you think the recidivism potential is even 25% of the actual consideration in the parole decision?

      --
      🌻🌻 [google.com]
      • (Score: 3, Interesting) by ikanreed on Wednesday February 19 2020, @06:50PM (7 children)

        by ikanreed (3164) Subscriber Badge on Wednesday February 19 2020, @06:50PM (#959961) Journal

        1. In my state? State level politicians.
        2. You don't get voted out of office
        3. Yes, but it's certainly not the only criterion by any means.

        • (Score: 2) by JoeMerchant on Wednesday February 19 2020, @07:35PM (6 children)

          by JoeMerchant (3937) on Wednesday February 19 2020, @07:35PM (#959973)

          My local school board political hopeful summed it up most clearly one Tuesday night:

          "I will not spend any dollar that I am not required to spend, by law."

          Playing to her base, exactly what they want to hear. Same for State (and any other, these days) level politicians: they release the ones that will get them votes for releasing them early, and they retain the ones that will get them votes for retaining them. The voting public doesn't actually care as much about recidivism as about its prejudiced perception of which people are good and which are bad to let back on the streets.

          --
          🌻🌻 [google.com]
          • (Score: 2) by ikanreed on Wednesday February 19 2020, @07:53PM (5 children)

            by ikanreed (3164) Subscriber Badge on Wednesday February 19 2020, @07:53PM (#959980) Journal

            (They're appointed, not elected)

            • (Score: 2) by JoeMerchant on Wednesday February 19 2020, @08:06PM (4 children)

              by JoeMerchant (3937) on Wednesday February 19 2020, @08:06PM (#959983)

              Appointed by whom? Political appointees are even more politically polarized than the candidates themselves.

              --
              🌻🌻 [google.com]
              • (Score: 3, Interesting) by ikanreed on Wednesday February 19 2020, @08:19PM (3 children)

                by ikanreed (3164) Subscriber Badge on Wednesday February 19 2020, @08:19PM (#959987) Journal

                The governor.

                 And while you're not wrong, that particular case tends to mean ideology rather than populism, which, while similar in the general sense of politics affecting behavior, is distinct enough to be worth characterizing separately: there's no need to constantly wave each "success" in upholding ideology in front of voters.

                 And it can be tempered with the expectations of the office, particularly when impartiality is supposed to be important. It depends on the resiliency of your institutions, and honestly trust in those is at an all-time low, and for good reason.

                • (Score: 4, Informative) by JoeMerchant on Wednesday February 19 2020, @08:41PM (2 children)

                  by JoeMerchant (3937) on Wednesday February 19 2020, @08:41PM (#959997)

                  Lots of Governors get, and hold, office with a "tough on crime" PR campaign. Texas seems to love Governors who preside over record numbers of executions. And, although race is a "protected from discrimination" trait under federal law, that doesn't stop the locals from skewing things based on race just as far as they can without tripping the federal statutes into action - no, they don't call it racially based, that's just how it works out when you examine the final statistics.

                  A friend of ours is a Harvard PhD psychologist, slumming it in the local drug rehab program. She's only funded to process about 15% of the local cases, the other 85% just go straight to the pen. Her program's recidivism rates are less than 25% of the recidivism rates for convicts who don't get into her program, but... her funding is perpetually jerked around, mostly down. The judges, and the county that elects them, don't really care - they get some federal dollars for running her program, and that's basically the only reason they do it. There are cops in that county who joined the force with the sole agenda of bustin' niggers on drug charges - and that's basically all they do, all day long. Traffic stops (profiled, of course) are just an excuse to search for drugs. They're representative of a lot of the voters, not everywhere, but there - and a lot of other "red leaning" counties.

                  --
                  🌻🌻 [google.com]
                  • (Score: 2) by ikanreed on Wednesday February 19 2020, @08:48PM (1 child)

                    by ikanreed (3164) Subscriber Badge on Wednesday February 19 2020, @08:48PM (#960001) Journal

                     Yeah, and I wish we, collectively, were better, but it's not like you're gonna have law enforcement of all things done some other way than through politics. There's no social cadre of educated elites I'd trust with questions of right and wrong more than the stupid, asinine, often outright insane general public. Democracy sucks, but not as badly as being policed by your "betters".

                    • (Score: 2) by JoeMerchant on Wednesday February 19 2020, @09:25PM

                      by JoeMerchant (3937) on Wednesday February 19 2020, @09:25PM (#960017)

                       There's no social cadre of educated elites I'd trust

                       Oh, if anything they're worse - mostly because they're so perpetually sure that they are right. Even after having a huge in-their-face example of not knowing WTF they were talking about, they double down on just how right they're sure they are the next time.

                       I just wish there was a way to equalize the results - any time a policy comes down unevenly across the population, the self-identified suffering minority gets to pick any group that supported the policy to suffer the same effects. Sort of the "I'll divide this ice cream, and you get to pick who gets which half" scheme.

                      --
                      🌻🌻 [google.com]
  • (Score: 2, Funny) by Anonymous Coward on Wednesday February 19 2020, @03:04PM (5 children)

    by Anonymous Coward on Wednesday February 19 2020, @03:04PM (#959867)

    if (offender.color==orange) {probability = 1;}

    • (Score: -1, Offtopic) by Anonymous Coward on Wednesday February 19 2020, @03:35PM

      by Anonymous Coward on Wednesday February 19 2020, @03:35PM (#959880)

      Previous line:
      orange = "black";

    • (Score: 0) by Anonymous Coward on Wednesday February 19 2020, @04:17PM (3 children)

      by Anonymous Coward on Wednesday February 19 2020, @04:17PM (#959896)

      if (offender.color≠white) {probability = 100%;}

      • (Score: 0) by Anonymous Coward on Wednesday February 19 2020, @04:28PM (1 child)

        by Anonymous Coward on Wednesday February 19 2020, @04:28PM (#959900)

        if (offender.color≠white || offender.address==NULL) {probability = 100%;}

      • (Score: 0) by Anonymous Coward on Thursday February 20 2020, @02:56AM

        by Anonymous Coward on Thursday February 20 2020, @02:56AM (#960151)

        So albinos are the only trustworthy humans? Seems a bit specious.

  • (Score: -1, Troll) by Anonymous Coward on Wednesday February 19 2020, @03:56PM (7 children)

    by Anonymous Coward on Wednesday February 19 2020, @03:56PM (#959887)

    tools can help judges identify and potentially release people who pose little risk to public safety.

    So... not Democrats considering the refusal of their officials to enforce laws?

    • (Score: 0) by Anonymous Coward on Wednesday February 19 2020, @05:34PM (3 children)

      by Anonymous Coward on Wednesday February 19 2020, @05:34PM (#959918)

      Around here it's the Republican Sheriff that refuses to enforce laws. This is NY State, we have gun laws, the Sheriff is on record saying that he won't enforce them.

      • (Score: 0) by Anonymous Coward on Wednesday February 19 2020, @10:07PM (2 children)

        by Anonymous Coward on Wednesday February 19 2020, @10:07PM (#960037)

        Are these laws constitutional?

        • (Score: 0) by Anonymous Coward on Thursday February 20 2020, @01:18AM

          by Anonymous Coward on Thursday February 20 2020, @01:18AM (#960111)

          We may find out soon--
          https://en.wikipedia.org/wiki/New_York_State_Rifle_%26_Pistol_Association_Inc._v._City_of_New_York [wikipedia.org]

          In the meantime, they are the laws of NY State and the Sheriff works for the state, so I believe he should do his job - it's not for him to interpret the law.

        • (Score: 1, Insightful) by Anonymous Coward on Thursday February 20 2020, @01:55AM

          by Anonymous Coward on Thursday February 20 2020, @01:55AM (#960130)

          Clearly they are not. Any law that controls guns contradicts "shall not be infringed."

    • (Score: 4, Insightful) by edIII on Wednesday February 19 2020, @09:13PM (1 child)

      by edIII (791) on Wednesday February 19 2020, @09:13PM (#960011)

      Like the Republicans refusing to enforce the Constitution by removing Orange Anus? Those fundamentals?

      Like the Republicans refusing to allow people to vote them out by gerrymandering districts to retain their power based on racist agendas? You know, the stuff on that hard drive......

      Like the Republicans refusing to allow a Democratic President to appoint the Supreme Court justices he was absolutely entitled to appoint by law?

      Like the Republicans refusing to address voting security and Russian interference because it temporarily aligns with their goals?

      Like the Republicans redefining government in realtime to justify clearly illegal acts that are performed by banana republics? You know the kind of countries we've put in our movies for decades as "shitholes"?

      Uhuh. But Democrats bad! Liberuls bad! Socialism=Comminism! SQuuuueeeel like a pig boy! Sooowwwwwweeeeeee!

      --
      Technically, lunchtime is at any moment. It's just a wave function.
      • (Score: 1, Insightful) by Anonymous Coward on Wednesday February 19 2020, @11:44PM

        by Anonymous Coward on Wednesday February 19 2020, @11:44PM (#960079)

        If you look above you'll find a Jordan Peterson pupil flirting with eugenics and an AC immediately going as racist as possible with it.

        They are here, they don't want to admit they're racist, they don't want to admit Trump was a mistake. In fact they LOVE his racism, they LOVE that he is hurting brown people on our border. They LOVE that he does what he wants and gets away with it, cause that is what these authoritarian hate filled assholes WANT!

        Now I won't say it's a majority of SN by any means, but there is a small minority of batshit crazy Trumpettes, then you have the middling centrists like Runaway, TMB, Fustakrakchkch, Khallow, etc. who support the minority of batshit crazy assholes and never chime in against the vile shit posted around here so frequently.

        The good men are doing nothing, waiting for literal gas chambers before they'll get it through their thick skulls what is going on.

        The only hope seems to be overwhelming their methods of scamming the election in 2020 and implementing the voting security legislation that Mitch McConnell is blocking. Seriously, we have so many traitors just on this site undermining the values of freedom and democracy. Some simply playing propaganda games, the child's version of election fraud. It saddens me.

    • (Score: 0) by Anonymous Coward on Thursday February 20 2020, @09:15AM

      by Anonymous Coward on Thursday February 20 2020, @09:15AM (#960235)

      The crimes that people care about are mostly enforced, the main exceptions being white-collar crimes that basically aren't enforced because the criminals are generally well connected and can afford attorneys.

      The government always has to set priorities about which crimes to focus on and which ones to let slide. There's been a ton of focus on non-violent drug offenses when crimes like those committed by white-collar criminals that do ruin lives are let go. The only reason that Madoff wound up in prison for so many years is that he made the mistake of stealing from the rich and powerful rather than poor people with no money to hire attorneys.

  • (Score: 1, Offtopic) by Runaway1956 on Wednesday February 19 2020, @04:39PM

    by Runaway1956 (2926) Subscriber Badge on Wednesday February 19 2020, @04:39PM (#959903) Journal

    Algernon 'Consistently' More Accurate than People in Predicting Recidivism, Study Says

    If it makes zero sense to you, I suggest reading a short story. https://en.wikipedia.org/wiki/Flowers_for_Algernon [wikipedia.org]

  • (Score: 5, Interesting) by KilroySmith on Wednesday February 19 2020, @05:28PM (2 children)

    by KilroySmith (2113) on Wednesday February 19 2020, @05:28PM (#959916)

    So if we were intelligent animals, we would be going down this path in order to determine which offenders need the most help after release to avoid recidivism. Instead, sadly, we'll use this to algorithmically choose who gets freed early, and who serves out every minute of their sentence and gets followed by the police after release so they can be tossed back in the clink the moment they spit on the sidewalk. And, of course, the algorithm will determine that rich white men in politics or finance will be the "least likely" to offend, so they'll all get released early or never get sentenced to jail at all. So, no different than today.

    • (Score: 2) by JoeMerchant on Wednesday February 19 2020, @08:44PM

      by JoeMerchant (3937) on Wednesday February 19 2020, @08:44PM (#959999)

      if we were intelligent animals

      ... life would be so much simpler. Predictive behavior models would actually work. Incentive programs would work as intended. We might even structurally eliminate the tragedy of the commons.

      'tis not the world we live in.

      the algorithm will determine that rich white men in politics or finance will be the "least likely" to offend

      Don't need an algorithm for that, the judges (and the rest of them) recognize who is most likely to further their political aspirations and they treat them accordingly - already.

      'tis the world we live in.

      --
      🌻🌻 [google.com]
    • (Score: 0) by Anonymous Coward on Friday February 21 2020, @07:53AM

      by Anonymous Coward on Friday February 21 2020, @07:53AM (#960629)

      If we were intelligent animals, we wouldn't need prisons. We do because there's a lot of very stupid and very dangerous people with no self control and no ethical or moral compass.

      I could, in general, not care less about all the white collar crime or drug crime in the world. Live in a world where you should only place your money with those you trust, and people are free to do whatever they want to their own bodies? Sure, why not. It's the violent, petty, and idiotic crime that bothers me. It's the reason you need to lock down every single frigging thing you value - your home/car/bike/etc, why it's dangerous to walk down the street in most cities in "urban" areas at night, and so much more. It's a bit paradoxical. White collar crime, in terms of dollar amount, is almost certainly a much larger burden on society than petty crime - yet it's the latter that completely screws up society.

      For instance I live in a developing nation. And it's absolutely amazing what life is like in a nation where that sort of petty crime that's ubiquitous in the US is practically non-existent. There's one food court I quite like to eat lunch at. I'd say it's in a business district but it's also like a 2 minute walk from a red light district, so don't get the wrong impression. Anyhow the business folk also like to come there to eat lunch. It gets jam packed. Know how they claim tables? Generally by laying their purse/wallet/ID/etc on it. For somebody who spent their entire life in the US, this was like a scene from another planet. But it wasn't another planet - just a nation without a bunch of low IQ psychopaths screwing everything over for everybody else.

  • (Score: 2) by wisnoskij on Wednesday February 19 2020, @05:39PM (3 children)

    by wisnoskij (5149) <{jonathonwisnoski} {at} {gmail.com}> on Wednesday February 19 2020, @05:39PM (#959924)

    America has been trying to add algorithms to gauge reactivism for decades, but every time they do they find out that the algorithms are racist.

    • (Score: 0) by Anonymous Coward on Wednesday February 19 2020, @06:03PM

      by Anonymous Coward on Wednesday February 19 2020, @06:03PM (#959938)

      Yup, it seems like you [idiot] pigeonholed yourself correctly:

      https://www.urbandictionary.com/define.php?term=reactivism [urbandictionary.com]
      > reactivism
      > The philosophy of engaging in political activism but limiting that engagement to posting on social media.

    • (Score: 0) by Anonymous Coward on Wednesday February 19 2020, @06:18PM

      by Anonymous Coward on Wednesday February 19 2020, @06:18PM (#959947)

      If the shoe fits, then it's either true and/or racist.

    • (Score: 0) by Anonymous Coward on Thursday February 20 2020, @09:42PM

      by Anonymous Coward on Thursday February 20 2020, @09:42PM (#960455)

      Because they're studying recidivism in a society that has systematically oppressed its black population for centuries. A convicted felon of any other race will have issues, but the discrimination is much stronger for blacks. So, once a black person catches a criminal conviction (which is relatively likely, considering the discrimination and prejudice), they must either work incredibly hard and still likely fail to achieve anything, or surrender to reality and become a criminal. Morality looks a lot different when your perspective is from the bottom of the pile.

  • (Score: 3, Interesting) by meustrus on Wednesday February 19 2020, @07:39PM (10 children)

    by meustrus (4961) on Wednesday February 19 2020, @07:39PM (#959974)

    Is this article using "algorithm" as a synonym for "artificial intelligence"?

    Because those aren't the same thing.

    "Algorithm" is a mathematical formula, intelligently designed by humans aware of its implications.

    "Artificial intelligence" is a black box of inputs and outputs, evolved based on typically proprietary datasets.

    An algorithm is scientific, because it can be independently reproduced, critiqued, and improved. An AI is not scientific, because it can never be perfectly reproduced, its workings are not well understood, and the only way to improve it is to build a brand new one.

    I'm troubled by the increasing tendency of tech journalism to conflate these two. The way to elevate human capability is with the scientific method, not with pseudorandom Chinese Rooms [wikipedia.org].

    --
    If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
    • (Score: 0) by Anonymous Coward on Wednesday February 19 2020, @08:34PM

      by Anonymous Coward on Wednesday February 19 2020, @08:34PM (#959993)

      All AI is, is glorified regression analyses over an arbitrary number of variables. Algorithm is much more appropriate a term than AI, and I think the growing trend towards using it is a reflection of the increasingly evident limitations of AI. And yes - "AI" is 100% replicable. And yes you can improve it incrementally - that's where 99% of the work comes in, in fact. For instance you generally need to 'massage' both your input and output domains. And that's very much a process of incremental refinement.

      I worked in fintech. Built a system that crushes humans. Am more cynical than ever on the future of AI. For instance I'd set the over/under line on a true self driving 'free range' (e.g. - not just driving on premapped/processed routes) vehicle as 20 years. I'd take the over.

    • (Score: 3, Insightful) by HiThere on Wednesday February 19 2020, @08:43PM (1 child)

      by HiThere (866) Subscriber Badge on Wednesday February 19 2020, @08:43PM (#959998) Journal

      That's a good distinction, but I disagree that with an algorithm humans are "aware of its implications." They're usually aware of some of its implications, but rarely of even most, and here I'm just considering the subset called mathematics, where the implications are (often) in principle knowable.

      That said, one can reasonably tinker with an algorithm to improve it. One doesn't need to know all the implications to make improvements in the places one can measure. But this often comes at a cost in the places one isn't measuring.

      As for the difficulty of understanding the implications of an algorithm, I suggest you contemplate Conway's "Game of Life" as one that has been extensively studied.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 2) by JoeMerchant on Wednesday February 19 2020, @09:37PM

        by JoeMerchant (3937) on Wednesday February 19 2020, @09:37PM (#960020)

        But this often comes at a cost in the places one isn't measuring.

        Free enterprise, in a nutshell.

        --
        🌻🌻 [google.com]
    • (Score: 2) by JoeMerchant on Wednesday February 19 2020, @08:54PM

      by JoeMerchant (3937) on Wednesday February 19 2020, @08:54PM (#960003)

      I'm troubled by the increasing tendency of tech journalism to conflate these two.

      Get used to it. Tech journalism has always been (at least) two steps behind the leading practitioners in the field, and in this field most of the leading practitioners barely know what they are doing.

      --
      🌻🌻 [google.com]
    • (Score: 0) by Anonymous Coward on Wednesday February 19 2020, @11:56PM (4 children)

      by Anonymous Coward on Wednesday February 19 2020, @11:56PM (#960085)

      If you want to be technical

      Heuristics - Something that generally, but not always, gets the right answer.

      Algorithm - Something that always gets the right answer. It could be a mathematical formula used under the right circumstances such as the law of cosines

      AI - More difficult to define, but I think of it as something that seeks a desired outcome (probably using heuristics, since with algorithms it would get the desired outcome every time) and, if it doesn't get the desired outcome, tries to do more computation to figure out how it could get the desired outcome the next time it runs into the same situation, storing some information to help it avoid an undesired outcome next time. It tries to find better answers for next time, or close approximations to the right answer.

      An example could be a chess AI. The right answer would be a perfect move, but it may require way too much computation. But if the computer sees that making a specific move under a specific condition causes it to lose, it would try to figure out a better move and store the results for future games. It gets an answer that's closer to the perfect or 'right' answer.
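
      A minimal sketch of that store-what-lost idea (a toy two-move "game" with invented rules - nothing like a real chess engine's search):

```python
# Minimal sketch of the learn-from-losses idea above (toy two-move "game",
# invented rules - nothing like a real chess engine's search):
class LearningAgent:
    def __init__(self, moves):
        self.moves = moves
        self.penalty = {}      # (state, move) -> number of losses it fed into
        self.history = []      # (state, move) pairs played this game

    def choose(self, state):
        # Prefer the move with the fewest remembered losses from this state.
        best = min(self.moves, key=lambda m: self.penalty.get((state, m), 0))
        self.history.append((state, best))
        return best

    def game_over(self, won):
        # Store information only when the outcome was undesired.
        if not won:
            for pair in self.history:
                self.penalty[pair] = self.penalty.get(pair, 0) + 1
        self.history.clear()

# Toy environment: from state "s", move "a" always loses and "b" always wins.
agent = LearningAgent(["a", "b"])
for _ in range(3):
    move = agent.choose("s")
    agent.game_over(won=(move == "b"))

print(agent.choose("s"))  # prints: b  (one loss on "a" was enough to avoid it)
```

      The point of the sketch is the feedback loop: the stored penalties are exactly the "information to help it avoid getting an undesired outcome next time."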

      • (Score: 0) by Anonymous Coward on Thursday February 20 2020, @12:01AM

        by Anonymous Coward on Thursday February 20 2020, @12:01AM (#960087)

        It tries to find better answers for next time, or closer* approximations to the right answer.

      • (Score: 0) by Anonymous Coward on Thursday February 20 2020, @12:08AM (1 child)

        by Anonymous Coward on Thursday February 20 2020, @12:08AM (#960089)

        Let me delve into this a little bit more.

        An example of a heuristic might be an antivirus that uses checksums to evaluate whether something is a computer virus. These heuristics may not always be right, but they are generally right. But if the antivirus is wrong it will be wrong every time (if not updated); it doesn't have a way to evaluate the outcome and search for a better solution next time.

        A chess AI can evaluate the outcome. Did it win the game or lose? If it lost, it can then try to do more computations and store some information to help it win next time. For something to be intelligent it needs to be able to evaluate the outcome (i.e., determine whether it's desired or not) and seek a different set of actions next time if the outcome is undesired, so that it can get a desired outcome next time.
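
        The fixed checksum heuristic can be sketched in a few lines (the hashes and "payloads" are invented for illustration): it flags exact known bytes and learns nothing from its misses.

```python
import hashlib

# Sketch of the fixed checksum heuristic described above (hashes and
# "payloads" invented): flag exact known bytes, learn nothing from misses.
KNOWN_BAD = {hashlib.sha256(b"EVIL_PAYLOAD_v1").hexdigest()}

def looks_malicious(data: bytes) -> bool:
    # No feedback loop: if this answer is wrong, it stays wrong until
    # a human updates KNOWN_BAD.
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD

print(looks_malicious(b"EVIL_PAYLOAD_v1"))  # True: exact match
print(looks_malicious(b"EVIL_PAYLOAD_v2"))  # False: one-byte variant evades it
```

        That inability to notice or correct its own misses is what separates the fixed heuristic from the chess AI's feedback loop.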

        • (Score: 0) by Anonymous Coward on Thursday February 20 2020, @04:12PM

          by Anonymous Coward on Thursday February 20 2020, @04:12PM (#960323)

          Can't ya read the signs, boy? No loitering. No littering. No diving. No delving. Move along now.

      • (Score: 2) by FatPhil on Friday February 21 2020, @12:32AM

        by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Friday February 21 2020, @12:32AM (#960514) Homepage

        > Algorithm - Something that always gets the right answer.

        Nope. Totally utterly nope. That's so wrong I don't know where to start. It's probably not even wrong.

        Here's my algorithm for working out the best move at chess given an input of a board position (plus ancillae):
        1) If in check move out of check, with a preference to forwards over backwards, then left over right
        2) If in check and the above failed, move the highest valued piece that can block in the way, tie-break on movement forwards, then leftwards
        3) If in check and the above fail, capture the attacking piece with the highest valued piece that can capture, tie-break on movement forwards, then leftwards
        4) Else push the backmost outermost pawn that can move without discovering check by one forwards, with a preference of left over right
        5) Else move the backmost outermost piece that can move without discovering or moving into check by the smallest possible (L_inf) distance, with a preference to not capturing over capturing, then forwards over backwards, then left over right

        Precisely what do you think is "the right answer" about what it returns?
        It's well defined, it's deterministic, and it always terminates with a suggested move, so it's most definitely an algorithm. (And most amazingly, I think it even follows the rules - I had to revisit it about 4 times to add more clauses.)
        --
        Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 0) by Anonymous Coward on Thursday February 20 2020, @05:24PM

      by Anonymous Coward on Thursday February 20 2020, @05:24PM (#960356)

      "Because those aren't the same thing"

      The distinction is whether the algo is known. But yes, they are both algos. An engineered algo is designed. An AI algo is arrived at by continuous experimentation, with the results maintained in code created through repeated (typically randomly seeded) experiments. Ultimately the AI algo can be derived from the state in its neurons, but that is generally not done, because why bother.

      So yes, AI is an algo. It is just derived in a different way.

      Regarding the OP, parole boards are a discrimination engine. Their purpose is to discriminate between who will reoffend and who won't. The purpose of doing this with AI is to externalize responsibility, not to be more accurate. Institutional racism as a quantitative concept can be retained in an AI-based system. An algo can be just as racist as a person. But it does put a nice buffer of plausible deniability between the institutionally racist organization and the public.

      Which is to say that you are looking at the spawn of skynet. A problem where people are willing to cede their most basic freedoms to a computer, just so they don't have to take responsibility.

  • (Score: 0) by Anonymous Coward on Friday February 21 2020, @04:28AM

    by Anonymous Coward on Friday February 21 2020, @04:28AM (#960586)

    ...or near future.. [youtube.com]

(1)