
SoylentNews is people

posted by Fnord666 on Monday October 28 2019, @10:41AM   Printer-friendly
from the at-what-cost? dept.

Submitted via IRC for soylent_green

A health care algorithm affecting millions is biased against black patients

A health care algorithm makes black patients substantially less likely than their white counterparts to receive important medical treatment. The major flaw affects millions of patients and was revealed in research published this week in the journal Science.

The study does not name the makers of the algorithm, but Ziad Obermeyer, an acting associate professor at the University of California, Berkeley, who worked on the study, says "almost every large health care system" is using it, as are institutions like insurers. Similar algorithms are produced by several other companies. "This is a systematic feature of the way pretty much everyone in the space approaches this problem," he says.

The algorithm is used by health care providers to screen patients for "high-risk care management" intervention. Under this system, patients who have especially complex medical needs are automatically flagged by the algorithm. Once selected, they may receive additional care resources, like more attention from doctors. As the researchers note, the system is widely used around the United States, and for good reason. Extra benefits like dedicated nurses and more primary care appointments are costly for health care providers. The algorithm is used to predict which patients will benefit the most from extra assistance, allowing providers to focus their limited time and resources where they are most needed.

To make that prediction, the algorithm relies on data about how much it costs a care provider to treat a patient. In theory, this could act as a substitute for how sick a patient is. But by studying a dataset of patients, the authors of the Science study show that, because of unequal access to health care, much less is spent on treatment for black patients than for similarly sick white patients. The algorithm doesn't account for this discrepancy, leading to a startlingly large racial bias against black patients in treatment decisions.
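The mechanism can be made concrete with a toy sketch. This is not the study's model: the patient records, field names, and numbers below are all invented for illustration. The only point it demonstrates is the one the article describes: when spending is used as the proxy for sickness, and less is spent on one group at the same level of illness, a cost-ranked program flags the wrong patients.

```python
# Toy illustration of cost-as-proxy bias (all data invented, not from the study).

def risk_score_by_cost(patient):
    # The deployed approach described in the article: recorded spending
    # stands in for health need.
    return patient["annual_cost"]

def risk_score_by_sickness(patient):
    # The alternative: score on a direct measure of health.
    return patient["num_chronic_conditions"]

patients = [
    # Two equally sick pairs, but unequal spending due to unequal access to care.
    {"id": "A", "group": "white", "num_chronic_conditions": 4, "annual_cost": 9000},
    {"id": "B", "group": "black", "num_chronic_conditions": 4, "annual_cost": 6000},
    {"id": "C", "group": "white", "num_chronic_conditions": 2, "annual_cost": 7000},
    {"id": "D", "group": "black", "num_chronic_conditions": 2, "annual_cost": 4000},
]

def top_half(score):
    # Flag the top half of patients for the high-risk care program.
    ranked = sorted(patients, key=score, reverse=True)
    return {p["id"] for p in ranked[: len(ranked) // 2]}

print(top_half(risk_score_by_cost))      # flags A and C: both white, one only mildly ill
print(top_half(risk_score_by_sickness))  # flags A and B: the two sickest patients
```

The cost proxy enrolls a mildly ill white patient ahead of a severely ill black one, even though race never appears as an input to the score.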


Original Submission

 
  • (Score: 4, Interesting) by choose another one on Monday October 28 2019, @01:30PM (3 children)

    by choose another one (515) Subscriber Badge on Monday October 28 2019, @01:30PM (#912780)

    If they can demonstrate that equally wealthy white patients get better care than equally wealthy black patients, then they are on to something. Likewise, if they can show that dirt poor white people get better care than dirt poor black people, then they have something. That "something" may or may not be a race bias, but they'll have something.

Yup. What you have is, in woke terms, "indirect discrimination" - or possibly the correct term is something else this week; apologies, I don't really keep up with this stuff all that regularly. What they will have, on the other hand, is another research grant (always follow the money).

    The algorithm is not biased, in fact it's probably colour blind, but as a colour blind person I get offended by that use of the term, so we'll stick with non biased.

The health care system, however, _is_ biased, in favour of something, or not, and the algorithm uses data from that system, so the algorithm _result_ is biased, some way, somehow, and therefore the algorithm is "wrong". Wrong algorithms get corrected by being forced to introduce a compensating bias in favour of whichever group we are in favour of this week, and that is now likely to happen. Later, some other group disadvantaged by the now provably biased algorithm will sue for discrimination because the algorithm is biased - see Harvard admissions for example.

    Who'd be an algorithm maker, or an admissions tutor? - damned if you don't, damned when you do.

FWIW my opinion is that life would be a whole lot easier and fairer if we just focused on levelling the playing field rather than saying "look, this group has been running up hill, we need to dig out a downhill slope to compensate for them". Right now the various playing fields of life are dug up all over the place and _everyone_ feels like _they_ are running up hill, and as a result pretty much everyone is pissed off with it. What we actually need is a big f-off roller and a big spirit level: flatten the field and then may the best win. But there'll be some loser who isn't happy with a flat playing field either, sigh. Maybe what we actually need is a crate of molotovs and a few large tanks.

  • (Score: 2, Insightful) by Anonymous Coward on Monday October 28 2019, @03:28PM (2 children)

    by Anonymous Coward on Monday October 28 2019, @03:28PM (#912835)

    Or to summarize, the algorithm isn't biased, but because there's bias in the data it relies on, it produces biased results.

    Garbage in, garbage out.

    • (Score: 2) by DeathMonkey on Monday October 28 2019, @05:44PM (1 child)

      by DeathMonkey (1380) on Monday October 28 2019, @05:44PM (#912897) Journal

      This is the only correct answer. Wokeness not required.

      • (Score: 2) by FatPhil on Tuesday October 29 2019, @06:57PM

        by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Tuesday October 29 2019, @06:57PM (#913379) Homepage
It's not just garbage input (into the AI), it's garbage that depends on the AI's previous output. Prior use of this model has, through positive feedback, helped create the very situation it now reports as real. Had initial conditions been even moderately different, a wildly different conclusion could have been drawn, I'm sure.
        --
        Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
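The feedback loop described in the comment above can be sketched with a toy simulation. Everything here is invented (the groups, the growth rates, the ten-year horizon); it only illustrates the dynamic, not the real system: the cost-based score flags the higher-spend group for extra care, that care shows up as more recorded spending, and next year's score rises in turn, so a small initial gap compounds without any change in underlying health.

```python
# Toy positive-feedback loop (all numbers invented): spending drives the score,
# and the score drives spending.

costs = {"group_1": 1.00, "group_2": 0.95}  # nearly identical starting spend

for year in range(10):
    flagged = max(costs, key=costs.get)  # cost-based score picks one group
    costs[flagged] *= 1.10               # extra care adds ~10% recorded cost
    for g in costs:
        costs[g] *= 0.99                 # mild shared cost drift for everyone

gap = costs["group_1"] / costs["group_2"]
print(f"cost ratio after 10 years: {gap:.2f}")  # grows far beyond the initial ~1.05
```

Because the initially higher-spend group wins the flag every year, the recorded-cost ratio ends up several times larger than where it started, which is the "initial conditions" sensitivity the comment points at.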