posted by Fnord666 on Monday October 28 2019, @10:41AM   Printer-friendly
from the at-what-cost? dept.

Submitted via IRC for soylent_green

A health care algorithm affecting millions is biased against black patients

A health care algorithm makes black patients substantially less likely than their white counterparts to receive important medical treatment. The major flaw affects millions of patients and was revealed this week in research published in the journal Science.

The study does not name the makers of the algorithm, but Ziad Obermeyer, an acting associate professor at the University of California, Berkeley, who worked on the study, says "almost every large health care system" is using it, as are institutions like insurers. Similar algorithms are produced by several other companies as well. "This is a systematic feature of the way pretty much everyone in the space approaches this problem," he says.

The algorithm is used by health care providers to screen patients for "high-risk care management" intervention. Under this system, patients who have especially complex medical needs are automatically flagged by the algorithm. Once selected, they may receive additional care resources, like more attention from doctors. As the researchers note, the system is widely used around the United States, and for good reason. Extra benefits like dedicated nurses and more primary care appointments are costly for health care providers. The algorithm is used to predict which patients will benefit the most from extra assistance, allowing providers to focus their limited time and resources where they are most needed.
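
A minimal sketch of that screening step, with invented numbers (this is not the vendor's code, and the 97th-percentile cutoff is an assumption for illustration): score every patient with a predictive model, then auto-flag everyone above a fixed percentile for the care-management program.

    import numpy as np

    rng = np.random.default_rng(42)
    risk_scores = rng.random(10_000)   # stand-in for a model's per-patient predictions

    FLAG_PERCENTILE = 97               # assumed cutoff for auto-enrollment
    cutoff = np.percentile(risk_scores, FLAG_PERCENTILE)
    flagged = risk_scores >= cutoff    # these patients are offered extra resources

    print(f"{flagged.sum()} of {len(risk_scores)} patients flagged for care management")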

To make that prediction, the algorithm relies on data about how much it costs a care provider to treat a patient. In theory, cost could act as a substitute for how sick a patient is. But by studying a dataset of patients, the authors of the Science study show that, because of unequal access to health care, far less is spent on treating black patients than on similarly sick white patients. The algorithm doesn't account for this discrepancy, producing a startlingly large racial bias against treatment for black patients.
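
The mechanism is easy to demonstrate with a toy simulation. The numbers below are invented; the only assumption carried over from the study is that equally sick black patients generate lower costs. Ranking by cost then under-flags black patients relative to ranking by true sickness:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    black = rng.random(n) < 0.5          # half the cohort, for illustration
    sickness = rng.normal(size=n)        # true health need, identical across groups

    # Assumed gap: less is spent on black patients at the same level of sickness.
    cost = sickness + np.where(black, -0.5, 0.0) + rng.normal(scale=0.5, size=n)

    # Flag the top 3% by cost (the proxy) vs. the top 3% by true sickness.
    by_cost = cost >= np.percentile(cost, 97)
    by_need = sickness >= np.percentile(sickness, 97)

    print("black share among cost-flagged:", round(black[by_cost].mean(), 3))
    print("black share among the sickest: ", round(black[by_need].mean(), 3))

In a run like this, the sickest 3% are about half black by construction, while the cost-flagged group skews noticeably white. That gap is the shape of the bias the study describes, not its measured size.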


Original Submission

 
  • (Score: 2) by fadrian (3194) on Monday October 28 2019, @04:35PM (#912880) Homepage (1 child)

    Hey! Current connectionist AIs are the airplane with flapping wings, and they're perfect! After all, birds have wings that flap, so a plane must too! Our brains are connectionist in nature, therefore our AI models must be as well! Q! E! D! Plus, it's always easy to throw more compute time at a problem...

    So it's the same old, same old... laziness + trendiness + inertia -> what gets studied. I know it's delivered a lot of practical results (and there's nothing wrong with that), but as a step toward general AI, it's probably another dead end.

    I'm seeing a standard technology adoption curve developing around systems amenable to these approaches, but right now it looks like just another technology. If you're waiting for general AI to usher in the singularity, I think you'll be waiting a bit longer.

    --
    That is all.
  • (Score: 2) by Non Sequor (1005) on Monday October 28 2019, @06:13PM (#912908) Journal

    I’m just waiting for recognition that communicability of results and external credibility need to be primary design criteria for machine learning. These methods develop strictly nontransferable models.

    I’ll note that notions of position, material, and tempo advantage exist in both chess and checkers, but to the best of my knowledge you can’t isolate the portions of a neural net that embody these concepts for either game and use them to seed an AI for the other. Developing abstractions that generalize conclusions to broader circumstances does not necessarily jibe with the pure-optimization concept of intelligence. As a tactical game, chess teaches the notion that if you use forces on a battlefield effectively, you can find opportunities to eliminate enemies at low risk to your own troops, which can be turned into further advantage. The AI learns how to do this within a specific rule set but doesn’t encapsulate the principle at play. Descriptions of the play style of recent AI chess programs suggest they may sometimes succeed by abandoning that principle, making moves that appear reckless but are sound only under circumstances specifically induced by the rules of chess.
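
    One way to even pose that question empirically is a linear "probe", sketched below with purely synthetic data and scikit-learn (the setup is invented; it illustrates the probing technique, not any result about real chess or checkers engines): train a small value network, then test whether a named concept like material advantage is linearly decodable from its hidden layer.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Toy "positions": 10 features. "Material" is the sum of the first 5;
        # the value target mixes material with a nonlinear positional term.
        X = rng.normal(size=(5000, 10))
        material = X[:, :5].sum(axis=1)
        value = material + np.tanh(X[:, 5] * X[:, 6])

        net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
        net.fit(X, value)

        # Recompute hidden activations by hand (relu is the default activation).
        hidden = np.maximum(0, X @ net.coefs_[0] + net.intercepts_[0])

        # Linear probe: how well do the hidden units encode "material"?
        probe = LinearRegression().fit(hidden, material)
        print("probe R^2 for material advantage:", round(probe.score(hidden, material), 3))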

    We have a long way to go to be able to see how effectively we are using these kinds of techniques.

    --
    Write your congressman. Tell him he sucks.