
posted by Fnord666 on Monday October 28 2019, @10:41AM   Printer-friendly
from the at-what-cost? dept.

Submitted via IRC for soylent_green

A health care algorithm affecting millions is biased against black patients

A health care algorithm makes black patients substantially less likely than their white counterparts to receive important medical treatment. The major flaw affects millions of patients, and was just revealed in research published this week in the journal Science.

The study does not name the makers of the algorithm, but Ziad Obermeyer, an acting associate professor at the University of California, Berkeley, who worked on the study, says "almost every large health care system" is using it, as well as institutions like insurers. Several different companies produce similar algorithms. "This is a systematic feature of the way pretty much everyone in the space approaches this problem," he says.

The algorithm is used by health care providers to screen patients for "high-risk care management" intervention. Under this system, patients who have especially complex medical needs are automatically flagged by the algorithm. Once selected, they may receive additional care resources, like more attention from doctors. As the researchers note, the system is widely used around the United States, and for good reason. Extra benefits like dedicated nurses and more primary care appointments are costly for health care providers. The algorithm is used to predict which patients will benefit the most from extra assistance, allowing providers to focus their limited time and resources where they are most needed.

To make that prediction, the algorithm relies on data about how much it costs a care provider to treat a patient. In theory, cost could act as a proxy for how sick a patient is. But by studying a dataset of patients, the authors of the Science study show that, because of unequal access to health care, much less is spent on treating black patients than on similarly sick white patients. The algorithm doesn't account for this discrepancy, producing a startlingly large racial bias against treatment for black patients.
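The mechanism is easy to reproduce on synthetic data. The sketch below is hypothetical and is not the published algorithm: all numbers (the 0.7 spending factor, the 10% flagging cutoff, the illness scale) are made-up assumptions. It only illustrates how a model that ranks patients by past cost, when spending is unequal at the same level of illness, flags equally sick patients at very different rates.

```python
import random

random.seed(0)

def simulate(spending_factor, n=10_000):
    """Return (illness, cost) pairs; cost is illness scaled by access to care."""
    patients = []
    for _ in range(n):
        illness = random.uniform(0, 10)        # true medical need
        cost = illness * spending_factor       # observed spending
        patients.append((illness, cost))
    return patients

# Per the study's finding, assume less is spent on black patients than on
# similarly sick white patients (the 0.7 factor is an arbitrary choice).
white = simulate(spending_factor=1.0)
black = simulate(spending_factor=0.7)

# A cost-trained model effectively ranks patients by cost. Flag the top 10%
# by cost across the pooled population, then compare flag rates among the
# equally sick (illness above 5 on the 0-10 scale).
all_costs = sorted(c for _, c in white + black)
threshold = all_costs[int(len(all_costs) * 0.9)]

def flag_rate(patients):
    sick = [c for illness, c in patients if illness > 5.0]
    return sum(c >= threshold for c in sick) / len(sick)

print(f"flag rate among sick white patients: {flag_rate(white):.2f}")
print(f"flag rate among sick black patients: {flag_rate(black):.2f}")
```

Sick patients in the lower-spending group almost never clear the cost threshold, even though their underlying illness distribution is identical.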


Original Submission

  • (Score: 2) by Non Sequor (1005) on Monday October 28 2019, @05:37PM (#912894) Journal (1 child)

    Single-thread computational performance seems to be saturated at this point, which means that unless P=NC, computational constraints will remain significant for a core of P-complete optimization problems that don't benefit from parallelism. Techniques that depend on brute-force model inference without a priori assumptions will scale poorly. They may be very good in domains where the rules stay fixed, but if the rules change enough that the trade-offs and approximations implicitly embedded in the model shift substantially, it's back to square one. If the rules change so that the new optimal model is more complex, the machine time spent running your adversarial AI or what have you will grow exponentially with the model complexity.

    There is also the issue that the optimal model for a data set may differ from the optimal communicable model for that data set. For a third party to deem a model prediction credible, it must be possible to relate the model's decision to externally verifiable information. You wouldn't want to assume legal liability for a model whose decision criteria seem arbitrary and capricious. Even if it's right, being unable to discern why it is right is a significant problem. Humans already exhibit social structures organized around this reality, and the expectation would be that as AI adapts to deal with it, it will also take on more human limitations.

    Collecting more data has its own limitations. Data collection and curation have real physical costs, so there is an inherent bias towards data sources that are either easier to collect or have a more developed data infrastructure. Inference on large numbers of variables is fraught with peril without a priori knowledge or strong, strictly linear relationships.
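    The peril of inference on many variables can be shown with a small simulation (the sample size and variable counts below are arbitrary assumptions): given enough pure-noise candidate variables, one of them will correlate with the target by chance alone.

```python
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

n_samples = 30
target = [random.gauss(0, 1) for _ in range(n_samples)]

# Screen pools of pure-noise variables and record the best correlation
# with the target found in each pool.
best = {}
for n_vars in (10, 1000):
    best[n_vars] = max(
        abs(corr([random.gauss(0, 1) for _ in range(n_samples)], target))
        for _ in range(n_vars)
    )
    print(f"{n_vars:>4} noise variables: best |r| = {best[n_vars]:.2f}")
```

    The more variables you screen, the stronger the best spurious correlation looks, which is exactly why a priori knowledge matters.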

    Quantum systems offer only a square-root improvement for most problems; improvements beyond that rely specifically on embedding modular arithmetic in the phase component of physical quantities. Neuromorphic systems? Simulating a brain exactly in silicon will be harder than projected, because the trade-offs between computational density, communication, and the expendability of individual elements resolve differently for silicon than for neural tissue. You don't know what parameters you can tweak without affecting how it thinks.
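    The square-root improvement mentioned above is Grover's algorithm for unstructured search; a quick back-of-the-envelope sketch (query counts only, no quantum simulation) shows what "quadratic speedup" buys:

```python
import math

def classical_queries(n):
    """Worst-case unstructured search: examine every item."""
    return n

def grover_queries(n):
    """Grover's algorithm needs roughly (pi/4) * sqrt(N) oracle queries."""
    return math.ceil((math.pi / 4) * math.sqrt(n))

for n in (10**3, 10**6, 10**9):
    print(f"N = {n:>10}: classical ~{classical_queries(n)} queries, "
          f"Grover ~{grover_queries(n)} queries")
```

    A billion-item search drops from a billion queries to a few tens of thousands, which is substantial but nothing like the exponential speedup of Shor-style phase tricks.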

    The world is going to continue to be largely inscrutable. You’ll see efficiency improvements in a variety of areas but only limited progress in discerning what the best ways to arrange our affairs might be. We’re still going to be squabbling over things that we think we know.

    --
    Write your congressman. Tell him he sucks.
  • (Score: 0) by Anonymous Coward on Monday October 28 2019, @06:33PM (#912914)

    The lowest-hanging fruit, single-thread performance, will still improve by at least another order of magnitude through clock-speed increases and on-chip memory. It will be an expensive lunch, but one we will thoroughly enjoy.

    Quantum, neuromorphic, and other architectures will be used even if only for niche tasks. Neuromorphic systems also don't have to simulate a brain or work exactly like a group of neurons, although that is a worthy pursuit that could unlock many benefits.