

posted by Fnord666 on Monday October 28 2019, @10:41AM   Printer-friendly
from the at-what-cost? dept.

Submitted via IRC for soylent_green

A health care algorithm affecting millions is biased against black patients

A health care algorithm makes black patients substantially less likely than their white counterparts to receive important medical treatment. The major flaw, which affects millions of patients, was revealed in research published this week in the journal Science.

The study does not name the makers of the algorithm, but Ziad Obermeyer, an acting associate professor at the University of California, Berkeley, who worked on the study, says "almost every large health care system" is using it, as are institutions like insurers. Similar algorithms are produced by several different companies as well. "This is a systematic feature of the way pretty much everyone in the space approaches this problem," he says.

The algorithm is used by health care providers to screen patients for "high-risk care management" intervention. Under this system, patients who have especially complex medical needs are automatically flagged by the algorithm. Once selected, they may receive additional care resources, like more attention from doctors. As the researchers note, the system is widely used around the United States, and for good reason. Extra benefits like dedicated nurses and more primary care appointments are costly for health care providers. The algorithm is used to predict which patients will benefit the most from extra assistance, allowing providers to focus their limited time and resources where they are most needed.

To make that prediction, the algorithm relies on data about how much it costs a care provider to treat a patient. In theory, this could act as a substitute for how sick a patient is. But by studying a dataset of patients, the authors of the Science study show that, because of unequal access to health care, black patients have much less spent on their treatment than similarly sick white patients. The algorithm doesn't account for this discrepancy, leading to a startlingly large racial bias against treatment for black patients.
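
To make the cost-as-proxy effect concrete, here is a minimal simulation sketch (illustrative numbers only, not the study's data or any vendor's actual model): two groups are equally sick on average, but one has roughly 30% less spent on it per unit of sickness, and ranking patients by cost then flags the sickest members of that group far less often.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    low_access = rng.integers(0, 2, n)     # 1 = group with less access to care
    sickness = rng.gamma(2.0, 1.0, n)      # latent health need, same distribution in both groups

    # Hypothetical access gap: the same sickness produces ~30% less spending in the low-access group.
    spend_factor = np.where(low_access == 1, 0.7, 1.0)
    cost = sickness * spend_factor * rng.lognormal(0.0, 0.25, n)

    # The "algorithm": rank patients by cost and flag the top 3% for high-risk care management.
    flagged = cost >= np.quantile(cost, 0.97)

    # Among the genuinely sickest 3%, the low-access group is flagged much less often.
    sickest = sickness >= np.quantile(sickness, 0.97)
    for g in (0, 1):
        sel = sickest & (low_access == g)
        print(f"low_access={g}: {flagged[sel].mean():.1%} of the sickest patients flagged")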


Original Submission

 
  • (Score: 4, Interesting) by Non Sequor on Monday October 28 2019, @11:37AM (6 children)

    by Non Sequor (1005) on Monday October 28 2019, @11:37AM (#912745) Journal

    We've seen that some people worry about AI being too effective. Recently someone said you should consider the harm an AI programmed to drive a car from point A to point B might do if it gives no priority to the safety of non-passengers or property. That is a real concern in one sense, but it's not really any different from the concerns around heavy machinery. Look at something like farm equipment: you switch it on and it performs its function regardless of whether it is appropriate to do so. The equipment can be used safely because it has been tested for effectiveness at its intended task, it has safety mechanisms that have also been tested for effectiveness, and the user bears some responsibility for using it appropriately. AI isn't any different. It needs to be tested for effectiveness both at its intended task and in its safety mechanisms, and users need to be responsible for actual deployment.

    Unconstrained general-purpose optimization is an intractable problem. To solve an optimization problem, you have to have enough prior information about the nature of solutions, impose artificial constraints, or allow approximations. These assumptions all require testing, regardless of the quality of the methods you use.
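
    As a small sketch of that point (toy data, not tied to any particular system): with more unknowns than observations, plain least squares fits the data perfectly in infinitely many ways, and only an added assumption (here an L2 penalty, i.e. "prefer small coefficients") picks out a single answer, whose quality then depends on whether that assumption matches reality.

        import numpy as np

        rng = np.random.default_rng(1)
        n_samples, n_features = 20, 200    # far more unknowns than data points
        X = rng.normal(size=(n_samples, n_features))
        true_w = np.zeros(n_features)
        true_w[:5] = 1.0
        y = X @ true_w + rng.normal(scale=0.1, size=n_samples)

        # Unconstrained fit: essentially zero training error, but just one of infinitely many such solutions.
        w_free = np.linalg.lstsq(X, y, rcond=None)[0]

        # Ridge: the added assumption makes the answer unique; whether it is a good answer still needs testing.
        lam = 1.0
        w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

        print("training error, unconstrained:", np.linalg.norm(X @ w_free - y))
        print("distance to true weights, unconstrained:", np.linalg.norm(w_free - true_w))
        print("distance to true weights, ridge:", np.linalg.norm(w_ridge - true_w))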

    In the example of medical data sets, an estimator that is optimal relative to the empirical distribution of the data is not optimal relative to the intended use. The empirical data contains situations where a more extensive intervention could have been successful but was precluded by the patient's inability to pay, and as a result, fitting to the data will prejudge medical conditions that are correlated with inability to pay. Many people have an image of a perfect AI as a master of deductive reasoning that could identify this kind of distinction and draw conclusions from it, but the fact is that deductive reasoning is much less efficient at problem solving than inferential reasoning. Inferential techniques in AI are more successful than deductive techniques, and all of the recent development in AI has centered on methods of using a large number of computational cycles to develop a reasonably effective inferential black box that serves as an estimated model of a data set.
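
    A tiny sketch of that gap between "optimal for the data" and "optimal for the intended use" (toy numbers, purely illustrative): if the recorded outcome is care actually delivered, which gets truncated when a patient cannot pay, then the best fit to the records attaches a negative weight to low ability to pay even though true need is independent of it.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 50_000
        low_access = rng.integers(0, 2, n)   # 1 = limited ability to pay
        need = rng.gamma(2.0, 1.0, n)        # true need, independent of ability to pay
        delivered = np.where(low_access == 1, np.minimum(need, 1.5), need)  # care capped by ability to pay

        # Least-squares fit of the outcome on [1, low_access]:
        X = np.column_stack([np.ones(n), low_access])
        coef_recorded = np.linalg.lstsq(X, delivered, rcond=None)[0]
        coef_intended = np.linalg.lstsq(X, need, rcond=None)[0]
        print("fit to delivered care (what the records support):", coef_recorded)
        print("fit to true need (the intended use):             ", coef_intended)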

    Deductive reasoning in humans seems to be used more heavily as a means of reducing the complexity of sets of facts by searching for contradictions and deriving common principles that explain multiple facts. People actually use deductive reasoning in reverse, and that makes sense from a computational-complexity perspective, because it's not usually a very effective method in the forward direction. AI is not going to change this setup. AI will allow us to use some of our problem-solving tools in a more precise and repeatable manner, but it doesn't eliminate the sources of bias or the inherent difficulty of estimating outcomes in complex systems.

    --
    Write your congressman. Tell him he sucks.
  • (Score: 0) by Anonymous Coward on Monday October 28 2019, @12:37PM (2 children)

    by Anonymous Coward on Monday October 28 2019, @12:37PM (#912757)

    You are going to see orders-of-magnitude improvements in computation and data collection, along with the introduction of neuromorphic and quantum systems. Some limitations will be broken.

    That doesn't mean that all problems will be solvable by a computer. Some amount to predicting the future, and their solutions will never exceed a certain level of accuracy.

    We are using some very biased and tiny data sets today. People are being paid to collect more while hopefully not introducing more bias.

    • (Score: 2) by Non Sequor on Monday October 28 2019, @05:37PM (1 child)

      by Non Sequor (1005) on Monday October 28 2019, @05:37PM (#912894) Journal

      Single-thread computational performance seems to have saturated at this point, which means that unless P=NC, computational constraints will remain significant for a core of P-complete optimization problems that don't benefit from parallelism. Techniques that depend on brute-force model inference without a priori assumptions will scale poorly, which means that while they may be very good for domains where the rules stay fixed, if the rules change enough that the trade-offs and approximations implicitly embedded in the model change a lot, then it's back to square one. If the rules change so that the new optimal model is more complex, the machine time spent running your adversarial AI or what have you will grow exponentially with the model complexity.

      There is also the issue that the optimal model for a data set may be different from the optimal communicable model for a data set. For a third party to deem a model prediction credible, it needs to be possible to relate the model’s decision to externally verifiable information. You wouldn’t want to assume legal liability for a model with decision criteria that seem arbitrary and capricious. Even if it’s right, being unable to discern why it is right is a significant problem. Humans already exhibit social structures organized around this reality and the expectation would be that as AI is adapted to deal with it, it will also take on more human limitations.
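
      One generic way to work toward that kind of external credibility (a sketch of a standard technique, not anything specific to the system in the article): fit a small, human-readable surrogate to the black box's own predictions and report how faithfully it tracks them, so a third party has a decision rule they can actually inspect.

          from sklearn.datasets import make_classification
          from sklearn.ensemble import RandomForestClassifier
          from sklearn.tree import DecisionTreeClassifier, export_text

          X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

          # The opaque model whose decisions need to be justified to a third party.
          black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
          bb_pred = black_box.predict(X)

          # A shallow tree trained to mimic the black box (not the original labels).
          surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_pred)
          fidelity = (surrogate.predict(X) == bb_pred).mean()

          print(f"surrogate agrees with the black box on {fidelity:.1%} of cases")
          print(export_text(surrogate, feature_names=[f"x{i}" for i in range(10)]))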

      Collecting more data has its own limitations. Data collection and curation have real physical costs. You have an inherent bias towards data sources that are either easier to collect or have a more developed data infrastructure. Inference on large numbers of variables is fraught with peril without a priori knowledge or strong, strictly linear relationships.

      Quantum systems only offer a square-root improvement for most problems; improvements greater than that rely specifically on embedding modular arithmetic in the phase component of physical quantities. Neuromorphic systems? Simulating a brain exactly in silicon is going to be harder than projected, because the trade-offs between computational density, communication, and expendability of individual elements are different for silicon than for neural tissue. You don't know what parameters you can tweak without affecting how it thinks.

      The world is going to continue to be largely inscrutable. You’ll see efficiency improvements in a variety of areas but only limited progress in discerning what the best ways to arrange our affairs might be. We’re still going to be squabbling over things that we think we know.

      --
      Write your congressman. Tell him he sucks.
      • (Score: 0) by Anonymous Coward on Monday October 28 2019, @06:33PM

        by Anonymous Coward on Monday October 28 2019, @06:33PM (#912914)

        The lowest-hanging fruit of single-thread performance will go up by at least another order of magnitude with clock-speed increases and on-chip memory. It will be an expensive lunch, but one we will thoroughly enjoy.

        Quantum, neuromorphic, and other architectures will be used even if only for niche tasks. Neuromorphic also does not have to attempt to simulate a brain or work exactly like a group of neurons, although that is a worthy pursuit that could unlock many benefits.

  • (Score: 2, Informative) by Anonymous Coward on Monday October 28 2019, @03:27PM

    by Anonymous Coward on Monday October 28 2019, @03:27PM (#912834)

    Someone chose an easier-to-obtain value as a proxy for another. It turned out to be a bad proxy in some fraction of cases. That happens all the time in programming, "AI" or not.
    What this demonstrates is the importance of pre-release testing on a maximally diverse set of real-life examples. No one can predict all the quirks that happen in real data.
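
    One concrete form such a test can take (a sketch; the names df, risk_score, n_conditions, group, and validation_df are placeholders, not anything from the study): check whether patients who are equally sick by an independent measure, such as the number of active chronic conditions, receive the same scores in every group.

        import pandas as pd

        def audit_by_group(df: pd.DataFrame) -> pd.DataFrame:
            """Mean risk score per (illness level, group) cell, plus the gap between groups."""
            table = (df.groupby(["n_conditions", "group"])["risk_score"]
                       .mean()
                       .unstack("group"))
            table["gap"] = table.max(axis=1) - table.min(axis=1)
            return table

        # Usage on a held-out set: print(audit_by_group(validation_df)); large gaps at the
        # same n_conditions are exactly the pattern the Science paper reports.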

  • (Score: 2) by fadrian on Monday October 28 2019, @04:35PM (1 child)

    by fadrian (3194) on Monday October 28 2019, @04:35PM (#912880) Homepage

    Hey! Current connectionist AIs are airplanes with wings that flap, and that's perfect! After all, birds have wings that flap, so a plane must too! Our brains are connectionist in nature, therefore so must be our AI models! Q! E! D! Plus, it's always easy to throw more compute time at a problem...

    So it's the same old, same old... Laziness+trendiness+inertia->what gets studied. I know it's delivered a lot of practical results (and nothing wrong with that), but as a step towards generic AI, it's probably another dead end.

    I'm seeing a standard technology adoption curve developing with systems that are amenable to these approaches - but right now it's looking like just another technology. If you're waiting for generic AI to usher in the singularity, I think you'll be waiting a bit longer.

    --
    That is all.
    • (Score: 2) by Non Sequor on Monday October 28 2019, @06:13PM

      by Non Sequor (1005) on Monday October 28 2019, @06:13PM (#912908) Journal

      I'm just waiting for recognition that communicability of results and external credibility need to be primary design criteria for machine learning. These methods develop strictly nontransferable models.

      I'll note that notions of position, material, and tempo advantage exist in both chess and checkers, but to the best of my knowledge you can't isolate the portions of a neural net that embody these concepts for either game and use them to seed an AI for the other. Developing abstractions that generalize conclusions to broader circumstances does not necessarily jibe with the pure-optimization concept of intelligence. As a tactical game, chess teaches the notion that if you use forces on a battlefield effectively, you can find opportunities to eliminate enemies with low risk to your own troops, which can be used to further advantage. The AI learns how to do this under a specific rule set but doesn't encapsulate the principle at play. Descriptions of the play style of recent AI chess programs sound as if they may sometimes succeed by abandoning that principle and making moves that appear reckless, in circumstances specifically induced by the rules of chess.

      We have a long way to go to be able to see how effectively we are using these kinds of techniques.

      --
      Write your congressman. Tell him he sucks.