posted by janrinok on Wednesday June 01 2022, @09:41AM
from the not-a-good-outcome dept.

AI systems can detect patient race, creating new opportunities to perpetuate health disparities:

Can computers figure out your race by looking at your wrist bones or lungs? Yes, according to a study published this month in the prestigious scientific journal The Lancet Digital Health. That's not the whole story, though: the bigger issue is that researchers don't know how the machines do it.

The findings come after months of work by a team of experts in radiology and computer science led by Judy W. Gichoya, MD, assistant professor and director of the Healthcare Innovations and Translational Informatics Lab in Emory University School of Medicine's Department of Radiology and Imaging Sciences. Additional Emory researchers include Hari Trivedi, MD, assistant professor of radiology and imaging sciences; Ananth Bhimireddy, MS, systems software engineer; and computer science student Zachary Zaiman. The team also includes colleagues from Georgia Tech, MIT, Stanford, Indiana University-Purdue University and Arizona State, plus experts in Canada, Taiwan and Australia.

The team used large-scale medical imaging datasets from both public and private sources, containing thousands of chest x-rays, chest CT scans, mammograms, hand x-rays and spinal x-rays from racially diverse patient populations.

They found that standard deep learning models—computer models developed to help speed the task of reading and detecting things like fractures in bones and pneumonia in lungs—could predict with startling accuracy the self-reported race of a patient from a radiologic image, despite the image having no patient information associated with it.

"The real danger is the potential for reinforcing race-based disparities in the quality of care patients receive," says Gichoya. "In radiology, when we are looking at x-rays and MRIs to determine the presence or absence of disease or injury, a patient's race is not relevant to that task. We call that being race agnostic: we don't know, and don't need to know someone's race to detect a cancerous tumor in a CT or a bone fracture in an x-ray."

The immediate question was whether the models, also known as artificial intelligence (AI), were determining race based on what researchers call surrogate covariables. Breast density, for example, tends to be higher in African American women than in white women, and research shows Black patients tend to have higher bone mineral density than white patients, so were the machines reading breast tissue density or bone mineral density as proxies for race? The researchers tested this theory by suppressing the availability of such information to the AI processor, and it still predicted patient race with alarming accuracy: more than 90 percent.
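
For readers who want a concrete picture of the experimental setup, here is a minimal sketch, not the study's code: fine-tune an off-the-shelf image model on radiographs labeled only with self-reported race, then measure accuracy on held-out images. The dataset directory layout, backbone and hyperparameters below are illustrative assumptions, not details from the paper.

    # Minimal sketch (not the study's code): train a standard deep
    # learning model to predict self-reported race from radiographs.
    # The xrays/ directory layout is a hypothetical assumption.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    # Grayscale films replicated to 3 channels for an ImageNet backbone.
    tf = transforms.Compose([
        transforms.Grayscale(num_output_channels=3),
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # Hypothetical layout: xrays/{train,test}/<self_reported_race>/*.png
    train_ds = datasets.ImageFolder("xrays/train", transform=tf)
    test_ds = datasets.ImageFolder("xrays/test", transform=tf)
    train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
    test_dl = DataLoader(test_ds, batch_size=32)

    model = models.resnet34(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for _ in range(5):  # a few epochs is enough to see the effect
        for x, y in train_dl:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    # Held-out accuracy; the study reports >90% on this kind of task.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in test_dl:
            correct += (model(x).argmax(1) == y).sum().item()
            total += y.numel()
    print(f"race-prediction accuracy: {correct / total:.1%}")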

Even more surprising, the AI models could determine race more accurately than complex statistical analyses developed specifically to predict race based on age, sex, gender, body mass and even disease diagnoses.

The AI models worked just as well on x-rays, mammograms and CT scans, and were effective no matter which body part was imaged. Finally, the deep learning models still correctly predicted self-reported race when images were deliberately degraded to ensure the quality and age of the imaging equipment wasn't signaling socioeconomic status, which in turn could correlate with race. Fuzzy images, high-resolution images downgraded to low resolution, and scans clipped to remove certain features did not significantly affect the AI models' ability to predict a patient's race.
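
A sketch of that degradation test, reusing the model and test loader from the sketch above; the specific blur, resolution and crop parameters are illustrative guesses, not the paper's:

    # Sketch of the robustness check: degrade each image, then see
    # whether the race prediction survives. Parameters are illustrative.
    import torch
    import torchvision.transforms.functional as TF
    from torchvision import transforms

    degradations = {
        "blurred": transforms.GaussianBlur(kernel_size=15, sigma=4.0),
        "low_res": transforms.Compose([
            transforms.Resize(32),          # discard fine detail
            transforms.Resize((224, 224)),  # scale back up, now fuzzy
        ]),
        "clipped": transforms.CenterCrop(112),  # drop peripheral features
    }

    model.eval()
    for name, degrade in degradations.items():
        correct = total = 0
        with torch.no_grad():
            for x, y in test_dl:
                x = degrade(x)
                if x.shape[-1] != 224:  # re-inflate cropped images
                    x = TF.resize(x, [224, 224])
                correct += (model(x).argmax(1) == y).sum().item()
                total += y.numel()
        print(f"{name}: accuracy {correct / total:.1%}")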

[...] The real fear, Gichoya says, is that all AI model deployments in medical imaging are at great risk of causing great harm.

"If an AI model starts to rely on its ability to detect racial identity to make medical decisions, but in doing so produces race-specific errors, clinical radiologists will not be able to tell, thereby possibly leading to errors in health-care decision processes. That will worsen the already significant health disparities we now see in our health care system," explains Gichoya.

And because of that danger, the team already is working on a second study. They will not stop at detecting bias, Gichoya says. "This ability to read race could be used to develop models that actually mitigate bias, once we understand it. We can harness it for good."

Journal Reference:
Judy W. Gichoya et al., AI recognition of patient race in medical imaging: a modelling study, The Lancet Digital Health (2022). https://www.thelancet.com/journals/landig/article/PIIS2589-7500(22)00063-2/fulltext (DOI: 10.1016/S2589-7500(22)00063-2)


Original Submission

  • (Score: 3, Interesting) by bradley13 on Wednesday June 01 2022, @10:09AM (17 children)

    by bradley13 (3053) Subscriber Badge on Wednesday June 01 2022, @10:09AM (#1249393) Homepage Journal

    The information is there. The person's gender is also readable, and probably their rough age.

    If this information is not relevant to the diagnosis, then the system should simply not provide it.

    It's no different from a million other interactions. What can a system know about you from, say, the websites you visit? Want to bet that race is pretty obvious? Where you live - also a good proxy.

    --
    Everyone is somebody else's weirdo.
    • (Score: 1, Informative) by Anonymous Coward on Wednesday June 01 2022, @10:41AM (3 children)

      by Anonymous Coward on Wednesday June 01 2022, @10:41AM (#1249399)

      I wholeheartedly agree with "If this information is not relevant to the diagnosis, then the system should simply not provide it."
      But this is not about the system _providing_ it. This is about the system _incorporating_ it when it shouldn't, and then recommending types of treatment for an individual based on this irrelevant information. And these would be treatments that were deemed 'good' or 'appropriate' _for that race, based on training data that contained incredibly heavy bias_, or in other words: 'good/appropriate' for race X is not the same 'good' that would have been applied to people of another race. To put it bluntly, the bias encodes: "Person of race X is not worth treatment Y, but person of race Z is worthy thereof". The bias may not be encoded explicitly like that - although I wouldn't put it past them - but it's implied by the data...

      To pick up on the "Where you live - also a good proxy": you are correct, this is indeed a good proxy for race. In the past, redlining enforced this. The behavior exhibited by this AI system is nothing but redlining continued, this time on more than just housing, and with the ability to say it happened "on a computer" (i.e. we wash our hands in innocence).

      One of the contributing factors for this has been the land-grab for data: just suck everything up and find a way to incorporate every single data point into the calculation, regardless of relevance. Any found correlation must be causation and surely is meaningful, right (http://www.tylervigen.com/spurious-correlations)? It is p-hacking at its worst! And it's driven by the commoditization of our 'selves' by those who seek to be your Digital Feudal Lords.
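
      (A toy illustration of that trap, with entirely synthetic numbers: generate purely random features and a purely random target, and the strongest correlation found still looks "significant" if tested naively.)

        # Toy demo of "any found correlation must be meaningful":
        # 1000 noise features vs. a noise target still yields a
        # naively "significant" correlation somewhere.
        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(100, 1000))  # pure noise features
        y = rng.normal(size=100)          # unrelated target

        r = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(1000)]
        print(f"strongest 'signal' in pure noise: r = {max(r):.2f}")
        # Typically r > 0.3, well past the naive p < .01 cutoff (~0.26).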

      The state of AI and ML today is nothing more than a perpetuation of the past. It's not intelligent, it's not learning, but it is artificial all right! What it does give operators of this technology is a way to claim innocence and to refer to the Deus In Machina, the Deified Algorithm That Shall Be Obeyed!

      Too bad that this past it's perpetuating was not exactly our brightest moment, eh?

      • (Score: 2) by HiThere on Wednesday June 01 2022, @03:20PM

        by HiThere (866) on Wednesday June 01 2022, @03:20PM (#1249450) Journal

        No, it's not incorporating the information, it's detecting the information based on ???
        I'm not really sure how this should be dealt with. Perhaps the system should just refuse to guess what the race is, and proceed to predict best practices for treatment. But I don't know, and neither do the researchers, because they don't know what the prediction is based on.

        For some reason I'm thinking about the system that was trained to recognize the difference between US tanks and Russian tanks, and did an excellent job. But it turned out that it was deciding based on the time of day the pictures were taken. Once you got away from the training data set it was a total failure.

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 3, Interesting) by Mykl on Thursday June 02 2022, @12:48AM (1 child)

        by Mykl (1112) on Thursday June 02 2022, @12:48AM (#1249577)

        If I were to let an AI run amok and scoop up every medical record imaginable, and if the outcome of that were that certain treatments seemed to work better for certain races, I think I'd want to know that and apply accordingly.

        Your statement implies that the developers want the AI system to deliberately withhold treatment from certain races - you'd have to explicitly encode that into its objectives, and I'm sure that isn't the case.

        • (Score: 0) by Anonymous Coward on Thursday June 02 2022, @07:00PM

          by Anonymous Coward on Thursday June 02 2022, @07:00PM (#1249904)

          The concern is the system incorrectly withholding appropriate treatment due to race. Doing as you say and improving treatment by taking into account racial factors is what they mean by 'using it for good'.

    • (Score: 5, Informative) by janrinok on Wednesday June 01 2022, @10:53AM (5 children)

      by janrinok (52) Subscriber Badge on Wednesday June 01 2022, @10:53AM (#1249401) Journal

      The summary clearly says "despite the image having no patient information associated with it." All the AI had was the imagery. It doesn't know anything about the patient. And it could still identify the race - that is what is concerning the people who are conducting this study. They don't know what the AI can see that humans have not yet seen.

      All your claims about "What can a system know about you from, say, the websites you visit?" are completely irrelevant. No such information was provided to the AI system.

      • (Score: 1, Interesting) by Anonymous Coward on Wednesday June 01 2022, @02:31PM (2 children)

        by Anonymous Coward on Wednesday June 01 2022, @02:31PM (#1249438)

        First, that means they deliberately trained a neural network to determine race from the scans.

        Second, since the resulting neural network manages to determine race based on the scans, that means there is apparently some actual difference between the MRIs of different races that the neural network managed to pick up on.

        The question now is what difference the neural network picked up on, and whether it is some difference in the way the scans were done, or some actual subtle physical difference between the races.

        • (Score: 2) by HiThere on Wednesday June 01 2022, @03:24PM

          by HiThere (866) on Wednesday June 01 2022, @03:24PM (#1249451) Journal

          No, though they may have. Or it could be just an associated variable in the training data that tended to cluster somehow when they trained it to detect medical problems.

          --
          Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
        • (Score: 0) by Anonymous Coward on Thursday June 02 2022, @07:08PM

          by Anonymous Coward on Thursday June 02 2022, @07:08PM (#1249906)

          This has been reported on before. It wasn't deliberate, expected, or even thought possible, and we still don't know how or why it happens. IIRC, the original study just used chest x-rays while this is a broader analysis.

      • (Score: 3, Insightful) by Anonymous Coward on Wednesday June 01 2022, @09:09PM (1 child)

        by Anonymous Coward on Wednesday June 01 2022, @09:09PM (#1249543)

        Any forensic pathologist can tell race and sex, and make a good guess as to age, etc., from looking at bones. Because they are allowed to notice. Pretty safe bet the doctors reading those images can make pretty accurate guesses too; they simply refuse to. Machines don't know when to lie, though; we haven't trained them that well yet. Race is a lot deeper than just clicking a color for your avatar in a game.

        Bottom line is they probably should figure out what the AI is noticing, but they wouldn't be allowed to publish it, so why bother? Just try to keep out biases that would lead to bad results and let the AI take race into account to the extent it can see differences in both the images and the results of various treatments. Peoples vary, their rates of various maladies vary, their response to treatments varies. Humans are forbidden from noticing or saying these things, but there is no reason we can't allow the machines to. So long as they aren't taught to discriminate and hate, let them identify the best options for a patient.

        • (Score: 0) by Anonymous Coward on Thursday June 02 2022, @09:37PM

          by Anonymous Coward on Thursday June 02 2022, @09:37PM (#1249963)

          > Because they are allowed to notice.

          The hook nose is a giveaway for the Jews. And an oversize bottom lip, for slurping up water melon, is also a giveaway - for kooks and clowns.

    • (Score: 1, Offtopic) by Opportunist on Wednesday June 01 2022, @11:54AM (6 children)

      by Opportunist (5545) on Wednesday June 01 2022, @11:54AM (#1249411)

      Where you live is probably more interesting to the average hospital than your race. Think they care more about your skin color or the content of your wallet? And where you live probably says more about the latter than the former.

      • (Score: 2) by HiThere on Wednesday June 01 2022, @03:28PM (5 children)

        by HiThere (866) on Wednesday June 01 2022, @03:28PM (#1249452) Journal

        It really depends on what you mean when you say "to the average hospital". If you're talking about official policy, then I'm certain you're correct. But the policies are implemented by individual people, who have their own goals, motives, and purposes. There have been a very large number of reports of quite wealthy black patients who received drastically substandard care. I'm sure that wasn't the official policy, but it happened anyway, because of "judgement calls" made by individual care-givers.

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
        • (Score: 2) by Opportunist on Wednesday June 01 2022, @06:13PM (2 children)

          by Opportunist (5545) on Wednesday June 01 2022, @06:13PM (#1249496)

          If you're working for me, you better follow my policy. Otherwise you're free to seek other employment. If you want to be racist, be it on your own time and money.

          • (Score: 2) by HiThere on Thursday June 02 2022, @12:27AM (1 child)

            by HiThere (866) on Thursday June 02 2022, @12:27AM (#1249570) Journal

            "If you're working for me, you better follow my policy."
            That can work reasonably well in a small business. But that requirement doesn't scale well, unless you're willing to pay for a LOT of supervision, and supervision of the supervisors and...

            FWIW, my wife was on the opposite end of that. She was an entertainer, and liked to cut out stand-up paper animals and give them to people. She got a lot better care in the hospitals than did most people, except when she was really sick, and unable to interact well.

            --
            Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
            • (Score: 0) by Anonymous Coward on Thursday June 02 2022, @07:12PM

              by Anonymous Coward on Thursday June 02 2022, @07:12PM (#1249909)

              It also works the other way as well. There have been many, many instances of racism being 'unofficially' enforced by upper management. I use scare quotes because it only becomes *un*official when they might get in trouble for it.

        • (Score: -1, Troll) by Anonymous Coward on Wednesday June 01 2022, @06:38PM (1 child)

          by Anonymous Coward on Wednesday June 01 2022, @06:38PM (#1249502)

          Yeah, right. Niggers whine about everything and treat everyone like shit while they do it. Just ask any restaurant worker. No non-black person should have to provide any service to a Nigger in the first place. They can go the fuck home to Dindustan and let Dindu doctors work on them.

          • (Score: 2) by Opportunist on Friday June 03 2022, @08:51AM

            by Opportunist (5545) on Friday June 03 2022, @08:51AM (#1250157)

            Really? In my experience, the average Karen is a white, middle-class soccer mom with no pastime aside from making the lives of actually working people miserable.

  • (Score: 5, Insightful) by Runaway1956 on Wednesday June 01 2022, @10:49AM (9 children)

    by Runaway1956 (2926) Subscriber Badge on Wednesday June 01 2022, @10:49AM (#1249400) Homepage Journal

    It could be good that the AI can see gender, age, race, and possibly more. Whether it be a human doctor or an AI, the knowledge that one group of people is prone to a condition that other groups are not can be useful. I mean, nobody in their right mind immediately orders a test for sickle cell anemia if the patient is white. However, if that apparently white patient has a black heritage, then the test might be appropriate. Sounds like the AI might outsmart the doctor in a case like that, because it identifies black traits that the doctor is missing?

    If you routinely hide data from your health care provider, such as your racial background, you are creating your own disparity in outcome. Would you also lie about the history of diabetes, or heart disease, or kidney disease in your family history?

    --
    Abortion is the number one killer of children in the United States.
    • (Score: 2) by inertnet on Wednesday June 01 2022, @01:42PM

      by inertnet (4071) Subscriber Badge on Wednesday June 01 2022, @01:42PM (#1249429) Journal

      That's what I was thinking. This should result in better health care overall, providing optimal diagnostics for everyone.

    • (Score: 5, Insightful) by Immerman on Wednesday June 01 2022, @01:58PM (7 children)

      by Immerman (3985) on Wednesday June 01 2022, @01:58PM (#1249433)

      In a perfect world it certainly could be - as you point out there are many diseases that legitimately have significant racial components.

      However, back here in the real world the systems are being trained to recommend courses of treatment based on the historical courses of treatment recommended by doctors. And those recommendations are recognized to have a substantial and unjustified racial bias.

      Train a system that can recognize race, on a data set that contains an unjustified racial bias, and you can reasonably expect the system to perpetuate that bias.
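
      A fully synthetic toy of that failure mode (invented numbers, not the study's data): the true need for treatment depends only on severity, the historical labels under-treat one group, and a model trained on those labels reproduces the gap.

        # Toy: a model trained on biased historical recommendations
        # reproduces the bias, though ground truth is race-independent.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 20_000
        severity = rng.normal(size=n)       # race-independent ground truth
        group = rng.integers(0, 2, size=n)  # stand-in for detected race
        needs_treatment = severity > 0.5

        # Biased history: group 1 under-treated at identical severity.
        recommended = (severity - 0.8 * group) > 0.5

        X = np.column_stack([severity, group])
        model = LogisticRegression().fit(X, recommended)
        pred = model.predict(X)

        for g in (0, 1):
            m = (group == g) & needs_treatment
            print(f"group {g}: model treats {pred[m].mean():.0%} "
                  "of those who actually need treatment")
        # Group 1 is treated far less often, at identical severity.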

      • (Score: 2) by maxwell demon on Wednesday June 01 2022, @03:09PM (2 children)

        by maxwell demon (1608) Subscriber Badge on Wednesday June 01 2022, @03:09PM (#1249446) Journal

        However, back here in the real world the systems are being trained to recommend courses of treatment based on the historical courses of treatment recommended by doctors.

        Then maybe they should concentrate on changing that instead?

        --
        The Tao of math: The numbers you can count are not the real numbers.
        • (Score: 2) by Immerman on Wednesday June 01 2022, @10:13PM (1 child)

          by Immerman (3985) on Wednesday June 01 2022, @10:13PM (#1249559)

          They are trying (at least some of "they").

          One of the techniques they were pursuing was training AI to evaluate patients without giving it any racial data to bias it... Oops, looks like that's not actually possible.

          That pretty much just leaves eliminating racism in doctors - I'm sure we'll be just as successful there as we have in the rest of society.

          • (Score: 2) by maxwell demon on Thursday June 02 2022, @04:46AM

            by maxwell demon (1608) Subscriber Badge on Thursday June 02 2022, @04:46AM (#1249621) Journal

            One of the techniques they were pursuing was training AI to evaluate patients without giving it any racial data to bias it

            That's not what I meant. I should have been clearer; let me repeat my previous quote, emphasizing the part they should change:

            However, back here in the real world the systems are being trained to recommend courses of treatment based on _the historical courses of treatment recommended by doctors_.

            As long as you base your training on what doctors have recommended, instead of what worked, you'll inevitably train in the doctors' biases, not just the racial ones. The training has to be on objective data, not on past subjective interpretations of that data by doctors.

            --
            The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 2) by choose another one on Wednesday June 01 2022, @04:10PM

        by choose another one (515) on Wednesday June 01 2022, @04:10PM (#1249463)

        However, back here in the real world the systems are being trained to recommend courses of treatment based on the historical courses of treatment recommended by doctors.

        That would appear to go against everything in evidence-based medicine and be a reversion to opinion-based treatment, no?

        What they clearly should be doing is recommending courses of treatment based on the historical _outcomes_ measured in trials.

        If that is the flaw, however, it is not a training problem, is it?

      • (Score: 1) by khallow on Wednesday June 01 2022, @05:40PM (2 children)

        by khallow (3766) Subscriber Badge on Wednesday June 01 2022, @05:40PM (#1249487) Journal

        The problem here is that we don't understand how that distinction is being made. Even if this effect is filtered out, an even more subtle effect might get through our detection mechanisms. The ultimate flaw here is the biased training set.

        I suspect we'd have much better luck unbiasing the data as much as we can (keeping in mind that there is some medical relevance to ethnicity) than in trying to create an unbiased learning system from biased data directly.
        • (Score: 2) by Immerman on Wednesday June 01 2022, @10:33PM (1 child)

          by Immerman (3985) on Wednesday June 01 2022, @10:33PM (#1249562)

          Exactly.

          How exactly are you supposed to get that unbiased data set, when the real-world data sources (doctor evaluations) are biased?

          I suppose you could have doctors re-evaluate the (race agnostic to human eyes) data to recommend treatments - but that's going to get *really* expensive, and there would be zero follow-up data available to assess the purely hypothetical treatments.

          • (Score: 1) by khallow on Thursday June 02 2022, @11:31AM

            by khallow (3766) Subscriber Badge on Thursday June 02 2022, @11:31AM (#1249698) Journal

            How exactly are you supposed to get that unbiased data set, when the real word data sources (doctor evaluations) are biased?

            I don't have some great scheme in mind, but it might be possible to filter out some of that bias using the learning system itself. Suppose we treat medical treatment as a path (or more accurately a decision tree along which a path would be chosen). The training set is a description of a bunch of paths and outcomes that the learning system is consolidating into said decision tree. Bias would be present when a move along the training set path is category-dependent (race being only one possible category of what could cause such bias) and results in an outcome that doesn't help the patient.

            So the learning system would need to be able to discern those categories and health outcomes in this case. But once it can, I don't see why it couldn't not only detect and report such biases, but also filter them out of the decision-making tree to better optimize treatment strategy.

            An advantage is that it could find, report, and filter biases other than just race, such as the common overuse of antibiotics.
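
            A rough sketch of that audit idea, under the (big) assumption that treatment paths flatten into simple records of category, treatment and outcome; the record format and threshold are hypothetical:

              # Sketch: flag treatments whose assignment differs by
              # category while the benefit when treated does not.
              # Each record: (category, was_treated, outcome_improved).
              def audit(records, min_gap=0.10):
                  rate, benefit = {}, {}
                  for cat in {r[0] for r in records}:
                      rows = [r for r in records if r[0] == cat]
                      treated = [r for r in rows if r[1]]
                      rate[cat] = len(treated) / len(rows)
                      benefit[cat] = (sum(r[2] for r in treated) / len(treated)
                                      if treated else 0.0)
                  cats = sorted(rate)
                  for i, a in enumerate(cats):
                      for b in cats[i + 1:]:
                          # Assignment differs by category, benefit does not:
                          # a candidate bias to report and filter out.
                          if (abs(rate[a] - rate[b]) > min_gap and
                                  abs(benefit[a] - benefit[b]) < min_gap):
                              print(f"possible bias: {a} treated {rate[a]:.0%}"
                                    f" vs {b} {rate[b]:.0%}, similar benefit")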

  • (Score: -1, Flamebait) by Anonymous Coward on Wednesday June 01 2022, @12:59PM (2 children)

    by Anonymous Coward on Wednesday June 01 2022, @12:59PM (#1249419)

    First thing that came to mind reading this.

    How long before the news shows evidence of AI used on population to discriminate undesirables like the Uyghurs or Aryans' version of Niggers?

    • (Score: -1, Troll) by Anonymous Coward on Wednesday June 01 2022, @01:16PM (1 child)

      by Anonymous Coward on Wednesday June 01 2022, @01:16PM (#1249422)

      Never mind...looks like it has already been proven to be a serious problem long ago.
      https://www.weforum.org/agenda/2021/07/ai-machine-learning-bias-discrimination/ [weforum.org]

      Now we have to worry about Ageism?
      Crap, eventually even being White won't save you.
      Looks like the future really will be as portrayed in the movie, Cloud Atlas.

      • (Score: 0) by Anonymous Coward on Wednesday June 01 2022, @04:51PM

        by Anonymous Coward on Wednesday June 01 2022, @04:51PM (#1249474)

        We figured that's where you were going with the totally-not-racist trolling.

  • (Score: -1, Redundant) by Anonymous Coward on Wednesday June 01 2022, @03:17PM (3 children)

    by Anonymous Coward on Wednesday June 01 2022, @03:17PM (#1249449)

    I guess your eyes looking at skin color is racist... so you have to have a computer determine your race.

    • (Score: 1, Insightful) by Anonymous Coward on Wednesday June 01 2022, @03:59PM (2 children)

      by Anonymous Coward on Wednesday June 01 2022, @03:59PM (#1249460)

      Every human being has a collection of traits by means of which they can be classified into types. The fact that AI can be trained to perform this classification is not particularly surprising and not terribly interesting.

      That these classifications correlate to health conditions and might prove useful in treating patients is again unsurprising and potentially valuable.

      Taking those classifications and concluding that certain ones do not deserve a seat at the table, or that their lives matter somewhat less is the essence of racism, and is a social construct.

      • (Score: 1, Touché) by Anonymous Coward on Wednesday June 01 2022, @06:48PM (1 child)

        by Anonymous Coward on Wednesday June 01 2022, @06:48PM (#1249505)

        "Taking those classifications and concluding that certain ones do not deserve a seat at the table, or that their lives matter somewhat less is the essence of racism, and is a social construct."

        The question is who's table are we talking about, because non-whites demand a seat at Whitey's table. Access to Whites and the civilizations we build is not a human right. We don't owe you begging stray dogs shit.

        • (Score: 2, Touché) by Anonymous Coward on Wednesday June 01 2022, @07:04PM

          by Anonymous Coward on Wednesday June 01 2022, @07:04PM (#1249511)

          Q. E. D.

  • (Score: 2) by Rich on Wednesday June 01 2022, @04:00PM

    by Rich (945) on Wednesday June 01 2022, @04:00PM (#1249461) Journal

    Health insurance is a gamble with a lot of money in it. I wonder when we'll hear about the first cases of questionable large-scale data use for directing health insurance sales towards healthier people, and what that will look like. Straight abuse of proper medical data? Mining of genealogy data? Associating by area and name? AI results from data of seemingly little value (like in the article)? "Black" side channels for information and rewards for the sales agents?

    The point is that if one player puts this into use, they will gain an advantage good enough for capitalism to do the rest and force out the remaining players. So the others have to play, too. And with building floors full of mathematicians and planners, they all know.

  • (Score: -1, Troll) by Anonymous Coward on Wednesday June 01 2022, @06:53PM

    by Anonymous Coward on Wednesday June 01 2022, @06:53PM (#1249508)

    that the Marxist Jews and their Shabbos Goy say race is just a social construct.

  • (Score: 1, Insightful) by Anonymous Coward on Wednesday June 01 2022, @07:31PM (6 children)

    by Anonymous Coward on Wednesday June 01 2022, @07:31PM (#1249520)

    Surprised no one brought up the fault of trying to fit people into categories that don't exist.

    "Race" is essentially attempting to categorize people based on outward appearance. The world is filled with examples that defy such attempts.

    Take Condoleezza Rice. Her genetic markers indicate that about half of her recent ancestors were from Africa, 40% from Europe and 9% from East Asia.

    My grandson has recent ancestors (within four or fewer generations) from five continents.

    Their appearance reveals nothing about their haplogroup makeup.

    Africa itself has people with six times the genetic diversity of the rest of the world. Identifying someone as African provides little information of medical use. Same for much of the rest of the world.

    There are hundreds of millions of counterexamples to the concept of "race". The fact that it persists is due to the limited intelligence of humans and their need for group identification at any cost.

    • (Score: 1, Touché) by Anonymous Coward on Wednesday June 01 2022, @08:04PM (2 children)

      by Anonymous Coward on Wednesday June 01 2022, @08:04PM (#1249525)

      "Race" is essentially attempting to categorize people based on outward appearance.

      Doesn't the fine summary already disprove your assertion? That an AI was able to determine race based on the internal appearance of the subject?

      • (Score: 0) by Anonymous Coward on Wednesday June 01 2022, @08:28PM (1 child)

        by Anonymous Coward on Wednesday June 01 2022, @08:28PM (#1249532)

        Still trying to match to nonexistent categories.

        There are variations in human physiology. That is obvious. The state space is simply much larger and more sophisticated than the less than half a dozen "races".

        They are simply teaching AI to act like the typical human on the street.

        As a theoretical physicist, I'd be drummed out of the profession if I proposed a particle whose characteristics had billions of counterexamples.

        • (Score: 1) by khallow on Thursday June 02 2022, @11:43AM

          by khallow (3766) Subscriber Badge on Thursday June 02 2022, @11:43AM (#1249701) Journal

          The state space is simply much larger and more sophisticated than the less than half a dozen "races".

          Irrelevant. For example, US and EU businesses can be subject to significant fines and penalties if they're found to discriminate on the basis of certain categories - race is a common one. It doesn't matter to the legal system that there are plenty of other, much larger and more sophisticated ways the business could have discriminated (some even legal); doing so on the basis of race is enough to trigger those fines.

          And that brings up an important point. Much talk has been given here to deliberate discrimination on the basis of race. But there are other problems, such as accidental discrimination. A system that discriminates on the basis of race (or other protected categories) in ways that harm health outcomes can open the business up to the penalties above.

    • (Score: 4, Insightful) by Mykl on Thursday June 02 2022, @01:19AM

      by Mykl (1112) on Thursday June 02 2022, @01:19AM (#1249579)

      Oh geez, not this again.

      Yes, there are millions of people on the planet who don't fit neatly into a category of a particular 'race'. But there are billions of people who do. And we know that certain races have similar characteristics (someone earlier mentioned sickle-cell anemia being more prevalent in some races than others; personally, I am more likely to sunburn than the average human). Being able to categorize people, from a medical perspective, is very useful in providing better health outcomes.

      Historically (and even now) we may see biases against some races in terms of prioritizing treatment. That's a separate issue from being able to use AI to determine the characteristics most likely to affect someone based on their heritage and to recommend outcomes that optimise their health.

      To put the "there is no race because of exceptions" argument into perspective, let's look at the color spectrum. Most would agree that #0000FF is blue, and that #00FF00 is not blue. Taking it further, any color value where the 5th and 6th digits are notably higher than the 1st to 4th digits is going to look blue to most people. #005566 is much more subjective - some might say blue, others not. But the fact that #005566 exists does not mean that there is no such thing as blue! There are many color combos where the R, G and B values are approximately equal - we'd generally agree that they are shades of grey. Or would you prefer to say "There's no such thing as color"?
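
      The parent's blue-channel heuristic, as a literal few lines (a toy, with an arbitrary margin; the point is that fuzzy boundaries don't dissolve the category):

        # Toy version of the heuristic above: a hex color "looks blue"
        # when the blue channel clearly dominates red and green.
        def looks_blue(hex_color, margin=64):
            r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16)
                       for i in (0, 2, 4))
            if b - max(r, g) > margin:
                return "blue"
            if max(r, g) - b > margin:
                return "not blue"
            return "subjective"  # boundary cases like #005566

        for c in ("#0000FF", "#00FF00", "#005566"):
            print(c, "->", looks_blue(c))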

    • (Score: 0) by Anonymous Coward on Thursday June 02 2022, @06:36PM

      by Anonymous Coward on Thursday June 02 2022, @06:36PM (#1249883)

      Race is way more significant that outer appearances, that evidently, can't be attributed to a continent by retarded, brainwashed slaves like you.

    • (Score: 0) by Anonymous Coward on Thursday June 02 2022, @09:43PM

      by Anonymous Coward on Thursday June 02 2022, @09:43PM (#1249967)

      Rather than race, I find it easier to categorize people by smell. Indians for example are a distinct subgroup from the cheese-eating surrender monkey French. But let's not talk about the British.

  • (Score: 0) by Anonymous Coward on Thursday June 02 2022, @06:12PM

    by Anonymous Coward on Thursday June 02 2022, @06:12PM (#1249865)

    SHUT
    IT
    DOWN
