posted by mrpg on Tuesday June 19 2018, @06:22AM   Printer-friendly
from the scary dept.

Google is training machines to predict when a patient will die

A woman with late-stage breast cancer went to a hospital, fluids flooding her lungs. She saw two doctors and got a radiology scan. The hospital's computers read her vital signs and estimated a 9.3% chance she would die during her stay.

Then came Google's turn. A new type of algorithm created by the company read up on the woman — 175,639 data points — and rendered its assessment of her death risk: 19.9%. She died in a matter of days.

The harrowing account of the unidentified woman's death was published by Google in May in research highlighting the healthcare potential of neural networks, a form of artificial intelligence software that is particularly good at using data to automatically learn and improve. Google had created a tool that could forecast a host of patient outcomes, including how long people may stay in hospitals, their odds of readmission and chances they will soon die.

What impressed medical experts most was Google's ability to sift through data previously out of reach: notes buried in PDFs or scribbled on old charts. The neural net gobbled up all this unruly information then spat out predictions. And it did so far faster and more accurately than existing techniques. Google's system even showed which records led it to conclusions.

Scalable and accurate deep learning with electronic health records (open, DOI: 10.1038/s41746-018-0029-1) (DX)


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Tuesday June 19 2018, @06:43AM (1 child)

    by Anonymous Coward on Tuesday June 19 2018, @06:43AM (#694861)

    This is, I suppose, not too different in principle from the way insurance companies all over the world calculate the premiums they ask people to pay. A life insurance policy is priced based on how likely it is that the policy holder is going to die within a certain amount of time.

    • (Score: 0) by Anonymous Coward on Tuesday June 19 2018, @08:01AM

      by Anonymous Coward on Tuesday June 19 2018, @08:01AM (#694870)

      Life insurance businesses are really tricky. Actually, I am surprised anyone buys into it, given all the trickytalk.

      Here's one on insurance hawked by a well known TV game show host. [ripoffreport.com]

Ya gotta read the comments to flesh out what really happened. Apparently one old woman, about ready to die, tried to cash in and make money off of an insurance policy, and did not read the fine print about limited benefits in the first two years - basically they return your premiums. But read on, and other commenters warn of little businesstalk in there about it only covering accidental death.

Another says not. So I went to the main corporate insurance site to see for myself.... I flat could not see one way or the other whether it just covered death from accident, or death from any cause. The site was full of pictures of smiling people and hopeful words of assurance, but I could find very little info on what I was really buying. It just routed me to a form where I had to tender my real name, address, and other things I'd just as soon not put in their database.

      I was left with the distinct impression all they wanted was my signature on paperwork agreeing a monthly deduction from my bank account, with no intention of ever paying anything back, which is the reason why all the businesstalk, and why they flash all that fine print on my TV that I can't read. Do that kind of crap with a business and they will haul me into court and call it "fraud". They can scam via fine print, low contrast print, motormouth-in-the-background, whatever as they see fit - cuz they are a business and I am not. I am a mark. aka. a customer.

I've had several insurance companies after me... but every time they open their mouth or print any text, it's so full of businesstalk that when I read it, I am left with the distinct impression that it requires a helluva lot of faith that the company will pay my beneficiaries anything. So many outs. Yes, I have taken their stuff over to where I work, where I have a big inspection microscope. It's really discouraging to see what businesses put in fine print.

      When you see a gray stripe in anything a business gives you, it oughta raise a huge red flag. In my case, it was their keys to the kingdom to get out of damn near anything they claimed in the large print on the sales sheet. Can I trust any business I catch trying to pull fast ones like this on me? Or should I consider them a thief, and keep them under watchful eye until they are out of sight, like a storekeeper watching a shoplifter?

      I find lots of big numbers, but no citation of any obligation they have in the matter.... the only obligations to anyone seem to be those directed at me for timely premium payments and penalties levied onto me for failure of timely remitting.

They are quite the Johnny-on-the-spot when it comes to accepting premium cheques, knowing all along I won't be anywhere around when what I paid for is tendered.

Don't leave my loved ones paying my debts? Ehhh... pay off my friggen debt. I can't believe for an instant someone else is gonna do it. None of those companies is in business to pay off my debts. They are in business to sweet-talk me out of anything they can get out of me! All I am apt to get is a hand-shake with someone dressed in a suit, and another envelope to remit payment to someone else every month.

  • (Score: 3, Insightful) by c0lo on Tuesday June 19 2018, @07:23AM (5 children)

    by c0lo (156) Subscriber Badge on Tuesday June 19 2018, @07:23AM (#694867) Journal

    ...The hospital's computers read her vital signs and estimated a 9.3% chance she would die during her stay.

Then came Google's turn. A new type of algorithm created by the company read up on the woman — 175,639 data points — and rendered its assessment of her death risk: 19.9%. She died in a matter of days.

So, both algos estimated the woman's chances of leaving the hospital alive (that is, not dying in the hospital) at over 80% - that is, 4 chances out of 5. And that woman actually died.

I know a single case doesn't make a statistically representative data set, so... why exactly should we be impressed:
- because the woman died despite an estimated 80% chance to live (so there is a chance both algos are shit, with the non-Google one twice as shitty)? *or*
- because the PR spin uses a single data point to glorify a Google algo?

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 0) by Anonymous Coward on Tuesday June 19 2018, @08:01AM (1 child)

      by Anonymous Coward on Tuesday June 19 2018, @08:01AM (#694871)

In 2010 there were 35.1 million inpatient stays in the U.S., with 715,000 deaths (source [cdc.gov]), so the chance of dying during an average stay was about 2%. Hence both algorithms predicted an elevated chance of death for this patient.

      Think of it as a weather forecast. If the forecast calls for a 20% chance of rain, do you bring an umbrella? Or only when it says 50% or higher?
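As a rough sketch of the arithmetic above, using the CDC figures quoted in the parent comment:

```python
# Back-of-envelope check: baseline in-hospital mortality from the 2010
# US figures quoted above (35.1M inpatient stays, 715,000 deaths).
stays = 35_100_000
deaths = 715_000
base_rate = deaths / stays
print(f"baseline in-hospital mortality: {base_rate:.2%}")  # roughly 2%

# Relative risk implied by each model's estimate for this patient:
for name, p in [("hospital model", 0.093), ("Google model", 0.199)]:
    print(f"{name}: {p:.1%} predicted risk, {p / base_rate:.1f}x baseline")
```

Both models flagged the patient as several times riskier than the average admission, even though neither put the probability of death above 50%.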

      • (Score: 2) by takyon on Tuesday June 19 2018, @08:13AM

        by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday June 19 2018, @08:13AM (#694875) Journal

        If you have some simple details about patients, then the predictions may seem a lot less impressive.

        For example, the age of the patient. The older, the more likely they are to die. Has a history of heart failure: simple true/false with no additional details. If true, boost that death percentage. Oxygen saturation. If it is dipping below 90%, that is not good news for the patient. And of course, the condition the patient is being treated for. Some conditions are more dire than others.

        Google does it with 175,639 data points. Can it be done almost as well with just 5 or 10? Well, the study is open access.
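The parent's point can be sketched as a toy logistic score over a handful of features. The weights, intercept, and feature set here are invented purely for illustration and have no relation to Google's model or the paper:

```python
import math

# Toy risk score: a hand-rolled logistic model over a few simple
# features. All coefficients are made up for the example.
def toy_death_risk(age, heart_failure, spo2, condition_weight):
    z = (
        -6.0                          # intercept: low baseline risk
        + 0.04 * age                  # older -> higher risk
        + 1.2 * heart_failure         # True/False history flag
        + 0.15 * max(0, 90 - spo2)    # penalty once SpO2 dips below 90%
        + condition_weight            # severity of the admitting condition
    )
    return 1 / (1 + math.exp(-z))     # logistic link -> probability

print(f"{toy_death_risk(82, True, 86, 1.5):.1%}")
```

Whether 175,639 data points buy much over a handful of strong predictors like these is exactly the question the open-access paper lets you check.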

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by PiMuNu on Tuesday June 19 2018, @10:22AM (2 children)

      by PiMuNu (3823) on Tuesday June 19 2018, @10:22AM (#694901)

      > why exactly should be impressed:

      Because... "AI" ... and Google.

      • (Score: 2) by c0lo on Tuesday June 19 2018, @10:36AM

        by c0lo (156) Subscriber Badge on Tuesday June 19 2018, @10:36AM (#694905) Journal

        ~* shudders *~

      • (Score: 2) by Gaaark on Tuesday June 19 2018, @11:58AM

        by Gaaark (41) on Tuesday June 19 2018, @11:58AM (#694938) Journal

        "> why exactly should be impressed scared:

        Because... "AI" ... and Google."

        FTFY.
You're welcome. (I'm scared too.)

        --
        --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
  • (Score: 4, Insightful) by KritonK on Tuesday June 19 2018, @07:26AM (1 child)

    by KritonK (465) on Tuesday June 19 2018, @07:26AM (#694868)

    So, we have two algorithms that both predicted that the woman would most likely not die during her stay at the hospital, and we consider Google's algorithm better, because it failed in its prediction by a slightly less wide margin?

Sounds silly to put our faith in an algorithm just because its prediction was a few percent closer to the actual outcome, but still wide of the mark. It wouldn't be so silly, however, if that algorithm had predicted a >50% probability of death. Would the patient have been refused treatment, which might have worked, because the algorithm had predicted the patient's death?

    • (Score: 1) by khallow on Tuesday June 19 2018, @01:06PM

      by khallow (3766) Subscriber Badge on Tuesday June 19 2018, @01:06PM (#694961) Journal

      and we consider Google's algorithm better, because it failed in its prediction by a slightly less wide margin?

Yes. One data point doesn't prove anything in itself, but large numbers of cases where similar shifts in odds occurred (without corresponding shifts the other way) would be a good reason to consider the algorithm better.
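One standard way to make "better over many cases" precise is a proper scoring rule such as the Brier score (mean squared error between predicted probability and the 0/1 outcome; lower is better). The numbers below are made up purely to illustrate the comparison:

```python
# Compare two sets of probability forecasts against observed outcomes
# using the Brier score. All data here is invented for illustration.
def brier(preds, outcomes):
    return sum((p - y) ** 2 for p, y in zip(preds, outcomes)) / len(preds)

outcomes = [1, 0, 0, 1, 0, 0, 0, 1]          # 1 = patient died
hospital = [0.093, 0.05, 0.10, 0.20, 0.02, 0.30, 0.08, 0.15]
google   = [0.199, 0.03, 0.07, 0.45, 0.01, 0.20, 0.05, 0.40]

print(f"hospital: {brier(hospital, outcomes):.3f}")
print(f"google:   {brier(google, outcomes):.3f}")
```

Over many admissions, the model with the consistently lower score is the better-calibrated forecaster, even though no single case can settle it.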

  • (Score: 2) by aristarchus on Tuesday June 19 2018, @07:43AM (2 children)

    by aristarchus (2645) on Tuesday June 19 2018, @07:43AM (#694869) Journal

Fooled you again, Google, just as I fooled the Doges of Venice, the Lloyds of London, and the bastard of AIG! Some of us are here just to screw your calculations, and there are more of us every day. Do you think that is actual data you are getting from Facebook and Cambridge Anuslingus? Oh, how easily the mighty do fall, since they are so stupid! And, "This is why we cannot have White Supremacy". Fuching Actuarials.

    • (Score: 2) by realDonaldTrump on Tuesday June 19 2018, @09:09AM

      by realDonaldTrump (6614) on Tuesday June 19 2018, @09:09AM (#694888) Homepage Journal

      You didn't fool the algorithm. And you wouldn't fool Jimmy the Greek. Although, possibly you've outlived him. I'll tell you, he knew odds. And he got his odds down to a "T." That guy was doing tremendously until he said he was sorry. A credit to his race!!!!

    • (Score: 2) by Gaaark on Tuesday June 19 2018, @12:08PM

      by Gaaark (41) on Tuesday June 19 2018, @12:08PM (#694942) Journal

      You didn't fool them, Ari: they're watching you, oh yes, they're watching.

      OO >>

  • (Score: 5, Interesting) by tfried on Tuesday June 19 2018, @08:06AM (6 children)

    by tfried (5534) on Tuesday June 19 2018, @08:06AM (#694872)

To look at this from a positive angle, an accurate(*) prediction of the chance of survival might help make physicians aware of risks they had been overlooking. For instance, if the algo predicts something like a 20% chance of death for a patient that looked like an "easy" case, then perhaps the patient could be put under closer surveillance, and actually have a much better chance of survival than predicted. However, by the same token it is not hard to see how a predicted 98% risk of death could easily turn into a 100% self-fulfilling spell of doom for the patient.

    In both cases, however, the prediction of the algorithm can be expected to affect the outcome. So what exactly is the algorithm predicting in the first place, and how should we estimate (quantitatively / morally) the actual value of such predictions?

What seems a more interesting aspect to me is that the algorithm is supposedly capable of pointing out the specific "important" data points that affect the chances of survival. This could help physicians see the important bits in a jungle of data. Or it could lead to physicians relying (or being pushed to rely) on the accuracy of the algorithm, in place of their own reading. But to relate to the reported anecdote: would the result(s) of the algorithm have led to different treatment that might have saved the woman's life?

(*): Actually, only 0% or 100% could be accurate predictions for a single case, but let's not be pedantic.

    • (Score: 1) by tfried on Tuesday June 19 2018, @08:16AM

      by tfried (5534) on Tuesday June 19 2018, @08:16AM (#694878)

(*): Actually, only 0% or 100% could be accurate predictions for a single case, but let's not be pedantic.

      Outside of the quantum cat clinic, that is...

    • (Score: 1) by khallow on Tuesday June 19 2018, @01:35PM (2 children)

      by khallow (3766) Subscriber Badge on Tuesday June 19 2018, @01:35PM (#694973) Journal

Actually, only 0% or 100% could be accurate predictions for a single case, but let's not be pedantic.

That would be wrong. If one had predicted, in the case mentioned in the story, a 100% chance that the patient would survive, then one would have been as wrong as possible in that single case. Thus, one needs to know the actual outcome beforehand in order to be "accurate" in your terms. They don't, and thus can't achieve that "accuracy" ever. And that's before we even consider that such cases probably don't have a predetermined outcome.

      • (Score: 1) by tfried on Tuesday June 19 2018, @02:09PM (1 child)

        by tfried (5534) on Tuesday June 19 2018, @02:09PM (#694987)

Well, as admitted from the start, this is excessive pedantry, but again, with emphasis added:

        only 0% or 100% could be accurate predictions for a single case

        Or - of course! - they could be wrong. But certainly the woman either comes out dead or alive but not 80.1% alive. Does a 19.9% risk actually mean that the algorithm "predicts" the woman will be dead or alive? Well, neither, of course, and so we cannot even say whether the algorithm was "correct" for any single case.

        And that's before we even consider that such cases probably don't have a predetermined outcome

        Yes, that's the more substantial point.

        • (Score: 1) by khallow on Tuesday June 19 2018, @02:16PM

          by khallow (3766) Subscriber Badge on Tuesday June 19 2018, @02:16PM (#694990) Journal

          Does a 19.9% risk actually mean that the algorithm "predicts" the woman will be dead or alive?

          Yes. 19.9% chance she dies. That's a typical prediction for such things - go through possible outcomes and assign a probability to each outcome.

          Well, neither, of course, and so we cannot even say whether the algorithm was "correct" for any single case.

          Which is quite different from the algorithm being accurate or not.

    • (Score: 2) by LoRdTAW on Tuesday June 19 2018, @03:06PM

      by LoRdTAW (3755) on Tuesday June 19 2018, @03:06PM (#695038) Journal

      However, by the same token it is not hard to see at all, how a predicted 98% risk of death would easily turn into a 100% self-fulfilling spell of doom for patient.

Doctor: I'm sorry, but your husband has a 95% chance of death. We have a few options we can try, but your insurance company informed us that your husband's survival chance is too low to cover, unless you can afford the $200,000 treatment option.
Family: Are you fucking kidding? We can't afford that!
Doctor: Again, I'm sorry. Nurse, will you please transfer Mr. Smith to hospice.

    • (Score: 2) by darkfeline on Tuesday June 19 2018, @06:17PM

      by darkfeline (1030) on Tuesday June 19 2018, @06:17PM (#695184) Homepage

      It's only self-fulfilling as far as health insurance is willing to pay out. From the doctors I've met and heard about, they're professionals; they're going to fight as hard as possible to save the patient even if the 99.99% accuracy AI predicts 99.99% chance of death. It's not so different from mathematicians or programmers; tell them it's impossible and by god they're going to try to prove you wrong.

However, this could be useful for triage during disasters. When there simply aren't enough resources to go around, it's useful to be able to quickly evaluate "this person is 99% going to die in an hour", "this person is 90% going to die in an hour, but only 5% if they get help now", "this person is 60% going to die in five days without help", "this person is 0.5% going to die".
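The triage idea above can be sketched by ranking patients on risk *reduction* rather than raw risk. The "with help" probabilities below are assumptions added for the example, loosely based on the parent comment's figures:

```python
# Triage sketch: prioritize by how much intervention changes predicted
# risk, not by raw risk alone. All probabilities are made up.
patients = [
    ("A", 0.99, 0.99),    # (name, risk without help, risk if helped now)
    ("B", 0.90, 0.05),
    ("C", 0.60, 0.10),
    ("D", 0.005, 0.005),
]

# Biggest expected benefit from help comes first.
by_benefit = sorted(patients, key=lambda p: p[1] - p[2], reverse=True)
for name, without, with_help in by_benefit:
    print(name, f"benefit of treatment: {without - with_help:.0%}")
```

Under this ranking the 90%-risk patient who can be saved outranks the 99%-risk patient who cannot, which is exactly the distinction the comment is drawing.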

      --
      Join the SDF Public Access UNIX System today!
  • (Score: 0) by Anonymous Coward on Tuesday June 19 2018, @08:18AM

    by Anonymous Coward on Tuesday June 19 2018, @08:18AM (#694879)

    Kinda reminds me of an episode of Star Trek: Voyager where the doctor was faced with being subjugated to a computer ( the allocator ) which was rationing drugs and care completely by algorithms of probabilities and usefulness to society.

  • (Score: 2) by PiMuNu on Tuesday June 19 2018, @10:25AM

    by PiMuNu (3823) on Tuesday June 19 2018, @10:25AM (#694902)

You might be interested to know that there has been a big scandal in the UK about Google using patient data as a training set without proper consent.

  • (Score: 2) by GreatAuntAnesthesia on Tuesday June 19 2018, @10:56AM (1 child)

    by GreatAuntAnesthesia (3275) on Tuesday June 19 2018, @10:56AM (#694915) Journal

Already been done, in one of the best IT Crowd episodes ever: http://theitcrowd.wikia.com/wiki/Return_of_the_Golden_Child [wikia.com]

    (They really missed a trick never doing an IT crowd / Big Bang Theory crossover episode).

    • (Score: 2) by Gaaark on Tuesday June 19 2018, @12:11PM

      by Gaaark (41) on Tuesday June 19 2018, @12:11PM (#694943) Journal

      FAAAAATHAH!

  • (Score: 2) by opinionated_science on Tuesday June 19 2018, @01:28PM

    by opinionated_science (4031) on Tuesday June 19 2018, @01:28PM (#694968)

    view.

An algorithm that predicts when someone may die can easily be modified to remove services *because* someone may die, and factor in cost.

This may already be done, so don't think there's a *chance* they will not use computers to justify this policy.
     
