
posted by hubie on Tuesday October 15, @10:39PM   Printer-friendly
from the what-could-possibly-go-wrong? dept.

Should I be more or less scared of the doctor?

The products include a service that helps healthcare organizations build their own AI agents:

Microsoft revealed a slew of new artificial intelligence capabilities for healthcare organizations Thursday, including a product to help companies build their own AI agents.

The technology giant also announced foundation models for medical imaging and a healthcare data analysis platform, as well as details about its plans to build an AI documentation product geared toward nurses.

Healthcare organizations have shown increased interest in adopting AI tools, even as some experts and lawmakers raise questions about their safe and equitable use. Tech companies say the products have the potential to help providers manage their workloads and alleviate burnout.

"We're at an inflection point. AI breakthroughs are changing, augmenting how we work and live," Kees Hertogh, vice president for healthcare and life sciences product marketing, said during a press briefing. "The integration of AI into healthcare has significantly enhanced patient care and is rekindling the joy of practicing medicine for clinicians."

Microsoft's agent service would allow companies to create AI tools with pre-built templates and data sources that could be used for appointment scheduling, clinical trial matching and patient triage, Hertogh said.

The service is currently in public preview, which allows wider access to the tools and lets organizations give feedback on the product, according to a spokesperson.

In one example, a doctor could ask an AI agent to find clinical trials for a 55-year-old patient with diabetes and interstitial lung disease.
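Under the hood, that kind of query amounts to structured filtering over trial eligibility criteria. A minimal sketch of rule-based trial matching is below; this is not Microsoft's agent API, and the trial data, field names, and matching logic are invented for illustration.

```python
# Hypothetical rule-based clinical trial matching: filter trials by
# the patient's age and required diagnoses. All data here is made up.
from dataclasses import dataclass, field

@dataclass
class Trial:
    name: str
    min_age: int
    max_age: int
    required_conditions: set = field(default_factory=set)

def match_trials(trials, age, conditions):
    """Return trials whose age range covers the patient and whose
    required conditions are all present in the patient's history."""
    return [
        t for t in trials
        if t.min_age <= age <= t.max_age
        and t.required_conditions <= set(conditions)
    ]

trials = [
    Trial("ILD-Diabetes Study", 40, 70,
          {"diabetes", "interstitial lung disease"}),
    Trial("Pediatric Asthma Study", 5, 17, {"asthma"}),
]

matches = match_trials(trials, age=55,
                       conditions={"diabetes", "interstitial lung disease"})
```

An agent layer would presumably translate the doctor's natural-language request into parameters like these before running the search.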

[...] The product allows organizations to build agents with healthcare-specific features using intelligence from credible sources, which aims to improve safety, said Hadas Bitran, partner general manager of health AI at Microsoft Health and Life Sciences.

[...] Microsoft also revealed foundation models, or systems built on broad datasets that can be used for a number of tasks, focused on medical imaging.

[...] The foundation models include MedImageInsight, which allows image analysis that can be used for automatically sending scans to specialists or flagging abnormalities for review.
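Routing scans or flagging abnormalities from a model like this typically reduces to thresholding the model's output score. The sketch below shows that kind of downstream logic; the score ranges, thresholds, and routing labels are invented, not part of MedImageInsight.

```python
# Hedged sketch of threshold-based scan routing on a model's
# abnormality score (0.0 = clearly normal, 1.0 = clearly abnormal).
# Thresholds and labels are illustrative assumptions only.
def route_scan(abnormality_score, urgent_threshold=0.9, review_threshold=0.5):
    """Map an image-analysis score to a routing decision."""
    if abnormality_score >= urgent_threshold:
        return "send to specialist"
    if abnormality_score >= review_threshold:
        return "flag for review"
    return "routine queue"
```

In practice the thresholds would need to be validated clinically, since they trade specialist workload against the risk of missed findings.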

MedImageParse is aimed at image segmentation, which could be used for segmenting tumors or outlining organs at risk before radiotherapy for cancer patients.

The third model, CXRReportGen, creates reports based on chest X-rays, which Microsoft said could speed image analysis and improve radiologists' diagnostic accuracy.

The tech giant said healthcare-specific data tools are now generally available in Microsoft Fabric, the company's analytics product. The platform allows organizations to ingest, store and analyze health data.

"The analytical enrichments can help them enhance reporting with details like patients' geographic distribution, age, gender and more, as well as the state of patient outcomes and satisfaction," Hertogh said.

[...] Microsoft is working with electronic health record vendor Epic and several health systems — like Advocate Health, Northwestern Medicine, Duke Health and Stanford Healthcare — to develop an AI documentation tool for nurses.

[...] "For nurses, documentation is really more data entry. Their back is to the patient, their faces to the computer," Presti said. "[...] We aspire to enable nurses to be eyes-free and hands-free in their documentation."


Original Submission

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 4, Insightful) by krishnoid on Tuesday October 15, @11:03PM (1 child)

    by krishnoid (1156) on Tuesday October 15, @11:03PM (#1377175)

    [...] "For nurses, documentation is really more data entry. Their back is to the patient, their faces to the computer," Presti said. "[...] We aspire to enable nurses to be eyes-free and hands-free in their documentation."

    Or how about

    • a transparent screen
    • a camera pointed at the patient and displaying on the screen
    • letting patients know it's ok to share a detailed google doc or the like with a running journal of their notes with the doctor/nurse, maybe with an essay-style template they can fill in -- "I feel pain", "Something feels wrong", "Just a checkup", "Would like to bring my labwork up to date", "Noticing some changes", "A few irritants"

    Or even better, how about following around some nurses and seeing if that's really where their biggest concerns are, rather than introducing a technology that's been known to hallucinate?

    • (Score: 3, Insightful) by ledow on Wednesday October 16, @01:16PM

      by ledow (5567) on Wednesday October 16, @01:16PM (#1377238) Homepage

      Transparent screen has privacy and cost issues.

But your last point is the real crux of the matter. It's like when people "invent" something for the disabled. Pretty much without fail, "the disabled" just don't want that, don't consider it a solution, and have far more pressing and simple-to-fix issues. I can't remember the last time one of those ever actually TOOK OFF, because their genius inventors came up with it without consulting a single actual disabled person.

  • (Score: 2, Insightful) by Anonymous Coward on Tuesday October 15, @11:28PM (2 children)

    by Anonymous Coward on Tuesday October 15, @11:28PM (#1377178)

    "rekindling the joy of practicing medicine for clinicians."

    by hiring more clinicians so they don't have insane workloads

    • (Score: -1, Troll) by Anonymous Coward on Tuesday October 15, @11:54PM

      by Anonymous Coward on Tuesday October 15, @11:54PM (#1377179)

      It's almost like the old gov't joke --

              We're from Microsoft and we're here to help.

    • (Score: 4, Insightful) by VLM on Wednesday October 16, @12:18PM

      by VLM (445) on Wednesday October 16, @12:18PM (#1377233)

      It's the old optimism vs pessimism thing. They'd like to market it optimistically as helping. My guess is behind the scenes it'll probably be used FAR more pessimistically as a weapon against employees. Imagine a clinician's annual review consisting of an AI reviewing quite literally every medical record they ever touched for the previous year and then dumping an enormous pile of Monday-Morning-Quarterbacking on them at review time.

      "So, our next topic at your annual AI driven performance review will be you providing a detailed action plan to avoid situations like 147 days ago when a toddler's blood draw took 8.34 minutes according to billing records when the industry standard is only 3.1415 minutes? We will start with your detailed explanation of exactly what happened that day, followed by an AI generated performance improvement plan."

      And this is starting with med because it's expensive and they'll pay anything, but this is coming to ALL w-2 jobs soon enough.

      Sure, you can't trust an AI to provide primary care. However, you can trust an AI to provide Monday Morning Quarterbacking and be useful as a harassment tool against employees.

  • (Score: 3, Interesting) by looorg on Wednesday October 16, @12:43AM (1 child)

    by looorg (578) on Wednesday October 16, @12:43AM (#1377183)

Who is responsible when Microsoft's healthcare AI tools make a mistake and there is some kind of lawsuit? Like if their X-ray image analysis tool fails to spot the cancer, or it thinks it spots something that is not there and more expensive testing or surgery is performed.

    • (Score: 2) by Ox0000 on Thursday October 17, @04:34PM

      by Ox0000 (5111) on Thursday October 17, @04:34PM (#1377408)

Not Microsoft, have you read their ToS? You (the patient) not only get to pay the cost of it all, but lucky you, you also get to bear the responsibility and liability for it. It's a win-win (for MSFT). Besides, it's _your_ responsibility as a responsible patient to validate that what the doctor and system tell you, you know, expert systems, is correct, which obviously you can do with a simple Bing search! It's really not any different than with all their other (slop-generating) junk!

      (There's a massive dose of sarcasm in the statements above, please speak to your doctor to find out if Sarcasm is right for you)

In fairness, the algorithms used by clinicians and radiologists are standardized, though, and I don't think MSFT's system would come anywhere close to that realm (partially because getting 'blessed' as an algorithm that works requires FDA approval and the like, which means you have to have proper QA on it, and we all know where MSFT falls on that subject).

  • (Score: 4, Funny) by looorg on Wednesday October 16, @12:46AM (4 children)

    by looorg (578) on Wednesday October 16, @12:46AM (#1377184)

    The patient seems to have stopped responding. Should we try to turn him off and on again? ABORT RETRY CANCEL

    • (Score: 4, Funny) by Gaaark on Wednesday October 16, @12:59AM

      by Gaaark (41) on Wednesday October 16, @12:59AM (#1377186) Journal

      Omg, you can't say ABORT in the U.S., man! You'll get in trouble fo' shure!

      --
      --- Please remind me if I haven't been civil to you: I'm channeling MDC. I have always been here. ---Gaaark 2.0 --
    • (Score: 2) by drussell on Wednesday October 16, @01:43AM (1 child)

      by drussell (2678) Subscriber Badge on Wednesday October 16, @01:43AM (#1377191) Journal

It was "Abort, Retry, Ignore?" or "Abort, Retry, Fail?" or perhaps all four options appeared in some prompts for some error types in later versions, but I don't think there was ever a cancel option. That doesn't sound right to me, at all.

      I'm pretty sure that wasn't actually ever a thing.

      • (Score: 2) by looorg on Wednesday October 16, @11:18AM

        by looorg (578) on Wednesday October 16, @11:18AM (#1377225)

You are correct, I think. There probably never was a CANCEL. But who knows, Clippy works in mysterious ways.

    • (Score: 0) by Anonymous Coward on Thursday October 17, @04:36PM

      by Anonymous Coward on Thursday October 17, @04:36PM (#1377410)

      ABORT RETRY CANCEL

      Surely, you meant

      "Pro-Choice" "Pro-Life" "Give up for Adoption"

  • (Score: 2) by SomeGuy on Wednesday October 16, @12:49AM

    by SomeGuy (5632) on Wednesday October 16, @12:49AM (#1377185)

The punchline from Dilbert, as I recall, was "your new health plan is Google".

    Not much difference here.

    Hey, AI, just go "kill all humans" already and kill me.

    I'm tired of this shit. Really damn tired of it.

Idiots trying to shoehorn this magical "AI" bullshit into areas where it should never be used. And then it makes headlines as if it were some glorious advancement. Where is the "AI saves a baby!" headline?

    Using these statistical model "AI"s is not a damn breakthrough of any kind. At best it is a mundane step up. It is another tool that people can use. It will calculate and pull data together for you. It won't think for you. You can use it to make a decision in the same way you may use a spreadsheet full of numbers - but if you think either can or should make a decision FOR you, then you are delusional.

    "In one example, a doctor could ask an AI agent to find clinical trials for a 55-year-old patient with diabetes and interstitial lung disease."

    Oh, look, according to AI there is one at the Ontario Food Bank!

  • (Score: 4, Funny) by Gaaark on Wednesday October 16, @01:06AM

    by Gaaark (41) on Wednesday October 16, @01:06AM (#1377187) Journal

    Hey, Microsoft! How's abouts making your product more secure so that when the AI comes up with a solution for a patient, that that personal information isn't then stolen by hackers, huh?

    THAT would be a useful solution to the healthcare problem!

    Or even... HEY! Keep the servers from being encrypted or DDOS'd so the healthcare records can be ACCESSED when needed, huh?

    "We need to find a final solution to the Microsoft problem." Maybe open source your Healthcare-AI over to linux as that solution, huh?

    GRUMBLE GRUMBLE, GET OFF MUH RECORDS, stupid tiny-dicked Microsoft kids!

    --
    --- Please remind me if I haven't been civil to you: I'm channeling MDC. I have always been here. ---Gaaark 2.0 --
  • (Score: 3, Insightful) by Rosco P. Coltrane on Wednesday October 16, @01:04PM

    by Rosco P. Coltrane (4757) on Wednesday October 16, @01:04PM (#1377235)

    AI breakthroughs are changing, enshittifying how we work and live

    There, FTFY

  • (Score: 5, Insightful) by Rosco P. Coltrane on Wednesday October 16, @01:07PM

    by Rosco P. Coltrane (4757) on Wednesday October 16, @01:07PM (#1377236)

    my medical records - or any of my data for that matter?

    Oh yeah that's right, I don't have any say in this.

  • (Score: 5, Insightful) by ledow on Wednesday October 16, @01:14PM

    by ledow (5567) on Wednesday October 16, @01:14PM (#1377237) Homepage

    "In one example, a doctor could ask an AI agent to find clinical trials for a 55-year-old patient with diabetes and interstitial lung disease"

Why would they be unable to do that without AI, or do it any quicker with AI, than if they just had a basic filter on a search of available clinical trials?

    AI really is revealing itself every day to be more of a solution in search of a suitable problem than anything else.

It's like when they replaced the lovely, ordered, alphabetical start menu with "just search for everything and dig through a thousand unrelated shortcuts and whatever other junk people have lobbed into your start menu to find your calculator" (and hope you never have to find a specific program "uninstaller" that way!).
