
posted by janrinok on Sunday January 21 2018, @11:38AM
from the it-all-adds-up dept.

Researchers developed a new mathematical tool to validate and improve methods used by medical professionals to interpret results from clinical genetic tests. The work was published this month in Genetics in Medicine.

The research was led by Sean Tavtigian, PhD, a cancer researcher at Huntsman Cancer Institute (HCI) and professor of oncological sciences at the University of Utah, in collaboration with genetics experts from around the United States.

Tavtigian utilized Bayes' Theorem, a math equation first published in 1763, as the basis of a computational tool he and the team developed to assess the rigor of the current, widely-used approach to evaluate the results of a clinical genetic test.

Clinical genetic testing is used in a variety of medical fields, including cancer care, obstetrics, and neurosciences, among others. Results of a genetic test may help to provide a definitive medical diagnosis, or assess the likelihood of a person to develop a particular disease before symptoms appear. The range of approaches employed to provide health care based on the results of the test can vary significantly. Patients may be at negligible risk for disease with no medical management required, or they may pursue costly, invasive medical treatment in an effort to stave off disease or manage and minimize symptoms.

With millions of possible changes in the genes that control health in any given person, the challenge of discerning which gene changes are likely to cause disease is vast. In recent years, human genetics researchers have identified thousands of Variants of Uncertain Significance (VUS): genetic changes whose impact on a person's health is not yet understood. "A large fraction of VUS are believed to be generally harmless," describes Tavtigian. "One only wants to change the medical management of patients when the genetic testing identifies a variant that is likely to be disease-causing. Against a huge population of harmless VUS, how do you identify the small subset that are likely to require medical management?"
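The odds form of Bayes' theorem makes the idea concrete: start with a prior probability that a variant is pathogenic, multiply the prior odds by a likelihood ratio summarizing the evidence, and convert back to a probability. A minimal sketch (all numbers are invented for illustration; the paper's actual framework, which maps ACMG/AMP evidence categories onto combined odds of pathogenicity, is more elaborate):

```python
def posterior_pathogenic(prior, likelihood_ratio):
    """Combine a prior probability of pathogenicity with a
    likelihood ratio from observed evidence via Bayes' theorem
    in odds form: posterior odds = prior odds * LR."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Suppose 10% of variants of this type turn out pathogenic (prior = 0.10)
# and the observed evidence is 20x more likely if the variant is pathogenic:
p = posterior_pathogenic(0.10, 20.0)
print(round(p, 3))  # 0.69
```

Even strong evidence (a 20x likelihood ratio) leaves the variant short of certainty when the prior is low, which is exactly the kind of calibration the framework is meant to check.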

Source: https://huntsmancancer.org/newsroom/2018/01/centuries-old-math-equation.php

Sean V Tavtigian, Marc S Greenblatt, Steven M Harrison, Robert L Nussbaum, Snehit A Prabhu, Kenneth M Boucher, Leslie G Biesecker. Modeling the ACMG/AMP variant classification guidelines as a Bayesian classification framework. Genetics in Medicine, 2018; DOI: 10.1038/gim.2017.210

Bayes' Theorem


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 1, Interesting) by Anonymous Coward on Sunday January 21 2018, @03:42PM (2 children)

    by Anonymous Coward on Sunday January 21 2018, @03:42PM (#625666)

    I guess you haven't heard about this one yet:
    https://fliptomato.wordpress.com/2007/03/19/medical-researcher-discovers-integration-gets-75-citations/ [wordpress.com]

    For most medical professionals, mathematics is this weird foreign thing completely unrelated to their work (my guess is this is strongly related to the lack of reproducibility of various pharmaceutical/biological research).

  • (Score: 2) by AthanasiusKircher on Sunday January 21 2018, @06:45PM (1 child)

    by AthanasiusKircher (5291) on Sunday January 21 2018, @06:45PM (#625737) Journal

    For most medical professionals, mathematics is this weird foreign thing completely unrelated to their work (my guess is this is strongly related to the lack of reproducibility of various pharmaceutical/biological research).

    Yes, I hate to jump on the bandwagon of stereotyping here -- there ARE plenty of medical researchers who DO know at least something about basic math and particularly stats -- but it's sometimes quite painful to see the ignorance in some studies. Years ago when I had an infant, I spent several days trying to find and read lots of studies about SIDS in medical journals, because that's often a big concern that parents are warned about with infants and sleep.

    Anyhow, it became pretty obvious to me that some studies were seeing statistical significance or meaning where there was none, often based on confusion of basic probability terms (like odds ratio vs. relative risk, for example). At first I tried to give authors the benefit of the doubt, but then when I saw a study that contained a long footnote where the authors basically "showed the work" in painstaking detail for calculating an odds ratio, I realized I probably wasn't imagining things. That is, something that was essentially a basic arithmetic problem justified having a long footnote showing how it was calculated (not just stating the formula either... the author obviously thought what they were doing was complicated enough to show all the "work" involved in calculating basic arithmetic).
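For anyone unfamiliar with the distinction being confused there: the odds ratio and the relative risk are computed from the same 2x2 table but can differ substantially when the outcome is common. A quick illustration with made-up numbers:

```python
# Hypothetical 2x2 table (invented numbers): exposure vs. disease.
#               disease   no disease
# exposed          30          70
# unexposed        10          90

a, b = 30, 70   # exposed: cases, non-cases
c, d = 10, 90   # unexposed: cases, non-cases

relative_risk = (a / (a + b)) / (c / (c + d))   # ratio of risks
odds_ratio = (a / b) / (c / d)                  # ratio of odds

print(round(relative_risk, 2))  # 3.0
print(round(odds_ratio, 2))     # 3.86
```

Reporting the 3.86 odds ratio as if it meant "3.86 times the risk" overstates the effect; the two only converge when the disease is rare.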

    Again, I have seen plenty of very good studies done by much more competent people, and kudos to them. But over the years I've also seen plenty of these things where it's clear that even incredibly basic mathematical tools are unfamiliar to such researchers. And it's really concerning when the conclusions depend on the interpretation of said tools (as they did in the studies I was reading about SIDS).

    • (Score: 2) by AthanasiusKircher on Sunday January 21 2018, @06:55PM

      by AthanasiusKircher (5291) on Sunday January 21 2018, @06:55PM (#625739) Journal

      Oh, and lest ye think I'm making all this up, there have been numerous studies over the past 40 years or so showing consistent ignorance of Bayesian reasoning among practicing physicians and how that likely leads to all sorts of misinterpretations of test results, screenings, unnecessary follow-up tests, etc. Here's one analysis [slatestarcodex.com] of one of the more recent studies on this phenomenon, but you can find plenty of similar articles over the decades.

      Even worse, as you can read in that link, the doctors who were most likely to get basic probability questions wrong in interpreting frequency for understanding test results were more likely to claim they had good training in stats. In other words, the most ignorant thought they had the best grasp of the material.
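The question those studies pose is essentially this base-rate exercise (the specific numbers here are illustrative, in the spirit of the linked analysis, not taken from any particular study):

```python
# Screening test: what fraction of positives actually have the disease?
prevalence = 0.01            # 1% of the screened population has the disease
sensitivity = 0.90           # P(positive | disease)
false_positive_rate = 0.09   # P(positive | no disease)

p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))
p_disease_given_positive = sensitivity * prevalence / p_positive

print(round(p_disease_given_positive, 3))  # 0.092
```

Under these assumptions fewer than 1 in 10 positive results indicates disease, yet the commonly reported wrong answer in these surveys is on the order of 80-90%, because respondents ignore the low prevalence.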

      (Note that most of the studies on this have interviewed doctors, not medical researchers. And as I said in my previous post, there are plenty of competent researchers out there with a knowledge of stats. But I wanted to point out that ignorance of Bayesian reasoning and its basic implications is pretty much normal in medical circles... even though applications like the one in TFA are crying out for this sort of analysis.)