posted by n1 on Thursday July 10 2014, @01:17AM
from the dr.-speltz dept.

The BBC reports that doctors and dentists fail at basic statistics and are unable to answer patients' questions accurately. This leads to unnecessary tests and operations, with their associated side-effects, and needlessly stresses patients.

In 2006 and 2007 Gigerenzer gave a series of statistics workshops to more than 1,000 practising gynaecologists, and kicked off every session with the same question:

A 50-year-old woman, no symptoms, participates in routine mammography screening. She tests positive, is alarmed, and wants to know from you whether she has breast cancer for certain or what the chances are. Apart from the screening results, you know nothing else about this woman. How many women who test positive actually have breast cancer? What is the best answer?

  • nine in 10
  • eight in 10
  • one in 10
  • one in 100

Gigerenzer then supplied the assembled doctors with some data about Western women of this age to help them answer his question. (His figures were based on US studies from the 1990s, rounded up or down for simplicity - current stats from Britain's National Health Service are slightly different).

  1. The probability that a woman has breast cancer is 1% ("prevalence")
  2. If a woman has breast cancer, the probability that she tests positive is 90% ("sensitivity")
  3. If a woman does not have breast cancer, the probability that she nevertheless tests positive is 9% ("false alarm rate")

In one session, almost half the group of 160 gynaecologists responded that the woman's chance of having cancer was nine in 10. Only 21% said that the figure was one in 10 - which is the correct answer. That's a worse result than if the doctors had been answering at random.
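
One way to sanity-check that answer is to count heads instead of juggling percentages. A minimal sketch (Python; variable names are purely illustrative, using only the three rounded figures above):

  # Count through a hypothetical group of 10,000 screened women.
  population = 10_000
  with_cancer = population * 0.01                       # prevalence: 1% -> 100 women
  true_positives = with_cancer * 0.90                   # sensitivity: 90% -> 90 women
  false_positives = (population - with_cancer) * 0.09   # false alarms: 9% of 9,900 -> 891 women

  ppv = true_positives / (true_positives + false_positives)
  print(f"{true_positives:.0f} of {true_positives + false_positives:.0f} positives are real: about {ppv:.0%}")
  # -> 90 of 981 positives are real: about 9%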

  • (Score: 3, Funny) by Dunbal on Thursday July 10 2014, @01:32AM

    by Dunbal (3515) on Thursday July 10 2014, @01:32AM (#66836)

    "In one session, almost half the group of 160 gynaecologists responded"

    There's a long standing joke in the medical community: "If your son is smart, tell him to become an internist. If he has medium intelligence, tell him to become a surgeon. And if he's not that bright, tell him to be a gynaecologist..."

    • (Score: 2) by c0lo on Thursday July 10 2014, @02:02AM

      by c0lo (156) Subscriber Badge on Thursday July 10 2014, @02:02AM (#66846) Journal

      "If your son is smart, tell him to become an internist. If he has medium intelligence, tell him to become a surgeon. And if he's not that bright, tell him to be a gynaecologist..."

      I'm afraid to extrapolate what would happen should the study have focused on skin cancer/melanoma.

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 3, Informative) by Geezer on Thursday July 10 2014, @12:31PM

      by Geezer (511) on Thursday July 10 2014, @12:31PM (#67025)

      At least the standard GYN protocol for this points to an oncological consult. What would be really scary is if the consulting oncologist blew it.

    • (Score: 2) by meisterister on Thursday July 10 2014, @09:37PM

      by meisterister (949) on Thursday July 10 2014, @09:37PM (#67309) Journal

      >tell him to become a surgeon

      I see that Surgeon Simulator 2013 accurately portrays the life of a child whose parents overestimated him...

      --
      (May or may not have been) Posted from my K6-2, Athlon XP, or Pentium I/II/III.
  • (Score: -1, Offtopic) by Anonymous Coward on Thursday July 10 2014, @01:41AM

    by Anonymous Coward on Thursday July 10 2014, @01:41AM (#66839)

    And I'm vegetarian, and my diet is high in cheese.

    Doctor says CT scan. Hell no I don't want a CT scan.

    The correct answer is eat Monterey Jack cheese instead [wikipedia.org] and the headaches go away.

    Goodbye doctor, you're worthless, and if I never see you again it will be too soon.

    • (Score: 2) by frojack on Thursday July 10 2014, @03:03AM

      by frojack (1554) on Thursday July 10 2014, @03:03AM (#66861) Journal

      Citation Needed.

      Anyone getting medical advice from Wikipedia is skating on thin ice.

      --
      No, you are mistaken. I've always had this sig.
      • (Score: 0) by Anonymous Coward on Thursday July 10 2014, @03:15AM

        by Anonymous Coward on Thursday July 10 2014, @03:15AM (#66865)

        Wikipedia......mentioned! Must.....be....knee-jerk......asshole. Must.........say..........citation-needed!!

        It's Wikipedia [wikipedia.org] you jerk, there is a citation [medicinenet.com].

        Asshole.

        • (Score: 2) by frojack on Thursday July 10 2014, @04:01AM

          by frojack (1554) on Thursday July 10 2014, @04:01AM (#66880) Journal

          Read the link he posted.

          The section he referenced ends with .... that's right, Citation Needed.

          --
          No, you are mistaken. I've always had this sig.
      • (Score: 2) by sjames on Thursday July 10 2014, @03:26AM

        by sjames (2882) on Thursday July 10 2014, @03:26AM (#66870) Journal

        In the Wikipedia article, follow citation 4. Look at the top of page two there. Whoever inserted the citation needed must not have RTFA.

        Wikipedia is a decent place to get an overview IF you read the provided references.

        • (Score: 2) by frojack on Thursday July 10 2014, @04:11AM

          by frojack (1554) on Thursday July 10 2014, @04:11AM (#66883) Journal

          Citation four references tyramine.
          There is NO citation for the claim that pepper jack is Frequently Recommended.

          Even citation 4 is suspect: The writer has a master's degree in public health and is a registered dietitian.
          Translation: not qualified.

          --
          No, you are mistaken. I've always had this sig.
          • (Score: 3, Interesting) by sjames on Thursday July 10 2014, @09:30AM

            by sjames (2882) on Thursday July 10 2014, @09:30AM (#66984) Journal

            Actually, an RD is qualified for that. A significant part of their job is to understand food/medication/disease interaction.

            Unlike nutritionists who, in many states, need no credentials at all, RD is a legally protected title like MD.

      • (Score: 1, Informative) by Anonymous Coward on Thursday July 10 2014, @11:55AM

        by Anonymous Coward on Thursday July 10 2014, @11:55AM (#67014)

        Some actual science:

        Interest in tyramine escalated during the early 1960s when it was implicated in a syndrome of intense throbbing headache, sometimes associated with hypertension, that occurred in some patients receiving monoamine oxidase inhibitors after they had eaten foods rich in tyramine (27). Hanington & Harper (28) observed that the foodstuffs implicated in these untoward reactions were similar to those foods often reported by migraine sufferers as precipitants of their attacks, a condition that has come to be known as "dietary migraine." This subpopulation of migraineurs is sensitive to tyramine. A migraine attack was provoked within hours after the ingestion of 125 mg of tyramine in over 80% of instances, whereas neither the nondietary-migraine patients nor the headache-free control subjects reported headaches to any significant extent (29, 30). These results were confirmed in part by Bonnet & Lepreux (31), who identified 63 tyramine-sensitive migraineurs among a series of 213 cases. Ghose and associates (32), using intravenous tyramine, precipitated headache attacks in 46% of 31 migraineurs and in none of 27 control subjects.

        from 1981, Annual Review of Medicine [annualreviews.org].

        I have been unable to find any quantitative data about tyramine in cheeses, beyond the claim that it is higher in fermented and aged cheeses. I see plenty of qualitative claims that there is not enough tyramine in cheese to trigger headaches. Non-fermented cheeses include ricotta, cottage, cream and Neufchatel. Jack cheese is fermented but not aged. My conclusion is that (unsurprisingly) Wikipedia and pop-nutrition have taken a Real Thing and exploded it out of all proportion.

        Probably worth noting that the clinical literature surrounding tyramine is mostly for people on MAOI's, in whom 6+mg may trigger a crisis (substantially less than the 125mg required to trigger headache in sensitive individuals). Some actual tyramine contents: chicken liver aged 9 days (63.84 mg/30 g), sauerkraut (7.75 mg/250 g). Source [nih.gov]

        There's not really an extensive literature surrounding this, and if you expect your doc to know it, then you may have watched too much House.

    • (Score: 2) by The Archon V2.0 on Thursday July 10 2014, @04:44PM

      by The Archon V2.0 (3887) on Thursday July 10 2014, @04:44PM (#67158)

      > Hell no I don't want a CT scan.

      Really? I know it's ionizing radiation but that comes across like a kid not wanting a shot.

      Hell, I've had about a dozen of the things and am none the worse for wear. They haven't seen a brain on any of them, though.

  • (Score: 3, Informative) by AnythingGoes on Thursday July 10 2014, @01:52AM

    by AnythingGoes (3345) on Thursday July 10 2014, @01:52AM (#66842)
    And mathematicians don't become doctors...
    If you are bad at math, chemistry and physics are really not your cup of tea, but biology is a good science that still leads to being a doctor.
    They should just make a computer program that spits out the proper percentages for the (math-challenged) doctor :)
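
    Such a program would be little more than Bayes' theorem wrapped in a function. A rough sketch of what that might look like (Python; the function name is made up, this is not any existing medical tool):

    def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
        """Chance of actually having the disease, given a positive test result."""
        true_pos = prevalence * sensitivity
        false_pos = (1 - prevalence) * false_positive_rate
        return true_pos / (true_pos + false_pos)

    # The mammography figures from the summary:
    print(positive_predictive_value(0.01, 0.90, 0.09))   # ~0.092, i.e. about 1 in 10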
    • (Score: 4, Insightful) by naff89 on Thursday July 10 2014, @03:13AM

      by naff89 (198) on Thursday July 10 2014, @03:13AM (#66864)

      I guess the difference is that mathematicians don't regularly speak with authority on medical issues, whereas doctors often speak confidently in matters of statistics.

      • (Score: 2) by kaszz on Thursday July 10 2014, @03:28AM

        by kaszz (4211) on Thursday July 10 2014, @03:28AM (#66871) Journal

        Unfortunately very likely..

    • (Score: 0) by Anonymous Coward on Thursday July 10 2014, @04:56PM

      by Anonymous Coward on Thursday July 10 2014, @04:56PM (#67169)

      Yes, statistics can be hard. But this is not statistics. This is an algebra question I could have answered when I was 8 years old.

  • (Score: 5, Interesting) by physicsmajor on Thursday July 10 2014, @02:12AM

    by physicsmajor (1471) on Thursday July 10 2014, @02:12AM (#66849)

    The question stem is quite specific: it requests the positive predictive value (PPV). It is asking: in a population of women whose cancer status we know nothing about, if one tests positive, what is the likelihood that she actually has cancer?

    Then, in the provided information, they attempt to obfuscate the actual info as much as possible. You have to combine all three pieces of information in order to actually come out to the correct result, realizing that ~10% of the population will be flagged and, while most affected women will be included in this cohort, only 1% of the population actually has the disease. Ergo, about 1 in 10 is the correct answer.

    Anyone actually practicing medicine can google "ppv mammogram" and get several studies to inform them of this value directly, or research it via other methods. The question is poor and baiting; adding to the issue, they polled a population that is overworked and breezing through surveys in order to get back to their real work - taking care of patients.

    A much, much better question would have provided an array of incorrect statistics, then several formulae (including both NPV and PPV) using those values. That would be a good way to evaluate doctors' knowledge of assimilating and applying statistics, instead of this biased, baiting question created to grab headlines and fan the flames of medical professional mistrust.

    • (Score: 4, Insightful) by sjames on Thursday July 10 2014, @03:13AM

      by sjames (2882) on Thursday July 10 2014, @03:13AM (#66863) Journal

      The question and the background provided are very much a hot topic in medicine right now. On one side is the conventional wisdom of regular mammograms; on the other, the suggestion that they may be doing more harm than good due to high false positives and overly aggressive treatment, and that they should only be given when there is some other reason to believe there is cancer.

      But in general, the information provided is what doctors are typically given for the various tests. For that reason, it accurately predicts how doctors will do with an actual patient in practice.

      This suggests that the information provided about the tests should probably be more helpful than it is.

      • (Score: 2) by frojack on Thursday July 10 2014, @04:19AM

        by frojack (1554) on Thursday July 10 2014, @04:19AM (#66885) Journal

        And with a primary test that has less chance than a dice throw of being correct, you can understand why there is confusion.

        --
        No, you are mistaken. I've always had this sig.
        • (Score: 2) by maxwell demon on Thursday July 10 2014, @05:49AM

          by maxwell demon (1608) on Thursday July 10 2014, @05:49AM (#66912) Journal

          If you throw dice and get "positive", you've got a 1% chance of having cancer.
          If you do the test and get "positive", you've got a 10% chance of having cancer.

          So the test clearly is much better than throwing dice.

          --
          The Tao of math: The numbers you can count are not the real numbers.
          • (Score: 2) by frojack on Thursday July 10 2014, @06:13AM

            by frojack (1554) on Thursday July 10 2014, @06:13AM (#66923) Journal

            A dice has 6 sides.
            If one side says cancer, on average the dice has a one in 6 chance of correctly predicting cancer compared to the test, which only has a one in 10 chance of correctly predicting cancer.

            --
            No, you are mistaken. I've always had this sig.
            • (Score: 2) by maxwell demon on Thursday July 10 2014, @06:22AM

              by maxwell demon (1608) on Thursday July 10 2014, @06:22AM (#66927) Journal

              That's the probability of getting "positive" (assuming that you assign "positive" to exactly one side), not the probability of actually having breast cancer if getting a positive result.

              Since dice are uncorrelated with breast cancer, the probability of having breast cancer after your dice giving "positive" is exactly the same as the probability of having breast cancer before the dice throw, that is, 1%. While the probability of having breast cancer after the mammography gives "positive" is about 10%.

              Are you a gynaecologist, by chance?

              --
              The Tao of math: The numbers you can count are not the real numbers.
              • (Score: 2) by frojack on Thursday July 10 2014, @07:22AM

                by frojack (1554) on Thursday July 10 2014, @07:22AM (#66946) Journal

                Dude, it was a joke....
                Don't over analyze Bistro Math.

                --
                No, you are mistaken. I've always had this sig.
            • (Score: 3, Funny) by JeanCroix on Thursday July 10 2014, @05:13PM

              by JeanCroix (573) on Thursday July 10 2014, @05:13PM (#67176)

              A dice has 6 sides.

              So, you're not into RPGs, then...

        • (Score: 3, Interesting) by sjames on Thursday July 10 2014, @09:21AM

          by sjames (2882) on Thursday July 10 2014, @09:21AM (#66983) Journal

          Absolutely. Alas, there are a lot of tests like that, and a lot of doctors don't seem to understand the difference between a screening test and a definitive one.

          I read about a nurse who was fired over a positive SCREENING test for opiates. She hired a lawyer and got a definitive test that showed her level to be consistent with having eaten a poppy-seed bagel that morning.

          One might think a hospital, of all employers, would know better.

          • (Score: 0) by Anonymous Coward on Friday July 11 2014, @02:46AM

            by Anonymous Coward on Friday July 11 2014, @02:46AM (#67402)

            And that is why you avoid any food containing poppy seeds, because of all the drug testing that is done in the USA to get/keep a job! :P (p-_-)p

            It's best to avoid such foods if you are working for others where there is a drug testing policy in place.

            If you are self-employed (and own your own business), you don't have to deal with that headache! :)

    • (Score: 2, Interesting) by johaquila on Thursday July 10 2014, @10:35AM

      by johaquila (867) on Thursday July 10 2014, @10:35AM (#66997)

      I can see no obfuscation whatsoever in the information. It's just the raw data that is often the only thing people have. And apparently even the numbers used are so realistic that knowledge of the expected result should have helped them rule out the high estimates. I can't see how they could have been much clearer without also including the number they were asking for in some way or another.

      This kind of calculation is absolutely fundamental for the medical professions. It is also so basic that, at least in Germany, it is taught in school - usually with medical applications such as this one featuring prominently as examples.

    • (Score: 0) by Anonymous Coward on Thursday July 10 2014, @12:27PM

      by Anonymous Coward on Thursday July 10 2014, @12:27PM (#67023)

      The question stem is quite specific: it requests the positive predictive value (PPV). It is asking: in a population of women whose cancer status we know nothing about, if one tests positive, what is the likelihood that she actually has cancer?

      This seems like the most important thing to know about a test, and the information most difficult to find. Here's the thing: you may imagine your docs spending their spare time poring through the clinical literature, carefully considering the evidence in support of every new test, but that ain't it. They get marketing literature, and marketing literature disguised as scientific studies, from device and drug manufacturers. They aren't going to google the "positive predictive value" of an assay, because that's a term for statistics wonks. It's not in their lexicon. They don't care any more about the false positive rate than the TSA does. If you've ever watched a doc try to figure out how to reconcile one positive test with a negative alternative, you'll know exactly what I mean.

      The reason they can't do this is (probably) not because they're stupid, or overworked, or lazy. It's because no one ever taught them how to think in probabilities. Medicine is structured on black-and-white thresholds, including the absolute authority of the physician. You either have cancer or you don't. Your PSA is either positive or negative - it's never in the upper 80% of normal and lower 15% of cancerous.

    • (Score: 2) by FatPhil on Thursday July 10 2014, @10:47PM

      by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Thursday July 10 2014, @10:47PM (#67333) Homepage
      > You have to combine all three pieces of information in order to actually come out to the correct result

      You think it's obfuscation that they have to combine the 3 bits of information to get an answer?!?!?

      So you think it would have been less obfuscated and therefore better if they'd just provided one bit of information - "the answer's 10%, give or take" - instead?
      Or do you think it would have been less obfuscated if they'd have included information that you were supposed to ignore, red herrings?

      Me, I'm not calling that question unfair until Dr Ben Goldacre calls it such.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
  • (Score: 2, Funny) by Horse With Stripes on Thursday July 10 2014, @02:27AM

    by Horse With Stripes (577) on Thursday July 10 2014, @02:27AM (#66852)

    Dammit Jim, I'm a doctor, not a statistician.

  • (Score: 2) by kaszz on Thursday July 10 2014, @03:38AM

    by kaszz (4211) on Thursday July 10 2014, @03:38AM (#66875) Journal

    I think this issue boils down to doctors being poor analysts and not much used to complex thinking (which is hard anyway). Many diagnose on a presumption of the statistical distribution learned in med school. This will miss many things that don't match that presumption. Got a headache? The most likely treatment is a headache pill, not a brain scan. Thus outliers tend to wreck this kind of presumption model.

    Being a prestige job may also make it hard to listen to "lesser people". Being a good doctor doesn't necessarily translate to being a good analyst, mathematician, psychologist, etc. This also applies to other educated professions. Stress doesn't help either: if health personnel lack time for breaks and must rush all the time, the probability of mistakes skyrockets.

    Oh, and ask how much nutritional schooling your doctors have. It might be a revelation..
    (eating shit makes you feel just like that)

    • (Score: 3, Informative) by q.kontinuum on Thursday July 10 2014, @04:16AM

      by q.kontinuum (532) on Thursday July 10 2014, @04:16AM (#66884) Journal

      Being a prestige job may also make it hard to listen to "lesser people". Being a good doctor doesn't necessarily translate to being a good analyst, mathematician, psychologist, etc. This also applies to other educated professions

      Being a good doctor does mean being able to inform patients about their health risks, which requires either understanding the statistics or knowing whom to ask. If the doctor can't do even one of the two, he's not a good doctor. If he makes excuses instead of developing one of those abilities, he's a terrible doctor and a danger to his patients.

      --
      Registered IRC nick on chat.soylentnews.org: qkontinuum
      • (Score: 2) by kaszz on Thursday July 10 2014, @04:58AM

        by kaszz (4211) on Thursday July 10 2014, @04:58AM (#66896) Journal

        So what to do when many doctors just "don't get it" when it comes to statistics?

        • (Score: 2) by q.kontinuum on Thursday July 10 2014, @05:23AM

          by q.kontinuum (532) on Thursday July 10 2014, @05:23AM (#66904) Journal

          As I wrote, there are two options. If they don't get it, they can accept advice from others who do. You don't have to be a good mathematician to apply a simple formula, even if you don't fully understand it. If they don't do *that* either, they are terrible doctors, and I would expect they could be sued for damages caused by their misguided diagnosis (e.g. if a breast was removed for such reasons). If I'm able to detect their incompetence and unwillingness to do anything about it, I will avoid them. Besides, there is not much else I could do. What I wouldn't do in any case is make excuses for them and tell anyone what excellent doctors they were.

          --
          Registered IRC nick on chat.soylentnews.org: qkontinuum
          • (Score: 2) by maxwell demon on Thursday July 10 2014, @05:59AM

            by maxwell demon (1608) on Thursday July 10 2014, @05:59AM (#66916) Journal

            You don't have to be a good mathematician to apply a simple formula, even if you don't fully understand it.

            Indeed, all you have to understand to apply a formula is (a) when the formula is to be used, (b) which value to insert where, and (c) what the resulting value means. Oh, and of course (d) how to do the actual calculations, but given that the formula only contains basic operations, being able to operate a pocket calculator is sufficient for that.

            --
            The Tao of math: The numbers you can count are not the real numbers.
            • (Score: 3, Funny) by q.kontinuum on Thursday July 10 2014, @06:08AM

              by q.kontinuum (532) on Thursday July 10 2014, @06:08AM (#66920) Journal

              Just thinking: if doctors' problem is, as the thread starter suggested, listening to "lesser people", it should be possible to slap a nice shiny-looking UI onto it, probably throw in some (not actually used) oracle components for good measure, and sell it to them as an expert system for a hefty sum :-) The more they pay, the less it hurts their pride, because if this software is soooooo expensive the matter just has to be very complex indeed, and that means there is no shame in admitting they didn't understand it themselves ;-)

              --
              Registered IRC nick on chat.soylentnews.org: qkontinuum
          • (Score: 2) by FatPhil on Thursday July 10 2014, @11:06PM

            by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Thursday July 10 2014, @11:06PM (#67338) Homepage
            Agreed, but you don't even need to "apply a simple formula" - you can just eyeball it to 1 significant digit. If you can't recognise immediately from the given facts that about 1/10th of the population will test positive, and only 1/10th of those actually have the cancer, you are a retard. At least by the standards that should be applied to those who are working in a supposedly scientifically-grounded profession. This is high-school maths, nothing more.
            --
            Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
            • (Score: 2) by q.kontinuum on Friday July 11 2014, @05:16AM

              by q.kontinuum (532) on Friday July 11 2014, @05:16AM (#67452) Journal

               Intuitively, I'd agree. But during the last year I spent a considerable amount of time in job interviews, looking for a good candidate (topic: build up and maintain a build and test system for test automation across multiple platforms, and develop a framework to test the different test components. Focus is library/API testing, not UI. It probably sounds more trivial than it is.). Minimum requirement: BSc in Computer Science (or comparable). Most candidates actually had master's degrees. That year really was an education for me as well.

               One of the entry questions was to implement a short function to check whether a number is prime. The programming language could be chosen by the candidate from a list containing Python, C, C++ and Java. Around 50% of all candidates didn't even know what a prime number is. Of those, none was able to implement the requested function even after being told the definition. Of the remaining 50%, approximately 60-80% were not able to implement the function during the interview, most of them failing so miserably that it was obvious they didn't even have a clue about a potential algorithm, which leaves around 10-20% passing this simple test.
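
               For reference, a minimal sketch of that kind of function in Python (plain trial division up to the square root, nothing clever - pseudo-code in the same spirit would have done just as well):

               def is_prime(n):
                   """Return True if n is prime, using trial division up to sqrt(n)."""
                   if n < 2:
                       return False
                   i = 2
                   while i * i <= n:
                       if n % i == 0:
                           return False
                       i += 1
                   return True

               print([n for n in range(20) if is_prime(n)])   # [2, 3, 5, 7, 11, 13, 17, 19]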

              Just to clarify: I always told the candidate that I was not looking for a syntactically flawless implementation. I even would have settled for pseudo-code! And no, we were neither looking on the very cheap side, nor locally restricted, and we did have candidates from literally all over the world (well, not every country, but iirc at least every continent.) And apparently I'm not alone [codinghorror.com] with this experience.

              I definitely learned to lower my expectations for humans in general during that year. And for that reason, I wouldn't hold it against any non-mathematician to not see the obvious here. I now just expect people with responsibility to recognize their own shortcomings and to accept support, when required (which probably still is too optimistic...)

              --
              Registered IRC nick on chat.soylentnews.org: qkontinuum
              • (Score: 2) by FatPhil on Friday July 11 2014, @07:37AM

                by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Friday July 11 2014, @07:37AM (#67484) Homepage
                At my last job I was the on-site instant coding-test evaluator who could tell within 20s whether the technical and business managers should waste time in their interviews, or just get the guy out of the door as quickly as possible. (The tests were scanned and emailed to head office in the UK for an accurate mark, but my evaluation was deemed just as valuable from a practical perspective.) So indeed, I have seen a wide range of skill levels in such a context too. Sometimes the pre-interview bar was set a bit higher (for example when we were recruiting experienced linux kernel programmers - almost all of those did a good job, but not all), but when we were interviewing for web-app developers, we got some scary answers. Ones where they actually try to answer were far more amusing than just blanks.

                The second best candidate I know of scored 19/20 and emerged from the test room after 25 minutes (it was a 1 hour test in theory).

                Modesty forbids saying what score I got from the 20 minutes I spent taking it... ;-)
                --
                Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 0) by Anonymous Coward on Thursday July 10 2014, @01:44PM

      by Anonymous Coward on Thursday July 10 2014, @01:44PM (#67048)

      Especially in the US. This study [rsna.org] claims that the US actually has no standard or evaluation of physician performance. When they retrospectively evaluate performance, US docs fail to meet UK guidelines. Worse - that 10% positive predictive value: among US doctors, it's only 5%. US docs apparently don't know what they're looking for and don't know what it means if they find it. I imagine this results in a lot of unnecessary treatment and a lot of unnecessary expense.

  • (Score: 0) by Anonymous Coward on Thursday July 10 2014, @04:53AM

    by Anonymous Coward on Thursday July 10 2014, @04:53AM (#66894)

    so umm, no one here has actually said how to arrive at the answer. could you please?

    • (Score: 5, Informative) by maxwell demon on Thursday July 10 2014, @05:26AM

      by maxwell demon (1608) on Thursday July 10 2014, @05:26AM (#66905) Journal

      1% of all women have breast cancer. That is, of 10000 women, 100 will have breast cancer.
      90% of those who have breast cancer will test positive. That is, of the 100 women with breast cancer, 90 will test positive.
      9% of those who don't have breast cancer will test positive. That is, of the 9900 women without breast cancer, 891 will test positive.

      So in total you'll have 991 women with positive test, of which only 100 actually have breast cancer. 100/991 is approximately 1/10.

      --
      The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 3, Informative) by Anonymous Coward on Thursday July 10 2014, @03:09PM

        by Anonymous Coward on Thursday July 10 2014, @03:09PM (#67092)

        Actually the math is worse than that:

        P(A) = 0.01 (1 patient in 100 has breast cancer)
        P(NOT A) = 0.99 (99 patients out of a 100 do not have breast cancer)
        P(B|A) = 0.9 (Probability of a positive test, given breast cancer is .9)
        P(B|NOT A) = 0.09 (Probability of a false positive, given no breast cancer is 0.09)

        So what is the probability of A (you have breast cancer), given B (a positive test result)?

        Bayes theorem helps us with this:

        P(A|B) = P(A)*P(B|A) / (P(A)*P(B|A) + P(B|NOT A)*P(NOT A))

        Which results in:

        P(A|B) = (0.01*0.9)/((0.01*0.9)+(0.09*0.99))
        P(A|B) = 0.091743119

        That means if a person gets a positive result, there is only a 9.17% chance that the test is correct.

        This number gets worse the more rare the disease is. In the case of 1 in a 1000 people the test is only accurate 0.99% of the time!
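
        To see how fast that deteriorates, a small sketch (Python, keeping the 90% sensitivity and 9% false-positive rate from above and only varying the prevalence; purely illustrative):

        def ppv(prevalence, sensitivity=0.90, false_positive_rate=0.09):
            true_pos = prevalence * sensitivity
            false_pos = (1 - prevalence) * false_positive_rate
            return true_pos / (true_pos + false_pos)

        for prev in (0.01, 0.001, 0.0001):
            print(f"prevalence 1 in {1/prev:.0f}: a positive test is right {ppv(prev):.2%} of the time")
        # prevalence 1 in 100: a positive test is right 9.17% of the time
        # prevalence 1 in 1000: a positive test is right 0.99% of the time
        # prevalence 1 in 10000: a positive test is right 0.10% of the time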

        • (Score: 2) by opinionated_science on Thursday July 10 2014, @04:11PM

          by opinionated_science (4031) on Thursday July 10 2014, @04:11PM (#67137)

          thank you. A fellow Bayesian. Why the AC?

        • (Score: 2) by FatPhil on Thursday July 10 2014, @11:16PM

          by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Thursday July 10 2014, @11:16PM (#67342) Homepage
          Anyone doing that calculation with more than one significant digit anywhere is wasting time, effort, and keystrokes.

          On your final point - forget diseases, they're way too common - try scanning for terrorists!
          --
          Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
      • (Score: 0) by Anonymous Coward on Thursday July 10 2014, @04:02PM

        by Anonymous Coward on Thursday July 10 2014, @04:02PM (#67127)

        You were doing well until that last line.

        You have 90 true positives and 891 false positives, so the odds of a positive being true are 90/(90+891) not 100/(100+891), but it is still about 1 in 10.

        Also, 0.1% of people that were tested will be given a false negative, 1 in 1000.

        Detecting rare things is hard, this is why the TSA is worthless.

        • (Score: 2) by AnythingGoes on Thursday July 10 2014, @08:50PM

          by AnythingGoes (3345) on Thursday July 10 2014, @08:50PM (#67274)

          Detecting rare things is hard, this is why the TSA is worthless.

          Try selling that to the general public - who think that there MUST be terrorists everywhere, and that my pet rock keeps away tigers, and we NEED all this "security" to keep us all free.

        • (Score: 0) by Anonymous Coward on Thursday July 10 2014, @08:51PM

          by Anonymous Coward on Thursday July 10 2014, @08:51PM (#67275)

          Detecting rare things is hard, this is why the TSA is worthless.

          It's not worthless. It's an employment program. What would happen if a few million people were out of work tomorrow?

          Similarly, why do you think Libya had a revolution? They didn't like Gaddafi? If you look closely, 6 months before the revolution, he fired 500,000 public sector employees to reduce expenditures. They were told to "make private sector jobs and their own opportunities".

    • (Score: 2) by drussell on Thursday July 10 2014, @05:43AM

      by drussell (2678) on Thursday July 10 2014, @05:43AM (#66909) Journal

      https://soylentnews.org/comments.pl?sid=2816&cid=66849 [soylentnews.org] seems to make sense to me...

    • (Score: 3, Insightful) by elf on Thursday July 10 2014, @05:59AM

      by elf (64) on Thursday July 10 2014, @05:59AM (#66917)

      You need to use what is called Bayes' Theorem; the three facts give you all the information you need:

      PR = Positive Result
      NPR = Not Positive Result
      BC = Has Breast Cancer
      NBC = Not has Breast cancer
      P(A|B) = Probability of A knowing B

      The formula goes

      P(BC|PR) = P(PR|BC)*P(BC) / (P(PR|BC)*P(BC)+P(PR|NBC)*P(NBC)) = fact2*fact1/(fact2*fact1)+(fact3*1-fact1) = 0.9 * 0.01 / ((0.9*0.01)+(0.09*0.99) = 0.009 / 0.009 + 0.0099 = 0.0917 ~0.1

      • (Score: 2) by elf on Thursday July 10 2014, @06:17AM

        by elf (64) on Thursday July 10 2014, @06:17AM (#66925)

        That 0.0099 is wrong! But the answer and the rest are right... bit of an early-morning typo there.

        • (Score: 2) by maxwell demon on Thursday July 10 2014, @06:30AM

          by maxwell demon (1608) on Thursday July 10 2014, @06:30AM (#66929) Journal

          You're also missing some parentheses. 0.009 / 0.009 + 0.0099 would be 1.0099 ;-)

          --
          The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 0) by Anonymous Coward on Thursday July 10 2014, @11:52AM

        by Anonymous Coward on Thursday July 10 2014, @11:52AM (#67013)

        So instead of simple counting and reasoning you can put numbers in a somewhat backward and hard-to-memorize formula as well :)

      • (Score: 0) by Anonymous Coward on Thursday July 10 2014, @12:56PM

        by Anonymous Coward on Thursday July 10 2014, @12:56PM (#67034)

        This answer exemplifies why physicians get the answer wrong. No doubt, elf thinks it exactly explains to the above AC how to get 1/10 from the provided data, but it completely fails to inform anyone not already a Bayesian statistician what is going on. There aren't enough words and the words don't clarify.

        It helps much more to explain that positive test results come both from people who actually have cancer and from "false positives" among people who don't. In this case, they've said the false positive rate is 9%. Since only 1% of people actually have cancer, even ignoring the false negative rate you're going to get roughly 10% positive tests, 9 of which are false for every 1 that is accurate: 1 in 10.

        • (Score: 2) by elf on Thursday July 10 2014, @01:53PM

          by elf (64) on Thursday July 10 2014, @01:53PM (#67052)

          Firstly, if you want to understand why the answer is what it is, you could give the wordy answer (which someone did before me, and which was well written) or the mathematical solution, which I did. There are two responses; pick the one you understand best. The first one for you, I think.

          Secondly, Bayes' Theorem is one of the most basic theorems you learn in statistics; you don't need to be a great statistician. If you are interested in learning more, take a look at the link below; it explains things nicely:

          http://betterexplained.com/articles/an-intuitive-and-short-explanation-of-bayes-theorem/ [betterexplained.com]

          And lastly, to become a doctor you need to have gone to university for quite some time and to have learned some maths along the way (not a major part of it). Bayes' Theorem is something they should all have heard of, but most likely have not remembered.

          • (Score: 1, Insightful) by Anonymous Coward on Thursday July 10 2014, @03:10PM

            by Anonymous Coward on Thursday July 10 2014, @03:10PM (#67094)

            And lastly, to become a doctor you need to have gone to university for quite some time and to have learned some maths along the way (not a major part of it). Bayes' Theorem is something they should all have heard of, but most likely have not remembered

            In the US, medical schools generally require 1 year of calculus; no statistics. Med school curricula (let's use Harvard [harvard.edu] as an example) tend to be an additional 1-2 years of lecture which contain no explicit math. There may be a little in classes like Clinical Epidemiology and Population Health, but these tend to be taught non-quantitatively. After all, if they were quantitative students, they'd have gone into engineering or science, not medicine.

    • (Score: 1) by irfan on Thursday July 10 2014, @10:27PM

      by irfan (84) on Thursday July 10 2014, @10:27PM (#67327)
      There is also a nice TED video here [ted.com] which explains the concept, and some of the confusion behind it.
  • (Score: 2) by The Archon V2.0 on Thursday July 10 2014, @04:18PM

    by The Archon V2.0 (3887) on Thursday July 10 2014, @04:18PM (#67145)

    Wow, I finally have something over those fancy-title educated types.

  • (Score: 2) by umafuckitt on Thursday July 10 2014, @06:17PM

    by umafuckitt (20) on Thursday July 10 2014, @06:17PM (#67216)

    To be fair to the doctors, these sorts of probabilities are very confusing and humans have a shitty intuition for them.