
posted by martyb on Friday April 19 2019, @06:34PM   Printer-friendly
from the significant-change dept.

In science, the success of an experiment is often determined by a measure called "statistical significance." A result is considered to be "significant" if the difference observed in the experiment between groups (of people, plants, animals and so on) would be very unlikely if no difference actually exists. The common cutoff for "very unlikely" is that you'd see a difference as big or bigger only 5 percent of the time if it wasn't really there — a cutoff that might seem, at first blush, very strict.
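The cutoff described above can be made concrete with a small simulation. The sketch below is a hypothetical permutation test (not from the article): it asks how often a random regrouping of the pooled data produces a between-group difference at least as large as the one observed. The data values are invented for illustration.

```python
import random

# Invented measurements for two hypothetical groups.
random.seed(0)
control = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.1, 4.7]
treated = [5.6, 5.4, 5.9, 5.3, 5.7, 5.5, 5.8, 5.2]

observed = sum(treated) / len(treated) - sum(control) / len(control)

# Permutation test: if the grouping were meaningless, how often would a
# random split of the pooled data show a difference this big or bigger?
pooled = control + treated
n = len(control)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[n:]) / n - sum(pooled[:n]) / n
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.3f}, p = {p_value:.4f}")
```

Under the convention the article describes, a result with p at or below 0.05 gets labeled "statistically significant"; the manifesto's point is that this single threshold has been carrying far more weight than it can bear.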

It sounds esoteric, but statistical significance has been used to draw a bright line between experimental success and failure. Achieving an experimental result with statistical significance often determines if a scientist's paper gets published or if further research gets funded. That makes the measure far too important in deciding research priorities, statisticians say, and so it's time to throw it in the trash.

More than 800 statisticians and scientists are calling for an end to judging studies by statistical significance in a March 20 comment published in Nature. An accompanying March 20 special issue of the American Statistician makes the manifesto crystal clear in its introduction: "'statistically significant' — don't say it and don't use it."

There is good reason to want to scrap statistical significance. But with so much research now built around the concept, it's unclear how — or with what other measures — the scientific community could replace it. The American Statistician offers a full 43 articles exploring what scientific life might look like without this measure in the mix.

Statistical Significance

Is it time for "P is less than or equal to 0.05" to be abandoned or changed?


Original Submission

  • (Score: 0) by Anonymous Coward on Friday April 19 2019, @08:02PM (6 children)

    by Anonymous Coward on Friday April 19 2019, @08:02PM (#832266)

    A statistical study, by definition, only provides evidence for correlations, but everyone talking about these studies assumes that causation has been proven. Read an article about any published study, and you will always find causative prescriptions attached: do this, eat that, sleep more, vote Democrat, etc. These prescriptions are never justified by the correlational evidence found in these studies. Causation research is rare, and you always have to read the actual paper to find out whether causation was investigated.

    Really, the best thing to do at this point is to stop reporting any correlation studies altogether. Yes, they are still useful to guide further causation studies, but non-scientists just become confused and take statistics as dogma. Ban it and the world will be a better place.
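The distinction the commenter is drawing can be shown with a toy simulation (my own illustration, not from the thread): a hidden confounder C drives both X and Y, so X and Y correlate strongly even though neither causes the other.

```python
import random

random.seed(1)
n = 5000
# Hidden confounder C influences both observed variables.
c = [random.gauss(0, 1) for _ in range(n)]
x = [ci + random.gauss(0, 0.5) for ci in c]  # X depends only on C
y = [ci + random.gauss(0, 0.5) for ci in c]  # Y depends only on C

def corr(a, b):
    """Pearson correlation coefficient, computed from scratch."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

r = corr(x, y)
print(f"corr(X, Y) = {r:.2f}")  # strong correlation, zero causation
```

A correlational study observing only X and Y would find a strong association; only a design that intervenes on X (or measures C) could rule out the confounding.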

  • (Score: 0) by Anonymous Coward on Friday April 19 2019, @08:14PM (4 children)

    by Anonymous Coward on Friday April 19 2019, @08:14PM (#832271)

    What is an example of a "causation study"?

    • (Score: 0) by Anonymous Coward on Friday April 19 2019, @08:47PM

      by Anonymous Coward on Friday April 19 2019, @08:47PM (#832285)

      > What is an example of a "causation study"?

      Well, Google thinks that it's a statistical study like this (first hit using your sentence as the search string):

      https://www.fmcsa.dot.gov/safety/research-and-analysis/large-truck-crash-causation-study-ltccs-analysis-series-using-ltccs [dot.gov]

      The Large Truck Crash Causation Study (LTCCS) was undertaken jointly by the Federal Motor Carrier Safety Administration (FMCSA) and the National Highway Traffic Safety Administration (NHTSA). The LTCCS is based on a nationally representative sample of nearly 1,000 injury and fatal crashes involving large trucks that occurred between April 2001 and December 2003. The data collected provide a detailed description of the physical events of each crash, along with an unprecedented amount of information about all the vehicles and drivers, weather and roadway conditions, and trucking companies involved in the crashes.

      But my interpretation of your sentence requires an actual experiment using the classic version of scientific method -- hypothesis and so on.

    • (Score: 5, Insightful) by Thexalon on Friday April 19 2019, @09:13PM (2 children)

      by Thexalon (636) on Friday April 19 2019, @09:13PM (#832296)

      A causation study would be one that demonstrates the process by which A leads to B by doing A to one group while ensuring A doesn't happen to another group and seeing if B happens.

      They're harder to do in a lot of sciences because:
      A. We don't have a few copies of Earth sitting around to use for experiments.
      B. We don't have an easy way of moving stars, planets, and other really large objects around.
      C. Ethics boards are kinda keen on human test subjects surviving the experiment.
      D. It's really hard to isolate some things, because people are complicated.
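      The randomized design described above can be sketched as a toy simulation (mine, with invented base rates): apply A to a randomly chosen half of the population, withhold it from the other half, and compare how often outcome B occurs in each group.

```python
import random

random.seed(2)
population = list(range(1000))
random.shuffle(population)              # randomization breaks confounding
treatment, control = population[:500], population[500:]

# Toy data-generating process: doing A raises the probability of B
# from 0.30 to 0.50 (numbers invented for illustration).
b_treated = sum(random.random() < 0.50 for _ in treatment) / len(treatment)
b_control = sum(random.random() < 0.30 for _ in control) / len(control)

print(f"P(B | do A) = {b_treated:.2f}, P(B | no A) = {b_control:.2f}")
```

      Because group membership was assigned at random, a systematic gap between the two rates supports a causal reading, which is exactly what a purely correlational study cannot deliver.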

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 0) by Anonymous Coward on Saturday April 20 2019, @04:50AM (1 child)

        by Anonymous Coward on Saturday April 20 2019, @04:50AM (#832450)

        Those are all true for large scale sociology conundrums.

        However, there are areas where correlation is consistently taken as causation when a causation study would be more appropriate. For example, a lot of the studies about the effects of cannabis are suspect for this reason. I want to know what the downsides actually are if, say, I'm a cancer patient weighing it against opioids, or if I have anxiety and I'm weighing it against an SSRI. I'm not interested in mental illness being correlated, mostly because of the consistent dismissal that causation could run the other way, i.e. self-medication.

        (And I really would like to know that. I switched from [legal] cannabis to bupropion so I could quit smoking. I did not expect bupropion to actually be effective as an anti-depressant as well, so I'm pleasantly surprised that it also has that effect for me. Now I want to know whether bupropion causes hypertension or if it's merely correlated with hypertension, and I want to know whether cannabis causes mental illness or is merely correlated with it. I can't make an objective decision without causation being established.)

        • (Score: 2) by Thexalon on Monday April 22 2019, @07:34PM

          by Thexalon (636) on Monday April 22 2019, @07:34PM (#833498)

          Medical research usually runs into problems with (D): People are complicated, which makes effects hard to isolate.

          For example, is the effect from the cannabis, the opioids, or something else entirely: the level of sunlight and thus vitamin D, the pesticides used on what they had for dinner last Tuesday, etc.?

          --
          The only thing that stops a bad guy with a compiler is a good guy with a compiler.
  • (Score: 1, Informative) by Anonymous Coward on Friday April 19 2019, @09:02PM

    by Anonymous Coward on Friday April 19 2019, @09:02PM (#832293)

    Oh, because authors of scientific publications need to worry about how non-scientists will misinterpret them? That's not how the world works.