
posted by martyb on Friday April 19 2019, @06:34PM   Printer-friendly
from the significant-change dept.

In science, the success of an experiment is often determined by a measure called "statistical significance." A result is considered to be "significant" if the difference observed in the experiment between groups (of people, plants, animals and so on) would be very unlikely if no difference actually exists. The common cutoff for "very unlikely" is that you'd see a difference as big or bigger only 5 percent of the time if it wasn't really there — a cutoff that might seem, at first blush, very strict.
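To see what that cutoff means in practice, here is a minimal sketch using SciPy's two-sample t-test; the group measurements below are invented purely for illustration:

```python
# A minimal sketch of the p < 0.05 convention, using SciPy's two-sample t-test.
# The "control" and "treatment" measurements are invented for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=10.0, scale=2.0, size=30)    # one group of measurements
treatment = rng.normal(loc=11.0, scale=2.0, size=30)  # a second group, slightly shifted

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# The conventional rule: declare the difference "statistically significant"
# only if p <= 0.05, i.e. a difference at least this large would turn up in
# no more than 5% of experiments if the groups truly did not differ.
print("significant" if p_value <= 0.05 else "not significant")
```

Whether these made-up numbers land above or below 0.05 is beside the point; what matters is that a single threshold turns a continuous p-value into a pass/fail verdict.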

It sounds esoteric, but statistical significance has been used to draw a bright line between experimental success and failure. Achieving an experimental result with statistical significance often determines if a scientist's paper gets published or if further research gets funded. That makes the measure far too important in deciding research priorities, statisticians say, and so it's time to throw it in the trash.

More than 800 statisticians and scientists are calling for an end to judging studies by statistical significance in a March 20 comment published in Nature. An accompanying March 20 special issue of the American Statistician makes the manifesto crystal clear in its introduction: "'statistically significant' — don't say it and don't use it."

There is good reason to want to scrap statistical significance. But with so much research now built around the concept, it's unclear how — or with what other measures — the scientific community could replace it. The American Statistician offers a full 43 articles exploring what scientific life might look like without this measure in the mix.

Statistical Significance

Is it time for "p ≤ 0.05" to be abandoned or changed?


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Interesting) by jmorris on Friday April 19 2019, @08:50PM (1 child)

    by jmorris (4844) on Friday April 19 2019, @08:50PM (#832288)

    The problem of overdependence on dodgy use of statistics runs deeper than just using too big of a p-value cutoff.

    Start by reading William Briggs's writings on the subject: Classic Posts [wmbriggs.com]. Scroll down to the Probability & Statistics section and read a few at random. If you actually have a mind oriented toward science, you will lose a few hours there. It is worth it. You need not agree with everything there, but most of it is fascinating.

    And if you really want to have your worldview challenged, go read Thomas Carlyle's Chartism [google.com] for an unpopular take on the basic error behind most uses of statistics and charts.

  • (Score: 2, Insightful) by pTamok on Friday April 19 2019, @09:18PM

    by pTamok (3042) on Friday April 19 2019, @09:18PM (#832298)

    Part of the issue is people not understanding the tools they are using. The 'throw a dataset at a bunch of analysis programs and see what sticks' approach is used by far too many people.
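    A small simulation (mine, not the commenter's; simulated noise and SciPy's t-test) shows why that approach is so treacherous:

```python
# Simulated example of "see what sticks": run 100 comparisons on pure noise
# (both samples drawn from the same distribution) and count how many come out
# "significant" at p < 0.05. The data and the number of tests are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_tests = 100
false_positives = 0

for _ in range(n_tests):
    a = rng.normal(size=30)  # no real effect exists:
    b = rng.normal(size=30)  # both samples come from the same distribution
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} null comparisons were 'significant'")
# On average about 5 of 100 will pass the cutoff by chance alone, which is
# why reporting only the comparisons that "stuck" is not evidence of an effect.
```

    About five of the hundred comparisons will "stick" purely by chance, which is exactly what the 5% cutoff promises and exactly why fishing through many analyses and reporting the survivors proves nothing.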

    Just as many computer programs written by scientists to aid their work turn out to be badly written, so statistical analysis done by people who are experts in their field but have had little or no education in statistics often turns out to be flawed.

    The issue is not so much whether a p-value is less than 0.05, but whether the statistical analysis is correct, relevant, and contextually aware. I am not an expert in statistics. I know my ignorance of this topic is embarrassingly large, but at least I know that I should not opine on areas I am so profoundly ignorant in. Unfortunately, many researchers are not so self-aware.
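    To make "correct, relevant, and contextually aware" concrete, here is an illustration of my own (simulated skewed data, SciPy): the same samples can produce noticeably different p-values depending on whether the chosen test's assumptions fit.

```python
# Illustration (not from the comment) of how the analysis choice, not just the
# data, drives the p-value: on small, heavily skewed samples, a t-test and a
# rank-based Mann-Whitney test can give quite different p-values. Data simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.lognormal(mean=0.0, sigma=1.0, size=15)  # skewed (log-normal) samples
b = rng.lognormal(mean=0.5, sigma=1.0, size=15)

_, p_t = stats.ttest_ind(a, b)                              # assumes roughly normal data
_, p_u = stats.mannwhitneyu(a, b, alternative="two-sided")  # rank-based, no normality assumption

print(f"t-test p = {p_t:.3f}, Mann-Whitney p = {p_u:.3f}")
# Near the 0.05 line, the verdict can flip with the choice of test, so
# "p < 0.05" says little unless the test's assumptions actually fit the data.
```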

    Significant results should be reproducible. This seems a fairly basic requirement, yet studies of the reproducibility of research have found a worrying lack of it. E.g. Nature Human Behaviour: A manifesto for reproducible science [nature.com]