posted by martyb on Friday April 19 2019, @06:34PM
from the significant-change dept.

In science, the success of an experiment is often determined by a measure called "statistical significance." A result is considered to be "significant" if the difference observed in the experiment between groups (of people, plants, animals and so on) would be very unlikely if no difference actually exists. The common cutoff for "very unlikely" is that you'd see a difference as big or bigger only 5 percent of the time if it wasn't really there — a cutoff that might seem, at first blush, very strict.
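
To make the cutoff concrete, here is a minimal simulation sketch in Python (numpy and scipy are assumptions of this illustration, not something the article references): run many experiments in which the two groups come from the same distribution, and roughly 5 percent of them still clear the p < 0.05 bar.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_experiments = 10_000  # experiments in which the null is actually true
    n_per_group = 30
    false_positives = 0

    for _ in range(n_experiments):
        # Both groups drawn from the same distribution: no real difference.
        a = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
        b = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            false_positives += 1

    # Prints roughly 5%: that is what the 0.05 cutoff means under the null.
    print(f"flagged 'significant': {false_positives / n_experiments:.1%}")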

It sounds esoteric, but statistical significance has been used to draw a bright line between experimental success and failure. Achieving an experimental result with statistical significance often determines if a scientist's paper gets published or if further research gets funded. That makes the measure far too important in deciding research priorities, statisticians say, and so it's time to throw it in the trash.

More than 800 statisticians and scientists are calling for an end to judging studies by statistical significance in a March 20 comment published in Nature. An accompanying March 20 special issue of the American Statistician makes the manifesto crystal clear in its introduction: "'statistically significant' — don't say it and don't use it."

There is good reason to want to scrap statistical significance. But with so much research now built around the concept, it's unclear how — or with what other measures — the scientific community could replace it. The American Statistician offers a full 43 articles exploring what scientific life might look like without this measure in the mix.

Statistical Significance

Is it time for "p ≤ 0.05" to be abandoned or changed?


Original Submission

 
  • (Score: 2, Interesting) by unhandyandy (4405) on Saturday April 20 2019, @02:49AM (#832432)

    Perhaps the problem is that, after almost a century, the number of experiments performed today is several orders of magnitude greater than when 0.05 was enshrined as the right p value. So inevitably, when say 1,000 null experiments are performed, about 50 of them will appear to have "statistical significance" just due to chance.
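
    A quick back-of-the-envelope check in Python (an illustrative sketch, assuming each of the 1,000 experiments tests a true null at the conventional 0.05 cutoff):

    alpha = 0.05  # the conventional significance cutoff
    n = 1000      # independent experiments with no real effect

    expected_false_positives = alpha * n    # 50.0 flukes on average
    p_at_least_one = 1 - (1 - alpha) ** n   # chance of at least one fluke

    print(expected_false_positives)  # 50.0
    print(p_at_least_one)            # ~1.0, i.e. all but certain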

  • (Score: 0) by Anonymous Coward on Saturday April 20 2019, @11:53AM (#832513)

    Then you would also expect an increase in "good" studies. What has happened instead is an increase in crappy studies, to the point that 50-90% cannot even be replicated. Of the rest, most are probably misinterpreted too.