
posted by on Saturday February 04 2017, @09:51AM
from the if-at-first-you-don't-succeed dept.

Humanity would understand very little about cancer, and be hard-pressed to find cures, without scientific research. But what if, when teams recreated each other's research, they didn't arrive at the same result?

That's what the Center for Open Science's Reproducibility Project: Cancer Biology is attempting to find out by redoing parts of 50 important cancer studies and comparing their results. They released their first five replications today, and it turns out that not all of the data match up. At least once in every paper, a result reported as statistically significant (the way scientists calculate whether an effect is due to more than chance alone) was not statistically significant in the replicated study. In two of the cases, the differences between the initial and replicated studies were even more striking, giving the Center for Open Science researchers cause for concern.

"I was surprised by the results because of all that homework that we did" to make sure the studies were being reproduced accurately, Tim Errington, Project Manager at the Center for Open Science told Gizmodo. "We thought we were crossing every T and dotting every I... Seeing some of these experimental systems not behave the same was something I was not expecting to happen."


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Saturday February 04 2017, @11:20AM (#462813)

    not statistically significant

    I wish they would see whether the results were similar or not, rather than using statistical significance as the metric for reproducibility.
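    One rough sketch of what a similarity-based check could look like (illustrative Python with invented numbers, not the project's actual protocol): compare the replication's effect estimate against the original study's 95% confidence interval instead of asking whether both cross p < 0.05.

        # Illustrative: a "similarity" criterion rather than a significance threshold.
        # The effect estimates and standard error below are made up.
        def ci_95(effect, standard_error):
            """Approximate 95% confidence interval for an effect estimate."""
            half_width = 1.96 * standard_error
            return effect - half_width, effect + half_width

        original_effect, original_se = -2.0, 0.6   # hypothetical mean difference and SE
        replication_effect = -1.1                  # hypothetical replication estimate

        lo, hi = ci_95(original_effect, original_se)
        consistent = lo <= replication_effect <= hi
        print(f"original 95% CI: ({lo:.2f}, {hi:.2f})")
        print(f"replication estimate {replication_effect:.2f} is "
              f"{'inside' if consistent else 'outside'} the original interval")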

  • (Score: 1) by khallow (3766) Subscriber Badge on Saturday February 04 2017, @02:16PM (#462841) Journal

    I wish they would see whether the results were similar or not, rather than using statistical significance as the metric for reproducibility.

    They are. The original research used statistical significance, so a faithful reproduction of that work also has to use it.

    • (Score: 1) by shrewdsheep (5215) on Saturday February 04 2017, @02:28PM (#462842)

      The analysis should follow through in this way, sure enough. However, reproducibility should not be defined by comparing the significance of p-values of the studies. If the addition of the second study strengthens the conclusions of the first paper, that should count as reproduction, and this is what the OP meant, I believe. Also keep in mind the multiple testing problem involved in the whole enterprise.
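      As a rough illustration of the "second study strengthens the first" idea, here is a minimal fixed-effect (inverse-variance) meta-analysis in Python; the effect sizes and standard errors are invented, not taken from the replication project.

          # Illustrative: pool an original study and its replication by
          # inverse-variance weighting. A combined estimate can stay clearly
          # nonzero even when the replication on its own does not cross a
          # significance threshold.
          import math

          studies = [
              ("original",    -2.0, 0.6),   # (label, effect estimate, standard error)
              ("replication", -1.1, 0.7),
          ]

          weights = [1.0 / se**2 for _, _, se in studies]
          pooled_effect = sum(w * eff for w, (_, eff, _) in zip(weights, studies)) / sum(weights)
          pooled_se = math.sqrt(1.0 / sum(weights))

          z = pooled_effect / pooled_se
          p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal approximation

          print(f"pooled effect = {pooled_effect:.2f} (SE {pooled_se:.2f}), p = {p_value:.4f}")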

      • (Score: 1) by khallow (3766) Subscriber Badge on Sunday February 05 2017, @12:44AM (#462978) Journal

        However, reproducibility should not be defined by comparing the significance of p-values of the studies.

        Sorry, those p-values are a big part of the studies. And where's the funding to aggressively push forward on a hundred different studies? Merely replicating the studies (which is what's being done here) is pretty hard on its own.