
posted by LaminatorX on Monday December 29 2014, @04:26AM
from the working-theory dept.

PSMag is running an article that addresses some of the issues discussed here on SoylentNews regarding the trustworthiness of published science.

Last January, the two top-ranking officials at the National Institutes of Health wrote that “the checks and balances that once ensured scientific fidelity have been hobbled” by a growing tendency to cut corners. They announced that the NIH is planning “significant interventions” to ensure that we can trust the results that are published.

Several proposals for reproducing the results of important science projects are on the table. One, from the Science Exchange Network, would have scientists submit their studies to a third party for replication. Submitted experiments are matched with an appropriate, verified Science Exchange lab, which would (for a fee) reproduce the experiments.

The problems with this approach are many. While replication of results is desirable, replication of exact experiments is a waste of time and money, according to Canada's National Research Council. Yet results often can't be reproduced without exact replication of the experiment. One such case came down to how a solution was stirred at a critical stage.

More often, nobody tries to reproduce results at all. Even when researchers do try, they rarely publish an attempt to replicate someone else's experiment, and are even less likely to publish a failed attempt. Therefore, it is unknown what fraction of published studies is not reproducible.

The article goes on to discuss problems with published results, retractions, and the trustworthiness of published science in general. It is an interesting read that doesn't get bogged down in minutiae.

Related Stories

Tough Year for Science

In its year-end review, The Scientist is carrying two stories that trumpet the bad news in science over the last year.

The first lists the Top Ten Retractions of 2014, which seems like more than in previous years.

The retractions include:

  • The STAP stem cell paper retractions from Nature
  • Rabbit blood samples spiked with human blood to make it look like an HIV vaccine was working
  • A “peer review and citation ring” that got 60 articles yanked
  • 120 bogus papers produced by a random text generator

In addition, there was a list of the top science scandals of the year, some of which are included in the above, but also major containment issues at US government labs, including the discovery of undocumented pathogens in questionable storage.

It wasn't all bad news: a third story listed their nominations for the year's greatest breakthroughs.

Regardless of what we hear in the popular press, it is interesting to see what scientists themselves find most troublesome in their various fields. And it is interesting to note that many of the issues revolve around the review and publishing process.

  • (Score: 1, Informative) by Anonymous Coward on Monday December 29 2014, @05:21AM

No one really cares about reproducibility of results until they need those results for their own research. And no, most of the time they will not be replicating an experiment, but trying to get similar results to those described in the original paper in the context of their own experiment.

Bad papers are everywhere, so sometimes you end up wasting time and effort because someone else failed to understand what they were writing about. Needless to say, all papers from untrusted sources (judged by the reputation of the institution) are treated with suspicion until corroborated by other, similar papers. That's something you learn quickly in research.

So duplicating an experiment for the sake of duplicating an experiment is a waste of time. You only ever want to do this when someone claims "Cold Fusion" or a similar breakthrough - people will try to duplicate and replicate those. If no one finds your research useful, then it doesn't matter whether it is reproducible. If your research is useful, it will generally be reproduced by someone somewhere, and if they fail, they will want to know why.

    The article goes on to discuss problems with published results, retractions, and the trustworthiness of published science in general.

Yes, almost no one publishes negative results. But if someone publishes BS on an important subject, there will be pressure from others to retract it. A retraction is like a negative result published by the authors themselves, and it's an admission of a screw-up.

  • (Score: 3, Interesting) by MichaelDavidCrawford on Monday December 29 2014, @05:42AM

    While there is some point to repeating the first experiment, it's quite a lot better to reproduce its results through an entirely different experiment.

    Consider that Millikan won the Physics Nobel for demonstrating that electric charge is quantized. He didn't get the prize for determining its actual value; he was off by miles as a result of an error in his measurement of the viscosity of air.
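(For the curious: the viscosity error gets amplified. In the oil-drop analysis, the drop radius comes out of Stokes' law and the charge scales with the cube of the radius, so the inferred charge goes as the 3/2 power of the viscosity. A back-of-the-envelope sketch, not Millikan's full analysis:)

```latex
r = \sqrt{\frac{9\,\eta\, v_f}{2 g \,(\rho_{\mathrm{oil}} - \rho_{\mathrm{air}})}}
\qquad\Longrightarrow\qquad
q \;\propto\; r^{3} \;\propto\; \eta^{3/2},
\qquad
\frac{\Delta q}{q} \;\approx\; \frac{3}{2}\,\frac{\Delta\eta}{\eta}.
```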

    I don't know how the proper value of the electron charge was measured, but hypothetically one could perform a somewhat similar experiment with an aluminum pellet in a strong magnetic field, rather than an oil drop falling through air.

While aluminum is not attracted to magnets, it experiences a viscous-like resistance when it moves across magnetic field lines, as a result of the electrical currents induced in the body of the aluminum by the magnet.

With some thought I expect I could come up with something a lot better, but that's what I came up with off the top of my head.

The point is that the explanation for a published result might be a phenomenon that the first researcher failed to take into account. For example, the ultimate cause of what at first appeared to be a superluminal neutrino velocity turned out to be a bad cable. It is for that specific reason that I was taught to record the serial numbers of all the equipment I used in my undergraduate physics experiments at Caltech.

    The big problem with something like a cable is that it won't have a serial number. Perhaps there is corrosion on a connector. If you put the cable back in the supply cabinet, you might never figure out why you got the result you did.

    --
    Yes I Have No Bananas. [gofundme.com]
    • (Score: 3, Interesting) by physicsmajor on Monday December 29 2014, @06:59AM

      I agree, the CNRC is completely wrong, but I'm going to make a slightly different point.

      From the friendly summary, "Often, results can't be reproduced, without exact replication of the experiment. One such case came down to how a solution was stirred at a critical stage."

      This is exactly why we need all such experiments to be reproduced. Before publication! Because all too often, the solution stirring method or equivalent is not specified in the original paper! The authors may well have attributed their results to the Coriolis effect, when their stirrers always went clockwise. Another lab can't reproduce with counterclockwise stirrers. The original study is not science, and is in fact completely worthless to society because it can never be reproduced.

      The whole business reminds me of that activity where you have kids try to teach a truly naive individual how to make a PB&J sandwich. It's really freaking hard, if they're only doing precisely what you say. Yet this is exactly the problem; reviewers see stuff that "looks good" and it goes into print. Without even making a token effort for reproduction. Even in fields where it should be entirely possible to get and inspect deterministic examples and source code, like image processing, it's rarely done.

      Yes, it'll be hard. Too freaking bad on losing the corner cutting races. The current state is an inexcusable waste of resources.
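And in those code-based fields, even a token reproducibility check is cheap: pin the random seed and fingerprint the inputs so a reviewer can re-run the analysis bit-for-bit. A minimal Python sketch (the experiment function is a made-up stand-in, not any real study's code):

```python
import hashlib
import random

def run_experiment(data, seed=42):
    """Hypothetical analysis whose result depends on randomness."""
    rng = random.Random(seed)          # pinned seed: same draws every run
    sample = sorted(rng.sample(data, 5))
    return sum(sample) / len(sample)

data = list(range(100))
# Fingerprint the inputs so a reviewer can confirm they have the same data.
fingerprint = hashlib.sha256(repr(data).encode()).hexdigest()[:12]
result = run_experiment(data, seed=42)
print(f"inputs sha256[:12]={fingerprint}  result={result}")
# A re-run with the same seed and data must reproduce the number exactly.
assert run_experiment(data, seed=42) == result
```

Publishing the seed and the input hash alongside the result costs a few lines, and it turns "trust me" into something a reviewer can actually check.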

      • (Score: 2, Interesting) by MichaelDavidCrawford on Monday December 29 2014, @10:40AM

In one version of that exercise, two officers stood back to back. One of them explained to the other how to change the batteries in a flashlight. The one with the flashlight was told to follow the instructions to the letter.

        "Unscrew the reflector from the body of the flashlight."

        "OK."

        The batteries fall on the floor, because he wasn't instructed to point the flashlight upward.

        Richard Feynman told us that it was quite difficult to make superconductors as described by their original inventor. It wasn't just the chemical composition, but the metallurgy - the temperatures the material was subjected to, how the wire was drawn and so on.

        That was a problem as well when "high temperature" (actually liquid nitrogen) superconductors were created.

In both cases, though, I don't think the experimenters were negligent; they just didn't understand what they had done that was special.

        --
        Yes I Have No Bananas. [gofundme.com]
        • (Score: 1) by WillAdams on Monday December 29 2014, @02:26PM

Yep, writing instructions can be quite hard --- I've been struggling with that in the documentation for the Shapeoko:

          http://docs.shapeoko.com/ [shapeoko.com]

          The diagrams are interactive, and allow one to highlight parts, even when hidden.

      • (Score: 2) by frojack on Monday December 29 2014, @08:55PM

        I agree, the CNRC is completely wrong,

You clearly haven't read what the CNRC said. Far from being completely wrong, what that paper says completely supports your "slightly different" point.

        Attempting to exactly reproduce an experiment step for step, accident by accident, is pointless. You end up with TWO papers that suggest results which are not generally reproducible in the real world. You end up verifying an accident. You thereby nudge something from an unverified result to "settled science".

If stirred instead of shaken makes THAT much difference, you want to know that up front. In the mentioned case, neither lab thought the stirring made any difference, and it took them TWO years to track down the discrepancy.

The paper makes the case that you should try to reproduce the results, not the exact experiment. That is the quickest way to knock down a faulty experiment, or to prop up a good one.

        --
        No, you are mistaken. I've always had this sig.
  • (Score: 4, Interesting) by gman003 on Monday December 29 2014, @06:24AM

The core problem with studies never being reproduced is that there is so little reward for doing so that most professionals never have cause to even make the attempt. Thus there are many running new studies, but few revisiting old ones.

    However, postgraduate degrees are, naturally, far rarer than lesser degrees. We can use this to our advantage.

    A doctorate or master's thesis is supposed to be proof that you are capable not just of mastering the known science of the field, but are able to extend it, to discover wholly new things. A bachelor's thesis is less - simply an intensive study of a particular subject in the field.

I propose, then, that attempting to reproduce an existing but unverified study become a requirement for bachelor's (and perhaps master's) degrees, in a manner similar to the thesis requirement.

    I recognize that many studies are difficult or even impossible to reproduce with a student's means. The Higgs discovery, for example, would be completely impossible. However, many studies should be reproducible, at least in part. This would at least make a serious dent in the problem.

    • (Score: 2) by opinionated_science on Monday December 29 2014, @01:14PM

      The problem is most PhD theses are the work the supervisor needs to do to stay employed.

Repeating experiments is always valid, because it is simply not possible to know in advance whether a new measurement technique may yield a different answer.

      In the field of molecular biology I will wager the increased use of standardized tests (e.g. commercial grade) has improved reproducibility...