
posted by CoolHand on Tuesday May 26 2015, @06:03PM
from the return-to-mysticism dept.

Richard Horton writes that a recent symposium on the reproducibility and reliability of biomedical research discussed one of the most sensitive issues in science today: the idea that something has gone fundamentally wrong with science (PDF), one of our greatest human creations. The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession with pursuing fashionable trends of dubious importance, science has taken a turn towards darkness. According to Horton, editor-in-chief of The Lancet, a United Kingdom-based medical journal, the apparent endemicity of bad research behaviour is alarming. In their quest to tell a compelling story, scientists too often sculpt data to fit their preferred theory of the world, or retrofit hypotheses to fit their data.
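To see how "small sample sizes and tiny effects" can add up to "perhaps half untrue", here is a minimal back-of-envelope simulation in Python of the standard low-power-plus-selective-publication argument. The prior, power, and significance values are illustrative assumptions, not figures from Horton's piece.

    # Back-of-envelope simulation: small samples (low power) plus a
    # publish-only-significant filter can make most published findings false.
    # All parameter values below are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_studies = 100_000
    prior_true = 0.10  # assumed fraction of tested hypotheses that are really true
    power = 0.20       # assumed chance a small study detects a real, tiny effect
    alpha = 0.05       # conventional false-positive rate

    truth = rng.random(n_studies) < prior_true
    significant = np.where(truth,
                           rng.random(n_studies) < power,   # true positives
                           rng.random(n_studies) < alpha)   # false positives

    # Suppose only "significant" results reach the literature.
    print(f"Share of published findings that are true: {truth[significant].mean():.0%}")
    # With these assumptions, roughly 30% -- i.e. most of this simulated
    # literature is untrue, consistent with the "perhaps half" worry.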

Can bad scientific practices be fixed? Part of the problem is that no one is incentivized to be right. Instead, scientists are incentivized to be productive and innovative. Tony Weidberg says that, following several high-profile errors, the particle physics community now invests great effort in intensive checking and rechecking of data prior to publication. By filtering results through independent working groups, physicists are encouraged to criticize. Good criticism is rewarded. The goal is a reliable result, and the incentives for scientists are aligned around this goal. "The good news is that science is beginning to take some of its worst failings very seriously," says Horton. "The bad news is that nobody is ready to take the first step to clean up the system."

[Editor's Comment: Original Submission]

  • (Score: 0) by Anonymous Coward on Wednesday May 27 2015, @03:31PM (#188654)

    The problem seemed to be that the scientists' preconceptions were altering the outcome of their study. Trying to "fix" that would require carrying out your experiments in the absence of a hypothesis to be tested, which goes against one of the core steps in the scientific method.

    The name for that is collecting data... how is that not valid science? I'd say that if the only hypothesis you can come up with is "variable A is correlated somehow with variable B", why bother? You aren't going to be able to rule out all the possible explanations. The data collection part is much more valuable than "testing" some vague hypothesis that won't prove anything.

  • (Score: 2) by gringer (962) on Thursday May 28 2015, @03:56AM (#188938)

    The name for that is collecting data... how is that not valid science?

    It's far too easy to collect data to fit your own preconceived ideas. Yes, I agree that data collection is part of the scientific method, but it's not something that can be done in a perfect fashion.

    --
    Ask me about Sequencing DNA in front of Linus Torvalds [youtube.com]
    • (Score: 0) by Anonymous Coward on Thursday May 28 2015, @04:23AM (#188948)

      It's far too easy to collect data to fit your own preconceived ideas.

      That's fine though, since other people will have different ideas to compare with the data. As long as the data is reported in enough detail, I don't see where the problem arises.

      You need to stamp-collect before it is possible to devise a real theory; I think you agree, and I can see little room for disagreement on that point. My second claim is that testing a theory that predicts something vague, like "A is correlated with B", doesn't really add anything to the discussion. If you don't detect a correlation you can just conclude "need more data"; if you do see one, there will be any number of alternatives to consider. So it also seems straightforward that testing such theories is a fool's errand.
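      As a small illustration of why a bare "A is correlated with B" test is so weak, here is a Python sketch; the sample size, the number of variable pairs, and the 0.05 threshold are arbitrary choices for the demo. With small samples, pure noise "confirms" such hypotheses at exactly the rate the significance threshold allows.

        # Demo: vague correlation hypotheses are cheap to "confirm" with noise.
        # n, pairs, and the 0.05 threshold are arbitrary demo values.
        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(1)
        n, pairs = 20, 1000  # small samples, many unrelated variable pairs
        false_hits = sum(
            pearsonr(rng.standard_normal(n), rng.standard_normal(n))[1] < 0.05
            for _ in range(pairs)
        )
        print(f"{false_hits} of {pairs} pure-noise pairs show p < 0.05")
        # Expect about 50. A "hit" rules nothing out, and a miss just invites
        # "need more data", which is the parent's point.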