
posted by CoolHand on Tuesday May 26 2015, @06:03PM
from the return-to-mysticism dept.

Richard Horton writes that a recent symposium on the reproducibility and reliability of biomedical research discussed one of the most sensitive issues in science today: the idea that something has gone fundamentally wrong with science (PDF), one of our greatest human creations. The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness. According to Horton, editor-in-chief of The Lancet, a United Kingdom-based medical journal, the apparent endemicity of bad research behaviour is alarming. In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world or retrofit hypotheses to fit their data.

Can bad scientific practices be fixed? Part of the problem is that no one is incentivized to be right. Instead, scientists are incentivized to be productive and innovative. Tony Weidberg says that, following several high-profile errors, the particle physics community now invests great effort in intensive checking and rechecking of data prior to publication. By filtering results through independent working groups, physicists are encouraged to criticize. Good criticism is rewarded. The goal is a reliable result, and the incentives for scientists are aligned around this goal. "The good news is that science is beginning to take some of its worst failings very seriously," says Horton. "The bad news is that nobody is ready to take the first step to clean up the system."
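
One concrete practice behind this kind of checking in experimental physics is "blind analysis": a hidden offset is added to the quantity being measured and removed only after the analysis procedure has been frozen, so nobody can tune their selection toward a preferred answer. A minimal Python sketch of the idea, with invented data and an invented offset scale:

    import numpy as np

    rng = np.random.default_rng(42)

    # Toy "measurements" of some physical quantity (all numbers invented).
    data = rng.normal(loc=5.0, scale=0.3, size=1000)

    # Blinding: add a secret offset so analysts cannot tune selection cuts
    # toward a preferred answer; the offset stays hidden until the
    # analysis procedure is frozen.
    secret_offset = rng.uniform(-1.0, 1.0)
    blinded = data + secret_offset

    # Analysts develop and freeze their procedure on the blinded data...
    blinded_estimate = blinded.mean()
    stat_error = blinded.std(ddof=1) / np.sqrt(len(blinded))

    # ...and only then is the offset subtracted to reveal the result.
    result = blinded_estimate - secret_offset
    print(f"measurement: {result:.3f} +/- {stat_error:.3f}")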


[Editor's Comment: Original Submission]

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Informative) by gringer on Tuesday May 26 2015, @07:33PM

    by gringer (962) on Tuesday May 26 2015, @07:33PM (#188216)

    Have a look at this long article to see a few examples of how difficult it is to fix this problem:

    http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/ [slatestarcodex.com]

    My favourite is section IV:

    The idea was to plan an experiment together, with both of them agreeing on every single tiny detail. They would then go to a laboratory and set it up, again both keeping close eyes on one another. Finally, they would conduct the experiment in a series of different batches. Half the batches (randomly assigned) would be conducted by Dr. Schlitz, the other half by Dr. Wiseman. Because the two authors had very carefully standardized the setting, apparatus and procedure beforehand, “conducted by” pretty much just meant greeting the participants, giving the experimental instructions, and doing the staring.

    The results? Schlitz’s trials found strong evidence of psychic powers, Wiseman’s trials found no evidence whatsoever.

    Take a second to reflect on how this makes no sense. Two experimenters in the same laboratory, using the same apparatus, having no contact with the subjects except to introduce themselves and flip a few switches – and whether one or the other was there that day completely altered the result. For a good time, watch the gymnastics they have to do in the paper to make this sound sufficiently sensical to even get published. This is the only journal article I’ve ever read where, in the part of the Discussion section where you’re supposed to propose possible reasons for your findings, both authors suggest maybe their co-author hacked into the computer and altered the results.

    The problem seemed to be that the scientists' preconceptions were altering the outcome of their study. Trying to "fix" that would require carrying out experiments in the absence of a hypothesis to be tested, which goes against one of the core steps in the scientific method.
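
    For what it's worth, the batch design at least makes that experimenter effect easy to quantify after the fact. Here's a sketch of a permutation test on invented per-batch hit rates (the real paper's numbers aren't reproduced here), asking whether who ran a batch predicts its outcome:

        import numpy as np

        rng = np.random.default_rng(0)

        # Invented per-batch hit rates for each experimenter; the question is
        # whether who ran a batch predicts its outcome.
        schlitz = np.array([0.62, 0.58, 0.65, 0.60, 0.63, 0.59])
        wiseman = np.array([0.49, 0.52, 0.47, 0.51, 0.50, 0.48])
        observed_gap = schlitz.mean() - wiseman.mean()

        # Permutation test: if experimenter identity were irrelevant, shuffling
        # the batch labels should produce a gap this large reasonably often.
        pooled = np.concatenate([schlitz, wiseman])
        n, trials, extreme = len(schlitz), 10_000, 0
        for _ in range(trials):
            rng.shuffle(pooled)
            if abs(pooled[:n].mean() - pooled[n:].mean()) >= abs(observed_gap):
                extreme += 1

        print(f"observed gap = {observed_gap:.3f}, permutation p = {extreme / trials:.4f}")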

    The conclusion of the writer of the linked article is that the majority of science wouldn't stand up to the rigour required to rule out obvious quackery (parapsychology), and that we are chasing our own tails when we carry out meta-analyses that try to make good science out of an aggregation of bad science:

    The highest level of the Pyramid of Scientific Evidence is meta-analysis. But a lot of meta-analyses are crap. This meta-analysis got p < 1.2 * 10^-10 for a conclusion I'm pretty sure is false, and it isn’t even one of the crap ones. Crap meta-analyses look more like this, or even worse.

    How do I know it’s crap? Well, I use my personal judgment. How do I know my personal judgment is right? Well, a smart well-credentialed person like James Coyne agrees with me. How do I know James Coyne is smart? I can think of lots of cases where he’s been right before. How do I know those count? Well, John Ioannidis has published a lot of studies analyzing the problems with science, and confirmed that cases like the ones Coyne talks about are pretty common. Why can I believe Ioannidis’ studies? Well, there have been good meta-analyses of them. But how do I know if those meta-analyses are crap or not? Well…

    [Image: a dragon swallowing its own tail, labelled: Personal Opinion -> Expert Opinion -> Case Reports -> Cohort Studies -> Randomised Controlled Trials -> Meta-Analysis -> Personal Opinion -> ...]
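
    The pooling step itself is purely mechanical, which is exactly why garbage in yields an impressively tiny garbage p-value out. A sketch of Stouffer's Z-score method with invented p-values standing in for ten individually weak, biased studies:

        import numpy as np
        from scipy.stats import norm

        # Invented one-sided p-values from ten individually unconvincing
        # studies, each nudged just under 0.05 by the biases discussed above.
        p_values = [0.04, 0.03, 0.045, 0.02, 0.04, 0.035, 0.025, 0.04, 0.03, 0.045]

        # Stouffer's method: convert each p to a Z-score, sum, and rescale.
        z_scores = norm.isf(p_values)
        combined_z = z_scores.sum() / np.sqrt(len(p_values))
        combined_p = norm.sf(combined_z)

        # The pooled p comes out astronomically small even though no single
        # study is strong: aggregation amplifies bias as readily as signal.
        print(f"combined Z = {combined_z:.2f}, pooled p = {combined_p:.2e}")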

    Once you get away from the low-hanging fruit (and sometimes even then), science is a minefield of guesswork.

    --
    Ask me about Sequencing DNA in front of Linus Torvalds [youtube.com]
  • (Score: 2) by FatPhil on Wednesday May 27 2015, @11:21AM

    by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Wednesday May 27 2015, @11:21AM (#188553) Homepage
    Thanks for that link - an interesting read. However, I'm perturbed by the repeated links to lesswrong, which is, alas, thoroughly unreliable. (The guy behind it thinks 0 and 1 are not valid values for probabilities, for example, just because he can't perform certain transformations on such numbers.) That makes me think I can't take this guy at his word either. /Nullius in verba/ indeed.
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
  • (Score: 0) by Anonymous Coward on Wednesday May 27 2015, @03:31PM

    by Anonymous Coward on Wednesday May 27 2015, @03:31PM (#188654)

    The problem seemed to be that the scientists' preconceptions were altering the outcome of their study. Trying to "fix" that would require carrying out experiments in the absence of a hypothesis to be tested, which goes against one of the core steps in the scientific method.

    The name for that is collecting data... how is that not valid science? I'd say if the only hypothesis you can come up with is "variable A is somehow correlated with variable B", why bother? You aren't going to be able to rule out all the possible explanations. Collecting the data is much more valuable than "testing" some vague hypothesis that won't prove anything.

    • (Score: 2) by gringer on Thursday May 28 2015, @03:56AM

      by gringer (962) on Thursday May 28 2015, @03:56AM (#188938)

      The name for that is collecting data... how is that not valid science?

      It's far too easy to collect data to fit your own preconceived ideas. Yes, I agree that data collection is part of the scientific method, but it's not something that can be done in a perfect fashion.
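
      One well-known way this happens is optional stopping: keep collecting and re-testing until the result looks right. A sketch on pure noise (sample sizes and run counts invented):

          import numpy as np
          from scipy.stats import ttest_1samp

          rng = np.random.default_rng(7)

          # Optional stopping: draw from pure noise, re-test after every new
          # observation, and stop as soon as p < 0.05.
          def stops_early(max_n=100):
              data = list(rng.normal(size=5))
              while len(data) < max_n:
                  data.append(rng.normal())
                  if ttest_1samp(data, 0.0).pvalue < 0.05:
                      return True
              return False

          runs = 500
          hits = sum(stops_early() for _ in range(runs))
          # Far more than the nominal 5% of runs end in "significance",
          # even though there is nothing to find.
          print(f"'significant' runs on pure noise: {hits / runs:.0%}")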

      --
      Ask me about Sequencing DNA in front of Linus Torvalds [youtube.com]
      • (Score: 0) by Anonymous Coward on Thursday May 28 2015, @04:23AM

        by Anonymous Coward on Thursday May 28 2015, @04:23AM (#188948)

        It's far too easy to collect data to fit your own preconceived ideas.

        That's fine though, since other people will have different ideas to compare with the data. As long as the data is reported in enough detail I don't see where the problem arises.

        You need to stamp-collect before it is possible to devise a real theory; I think you agree, and I can see little room for disagreement on that point. My second claim is that testing a theory which predicts something as vague as "A is correlated with B" doesn't really add anything to the discussion. If you don't detect a correlation, you can just conclude "need more data"; if you do see one, there will be any number of alternative explanations to consider. So it also seems straightforward that testing such theories is a fool's errand.
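
        That "need more data" ending is easy to reproduce: with a weak true correlation and a small sample, the test is usually inconclusive either way. A sketch with an invented effect size and sample size:

            import numpy as np
            from scipy.stats import pearsonr

            rng = np.random.default_rng(1)

            # A weak but real correlation between A and B (r around 0.15),
            # observed through a small sample.
            n = 30
            a = rng.normal(size=n)
            b = 0.15 * a + rng.normal(size=n)

            r, p = pearsonr(a, b)
            # With n = 30 this usually misses p < 0.05, so the vague hypothesis
            # "A is correlated with B" ends in "need more data"; and a
            # significant r here still wouldn't say *why* they correlate.
            print(f"r = {r:.2f}, p = {p:.2f}")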

  • (Score: 2) by darkfeline on Wednesday May 27 2015, @10:31PM

    by darkfeline (1030) on Wednesday May 27 2015, @10:31PM (#188813) Homepage

    Or maybe psychic powers only work in the presence of people who possess "amplifier" powers?

    I'm not a proponent of parapsychology, but there's still a hell of a lot we don't know, and the scientific method works.

    --
    Join the SDF Public Access UNIX System today!