
posted by Fnord666 on Sunday December 23 2018, @03:55PM
from the Science-Interpretation-Guide dept.

https://www.bmj.com/content/363/bmj.k5094

https://www.npr.org/sections/health-shots/2018/12/22/679083038/researchers-show-parachutes-dont-work-but-there-s-a-catch

A study has been done, and the surprising result is that parachutes are no more effective than a backpack in preventing injuries when jumping out of an airplane.

It's "common sense" that parachutes work, so it has been a neglected field of science. This surprising and counter-intuitive result is an excellent example of the importance of doing science.

... or maybe it's a perfect example of how top-line study headlines can be misrepresentative, especially when portrayed by the mass media, and of how understanding study scope and methodology is important.


Original Submission

 
  • (Score: 1) by khallow (3766) Subscriber Badge on Tuesday December 25 2018, @06:18AM (#778274) Journal (5 children)

    > From an outside perspective, your posts just appear dismissive. The kind of dismissive attitude that people use to make themselves feel superior to those they put down, or to appear higher-status because of their cynicism.

    You have a point to that? Let's give you an inside perspective [economist.com]. For example:

    > Too many of the findings that fill the academic ether are the result of shoddy experiments or poor analysis (see article). A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic. Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three-quarters of papers in his subfield are bunk. In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties.

    Sorry, your "outside perspective" is ignorant. There are deep, decades-old problems in most fields of science. It's not going to get better because someone whines that the criticism is presented in an imaginary, dismissive manner, or, as the earlier AC whines, because few who are part of the problem will listen to the criticism. All we can do at this point is spread awareness.

    As to Null Hypothesis Significance Testing, the key thing to remember is that it is a tool for finding initial hypotheses and developing models almost from scratch. If you're still using it decades after you should have found those hypotheses and models, as a number of fields are, then you're doing something very wrong. If your research is considered normal despite being NHST on decades-old fields, then the field itself is doing something wrong.

  • (Score: 0) by Anonymous Coward on Tuesday December 25 2018, @08:45AM (#778288) (4 children)

    You are coming around, but still think there is some validity to NHST. There isn't.

    It is as scientific as praying to think of the right answer.

    • (Score: 1) by khallow (3766) Subscriber Badge on Tuesday December 25 2018, @03:33PM (#778331) Journal (3 children)

      My position [soylentnews.org] hasn't changed.

      I believe you misinterpret the loyal opposition here. The point of NHST is to do science in situations where you have a pile of data and don't know enough to do the usual hypothesis and model building. No one is arguing that p-hacking and other failure modes of NHST don't happen, but the technique has a legitimate use.

      What's relevant here is that NHST is supposed to be a temporary technique. You mine your data, find possible correlations, and build models from there. You shouldn't use NHST forever, both because of its flaws - the p-hacking trap and its natural inefficiency - and because by then you supposedly have models to test. The growing use of NHST over the past century indicates that a number of fields simply aren't progressing on to model building, and are instead stalling at the NHST stage.
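
      To make that concrete, here is a minimal sketch of the exploratory use in Python. The synthetic data, variable names, and the Bonferroni-corrected 0.05 cutoff are all illustrative assumptions, not anything from the thread:

        import numpy as np
        from scipy import stats

        # A synthetic "pile of data": 200 samples, 50 candidate predictors,
        # where only predictor 0 actually relates to the outcome.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 50))
        y = 2.0 * X[:, 0] + rng.normal(size=200)

        # Exploratory NHST: test each predictor against the null of "no correlation".
        # The correction guards against the p-hacking trap of testing 50 things at once.
        alpha = 0.05
        candidates = []
        for j in range(X.shape[1]):
            r, p = stats.pearsonr(X[:, j], y)
            if p < alpha / X.shape[1]:
                candidates.append((j, r, p))

        # The survivors are hypotheses to build models around - not conclusions.
        for j, r, p in candidates:
            print(f"candidate predictor {j}: r={r:.2f}, p={p:.2g}")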

      I will agree that if you're merely interested in the appearance of doing science rather than actually making progress, then NHST is a great technique for looking busy. So heavy, long-term use of NHST is a warning sign that we are doing things seriously wrong.

      • (Score: 0) by Anonymous Coward on Tuesday December 25 2018, @06:22PM (#778381) (2 children)

        > The point of NHST is to do science in situations where you have a pile of data and don't know enough to do the usual hypothesis and model building.

        This doesn't explain what you think the NHST step is supposed to contribute. You don't do NHST here. You describe/explore the data or clean it and throw it into some machine learning algo (depending on the goal), where you choose a model based on out-of-sample predictive skill.
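
        For instance, a minimal sketch of that workflow - the two candidate models and the synthetic data are illustrative choices, not a prescription:

          import numpy as np
          from sklearn.ensemble import RandomForestRegressor
          from sklearn.linear_model import LinearRegression
          from sklearn.model_selection import cross_val_score

          # Synthetic stand-in for the cleaned pile of data.
          rng = np.random.default_rng(1)
          X = rng.normal(size=(300, 10))
          y = X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=300)

          # Choose between models purely by out-of-sample predictive skill
          # (5-fold cross-validation); no significance test anywhere.
          for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
              scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
              print(type(model).__name__, "mean out-of-sample MSE:", -scores.mean())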

        • (Score: 1) by khallow (3766) Subscriber Badge on Tuesday December 25 2018, @06:43PM (#778382) Journal (1 child)

          > The point of NHST is to do science in situations where you have a pile of data and don't know enough to do the usual hypothesis and model building.

          > This doesn't explain what you think the NHST step is supposed to contribute.

          That sentence wasn't supposed to explain; this sentence was:

          > You mine your data, find possible correlations, and build models from there.

          > You describe/explore the data or clean it and throw it into some machine learning algo (depending on the goal) where you choose a model based on out-of-sample predictive skill.

          NHST is a machine learning algorithm. Perhaps one could swap it out for a much better algorithm, but a key problem with any such approach, for research purposes, is that it needs to generate a testable model that you understand in the end. For example, I can generate some rather opaque genetic algorithm for modeling phenomena, but it'd be work to figure out whether the model is capturing something real or exploiting a loophole that I haven't found yet. NHST spits out correlations that you can test right away.
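
          A minimal sketch of that "test right away" property, using a split-sample design of my own devising (synthetic data, illustrative thresholds):

            import numpy as np
            from scipy import stats

            rng = np.random.default_rng(2)
            X = rng.normal(size=(400, 30))
            y = 1.5 * X[:, 3] + rng.normal(size=400)

            # Mine correlations on the exploratory half of the data...
            explore, confirm = slice(0, 200), slice(200, 400)
            candidates = [j for j in range(X.shape[1])
                          if stats.pearsonr(X[explore, j], y[explore])[1] < 0.05]

            # ...each survivor is a human-readable, immediately testable claim
            # ("predictor j correlates with y"), checked here on the held-out half.
            for j in candidates:
                r, p = stats.pearsonr(X[confirm, j], y[confirm])
                verdict = "replicates" if p < 0.05 else "fails to replicate"
                print(f"predictor {j}: r={r:.2f}, p={p:.2g} -> {verdict}")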

          • (Score: 0) by Anonymous Coward on Tuesday December 25 2018, @08:13PM (#778399)

            NHST spits out binary conclusions, and involves concluding something beyond "the null model doesn't fit". What it can quite correctly tell you is when the null model doesn't fit.

            Also, you can calculate p-values and use them without NHST... NHST != "Hypothesis testing" != "Significance Testing": https://arxiv.org/pdf/1603.07408.pdf [arxiv.org]
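
            As a minimal sketch of the difference (my own synthetic example): the p-value below is reported as a continuous summary alongside an effect size, with no accept/reject verdict attached.

              import numpy as np
              from scipy import stats

              rng = np.random.default_rng(3)
              treated = rng.normal(loc=0.3, size=80)
              control = rng.normal(loc=0.0, size=80)

              # Report graded evidence - a p-value plus an effect size -
              # rather than a binary "significant / not significant" decision.
              t, p = stats.ttest_ind(treated, control)
              pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
              cohens_d = (treated.mean() - control.mean()) / pooled_sd
              print(f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {cohens_d:.2f}")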

            An interesting thing is that I've seen Neyman, Pearson, and Gosset ("Student") lapse into NHST, but never Fisher. He was always very careful not to confuse the "research hypothesis" with the "statistical hypothesis".