posted by Fnord666 on Sunday December 23 2018, @03:55PM   Printer-friendly
from the Science-Interpretation-Guide dept.

https://www.bmj.com/content/363/bmj.k5094

https://www.npr.org/sections/health-shots/2018/12/22/679083038/researchers-show-parachutes-dont-work-but-there-s-a-catch

A study has been done, and the surprising result is that parachutes are no more effective than a backpack in preventing injuries when jumping out of an airplane.

It's "common sense" that parachutes work, so it has been a neglected field of science. This surprising and counter-intuitive result is an excellent example of the importance of doing science.

... or maybe it's a perfect example of how top-line study headlines can be misrepresentative, especially when portrayed by the mass media, and of how understanding a study's scope and methodology is important.


Original Submission

 
  • (Score: 0, Disagree) by Anonymous Coward on Sunday December 23 2018, @06:49PM (33 children)

    by Anonymous Coward on Sunday December 23 2018, @06:49PM (#777876)

    It's not insightful; they missed their chance to show the real issue. The only reason there was no 'significant difference' is the small sample size. Eventually there would be more broken legs, heart attacks, etc. in one group vs the other, because the extra weight WILL have some effect.

  • (Score: 0) by Anonymous Coward on Sunday December 23 2018, @08:38PM (30 children)

    by Anonymous Coward on Sunday December 23 2018, @08:38PM (#777888)

    Why don't you write an insightful response that shows "the real issue", and stop complaining about it here until you do?
    Please submit the response URL as an update to this story, or as its own submission if you get further replies.

    Here is a link to the BMJ "Rapid response" form so you can reply to the parachute article:
    https://www.bmj.com/content/363/bmj.k5094/submit-a-rapid-response [bmj.com]

    • (Score: 0) by Anonymous Coward on Sunday December 23 2018, @09:05PM (29 children)

      by Anonymous Coward on Sunday December 23 2018, @09:05PM (#777894)

      Because it's a waste of time; I just explained it to you in one sentence. Others have been saying the same thing since at least the 1960s; thousands or tens of thousands of papers have explained it over and over. Start here, for example, from a president of the APA: http://meehl.umn.edu/sites/g/files/pua1696/f/074theorytestingparadox.pdf [umn.edu]

      Everyone who matters, who is capable of understanding, and who wants to understand already does. Too bad the knowledge seems to be mostly used by people who stay involved to run more efficient scams (cheaper p-hacking, etc.) rather than kill the golden goose of endless publications.

      • (Score: 0) by Anonymous Coward on Sunday December 23 2018, @09:20PM (21 children)

        by Anonymous Coward on Sunday December 23 2018, @09:20PM (#777899)

        So it is not funny because it is such an important and serious problem; however, you are unwilling to waste your precious time on it. That precious time is, instead, used to complain over and over again here any time a science story hits the front page.

        • (Score: -1, Redundant) by Anonymous Coward on Sunday December 23 2018, @09:27PM (20 children)

          by Anonymous Coward on Sunday December 23 2018, @09:27PM (#777904)

          People outside the field need to be made aware, because no fix is coming from within. The only thing that will work is to stop giving money to people to do this. Since it takes very little effort to post here, why not? I've had maybe a dozen people thank me over the years for this info.

          • (Score: 0) by Anonymous Coward on Sunday December 23 2018, @09:56PM (19 children)

            by Anonymous Coward on Sunday December 23 2018, @09:56PM (#777920)

            People outside the field need to be made aware, because no fix is coming from within

            Do you honestly believe that spreading awareness on SoylentNews is going to drive change in academic biomedical research?

            If you are only willing to spend "very little effort" on such a serious and important problem, then maybe it is not such a serious or important problem.

            If you're afraid of making a comment that might be scrutinized by actual experts reading the BMJ, then how about PubPeer:
            https://pubpeer.com/publications/02413F8BB61C17C17C20C88D60632A [pubpeer.com]

            • (Score: 0) by Anonymous Coward on Sunday December 23 2018, @10:10PM (18 children)

              by Anonymous Coward on Sunday December 23 2018, @10:10PM (#777923)

              I already put forth the maximum effort interacting with bio-trained people. I did all the normal BS they usually do, plus actual science with quantitative models that made precise predictions, and I got other people's data to check the predictions, proving it could be done when they said biomedical research is "too complex" for that. Result: "so... was there a significant difference?"

              They don't care, dude. Worse, most don't want to hear it, because it is 10x harder than what they are getting away with. It is effectively impossible for real science to compete on the number-of-publications metric because of that. Like some version of Gresham's Law, all the good science is getting hoarded in classified programs and trade secrets.

              And no, I don't only post this on SoylentNews, nor do I think it is anything more than me treating people the way I wish to be treated.

              • (Score: 0) by Anonymous Coward on Sunday December 23 2018, @10:52PM (17 children)

                by Anonymous Coward on Sunday December 23 2018, @10:52PM (#777939)

                think it is anything more than me treating people the way I wish to be treated

                From an outside perspective, your posts just appear dismissive. The kind of dismissive attitude that people use to make themselves feel superior to those they put down, or to appear higher-status because of their cynicism.

                If you aren't the same person as the "CRISPR isn't real because it just kills cells" or the "all science is false because of NHST" poster, then I apologize for the mix-up.

                • (Score: 0) by Anonymous Coward on Sunday December 23 2018, @11:07PM (10 children)

                  by Anonymous Coward on Sunday December 23 2018, @11:07PM (#777942)

                  Amazing how the NHST supporters can only ever seem to argue against strawmen. CRISPR is real and does selectively kill cells, and NHST has nothing to do with science.

                  As I said, some people seem to be incapable of getting it. I usually assume it's due to some bad training, but maybe there is something deeper going on there with an entire mindset. Those people aren't my audience.

                  • (Score: 0) by Anonymous Coward on Sunday December 23 2018, @11:48PM (9 children)

                    by Anonymous Coward on Sunday December 23 2018, @11:48PM (#777951)

                    It seems you are the person I thought you were.

                    I don't know what is so difficult for you to understand: NHST is treated as a floor. If the data can't even pass an NHST, then it is beyond unreliable. Passing an NHST is not evidence against a hypothesis, as you seem to treat it; it is weak evidence that the data isn't complete garbage.

                    • (Score: 0) by Anonymous Coward on Monday December 24 2018, @12:06AM (5 children)

                      by Anonymous Coward on Monday December 24 2018, @12:06AM (#777956)

                      Passing NHST is not primarily used the way you say, and anyway I "don't understand" that because it would be wrong.

                      Unless you actually believe the null model may be true, it's completely controlled by sample size (you will always get significance if you spend enough). It doesn't matter whether the data is garbage or not.

                      I mean, look at this study where they "didn't pass NHST": do you see people concluding the data is beyond unreliable? No, we don't, which proves your claim is just wrong.

                      Trust me. There is no legitimate use for NHST (when testing a strawman). I have searched everywhere; the only thing people use it for is to subsequently commit one or more logical fallacies, like yours here. And yours is hilariously the opposite of the one committed in the parachute paper. Contradictions like this can only happen because NHST is based on a logical fallacy.
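
                      To illustrate the sample-size point in R (a toy simulation; the 0.05 SD "effect" is an arbitrary number picked for illustration, not from any study):

                      # With any non-zero true difference, however tiny, "significance"
                      # is guaranteed once the sample is large enough.
                      p_at_n <- function(n, effect = 0.05){
                        a = rnorm(n, 0)
                        b = rnorm(n, effect)
                        return(t.test(a, b)$p.value)
                      }

                      set.seed(1234)
                      # Median p-value over 100 simulated experiments per sample size
                      sapply(c(100, 1000, 10000, 100000), function(n) median(replicate(100, p_at_n(n))))
                      # p marches toward 0 as n grows, whether or not the effect matters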

                      • (Score: 0) by Anonymous Coward on Monday December 24 2018, @12:58AM (4 children)

                        by Anonymous Coward on Monday December 24 2018, @12:58AM (#777968)

                        It is pretty funny that you are using an obviously satirical paper as evidence, but I will concede that I did not state things as clearly as I meant to for positive claims. Here's an example:

                        Paper A:
                        Main claim - Drug X can inhibit breast cancer progression in mice.
                        Evidence - Two groups (Mock-treated and Drug X-treated) with 20 mice each. Average cancer stage of Drug X-treated mice was lower (by one stage) than Mock-treated, p=0.3.

                        Paper B:
                        Main claim - Drug Y can inhibit breast cancer progression in mice.
                        Evidence - Two groups (Mock-treated and Drug Y-treated) with 20 mice each. Average cancer stage of Drug Y-treated mice was lower (by one stage) than Mock-treated, p<0.001.

                        Journals would typically reject Paper A because its survival study didn't even meet the commonly used threshold of whichever statistical test was used. In other words, the evidence isn't even internally consistent, and/or the methods of data collection were not precise enough to discriminate between the groups. Now, you might say, "if they used n=1000 per group, then they would see an effect", but that would also likely be dismissed as not "biologically significant": if your power test says you need n=1000 genetically identical mice with genetically identical parental tumor lines, then your effect size is so small that it is not biologically meaningful, and is even less likely to translate into humans.
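
                        To put rough numbers on that power-test point, in R (the effect sizes, in units of SD, are made up for illustration):

                        # Mice per group for 80% power at the usual alpha = 0.05,
                        # as the assumed effect size shrinks (delta in units of SD):
                        power.t.test(delta = 1, sd = 1, power = 0.8)$n      # ~17 per group
                        power.t.test(delta = 0.2, sd = 1, power = 0.8)$n    # ~394 per group
                        power.t.test(delta = 0.125, sd = 1, power = 0.8)$n  # ~1000 per group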

                        If you completely discount NHST, then you would have to say that Drug X is as likely as Drug Y to inhibit breast cancer progression, given what is provided and assuming everything is equal except the distribution of the data. Which of the claims is more likely to be true?

                        • (Score: 1, Informative) by Anonymous Coward on Monday December 24 2018, @01:30AM (3 children)

                          by Anonymous Coward on Monday December 24 2018, @01:30AM (#777976)

                          Paper A:
                          Main claim - Drug X can inhibit breast cancer progression in mice.
                          Evidence - Two groups (Mock-treated and Drug X-treated) with 20 mice each. Average cancer stage of Drug X-treated mice was lower (by one stage) than Mock-treated, p=0.3.

                          Paper B:
                          Main claim - Drug Y can inhibit breast cancer progression in mice.
                          Evidence - Two groups (Mock-treated and Drug Y-treated) with 20 mice each. Average cancer stage of Drug Y-treated mice was lower (by one stage) than Mock-treated, p<0.001.

                          I would never design a series of studies like this to begin with. The plan is optimized for producing the maximum number of papers rather than learning about cancer and curing it as quickly as possible.

                          I would study many untreated mice under various conditions and then come up with at least one quantitative model that could fit the observations. E.g., we could fit the incidence of various stages by age, something we also have for humans. I think SEER has data by stage, but they definitely have overall incidence for many cancer types.

                          Just guessing, but the curve would be affected by the mutation rate of cells in that tissue, the rate of clearance by immune surveillance, the apoptosis rate, the number of mutations required for detectable cancer, the division rate in that tissue, the number of cells in the tissue, calories consumed, etc. Some of these parameters may be degenerate or even caused by one another, so it would all have to be worked out.

                          NB: that also requires measuring all those parameters carefully in the normal mice; I bet there isn't even good data on how many cells there typically are in each tissue at a given age... because everyone has been wasting time with NHST.

                          Only then would I think about how Drug X and Drug Y, etc. are supposed to work, and which parameter(s) of the model they should affect and how. Once I have made my predictions, I would run the study while giving the drugs and see if the parameters of the best model fit changed in line with the predictions. If they do, then I would think I had a handle on how the prospective treatment was working. Of course, any major side effects should be predicted and accounted for by the model as well.
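
                          As a sketch of what that fitting could look like (using the classic Armitage-Doll power-law form for incidence by age; the numbers below are simulated stand-ins, not SEER data), in R:

                          # Armitage-Doll multistage model: incidence ~ a * age^(k-1),
                          # where k is the number of rate-limiting steps (e.g., mutations).
                          age = seq(30, 80, by = 5)
                          set.seed(42)
                          inc = 1e-9 * age^5 * exp(rnorm(length(age), 0, 0.1))  # simulated incidence

                          # Fit on the log scale: log(inc) = log(a) + (k - 1) * log(age)
                          fit = lm(log(inc) ~ log(age))
                          k = unname(coef(fit)[2]) + 1
                          k  # ~6 rate-limiting steps in this simulated example

                          A drug would then be judged by which fitted parameter it moves, and by how much, rather than by a yes/no significance call.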

                          If you completely discount NHST, then you would have to say that Drug X is as likely as Drug Y to inhibit breast cancer progression, given what is provided and assuming everything is equal except the distribution of the data. Which of the claims is more likely to be true?

                          I'd say, who knows why the variance was higher in the first case? It could be due to the drug, or they just messed up the experiment somehow, or just "random" stuff.

                          Also, not sure if you realize this, but if there really was no effect of the drug on cancer stage, both p-values would be equally likely. In R:

                          sim <- function(){
                            # Two samples of 20 drawn from the same population (no true effect)
                            a = rnorm(20, 0)
                            b = rnorm(20, 0)
                            return(t.test(a, b, var.equal = FALSE)$p.value)  # Welch t-test p-value
                          }

                          set.seed(1234)
                          res = replicate(1e4, sim())

                          # Under the null, the p-value distribution is flat (uniform)
                          hist(res, breaks = 20, col = "grey")

                          https://i.ibb.co/THQ4QWW/null-True-Pdist.png [i.ibb.co]

                          • (Score: 1, Informative) by Anonymous Coward on Monday December 24 2018, @01:35AM

                            by Anonymous Coward on Monday December 24 2018, @01:35AM (#777977)

                            To clarify my own post:

                            if there really was no effect of the drug on cancer stage, both p-values would be equally likely

                            This isn't really true, since the sampling distribution could be different for some other reason (as suggested by the previous sentence). It is more correct to say "no difference between the populations each sample came from". Also, further reading:
                            https://stats.stackexchange.com/questions/10613/why-are-p-values-uniformly-distributed-under-the-null-hypothesis [stackexchange.com]

                          • (Score: 0) by Anonymous Coward on Monday December 24 2018, @01:44AM (1 child)

                            by Anonymous Coward on Monday December 24 2018, @01:44AM (#777983)

                            It could be due to the drug, or they just messed up the experiment somehow, or just "random" stuff.

                            So you think that Paper A was more likely to have messed something up or to not have properly accounted for an experimental variable, but you still think its claim is just as likely as Paper B's?

                            • (Score: 0) by Anonymous Coward on Monday December 24 2018, @01:53AM

                              by Anonymous Coward on Monday December 24 2018, @01:53AM (#777987)

                              No, it could be B that messed up, or neither (could just be irreducible variation in the mouse system), or both. But sure, something was different between the two studies.

                    • (Score: 0) by Anonymous Coward on Monday December 24 2018, @12:59AM (2 children)

                      by Anonymous Coward on Monday December 24 2018, @12:59AM (#777969)

                      This is exactly how NHST is commonly used:

                      Our groundbreaking study found no statistically significant difference in the primary outcome between the treatment and control arms. Our findings should give momentary pause to experts who advocate for routine use of parachutes for jumps from aircraft in recreational or military settings.
                      [...]
                      Should our results be reproduced in future studies, the end of routine parachute use during jumps from aircraft could save the global economy billions of dollars spent annually to prevent injuries related to gravitational challenge.

                      https://www.bmj.com/content/363/bmj.k5094 [bmj.com]

                      So if they keep getting "no significance", then people should stop using parachutes when jumping from aircraft.

                      • (Score: 0) by Anonymous Coward on Monday December 24 2018, @01:10AM (1 child)

                        by Anonymous Coward on Monday December 24 2018, @01:10AM (#777970)

                        *When that aircraft is resting on the ground and motionless.

                        • (Score: 0) by Anonymous Coward on Monday December 24 2018, @01:40AM

                          by Anonymous Coward on Monday December 24 2018, @01:40AM (#777981)

                          Billions of dollars are spent annually on people jumping from landed motionless airplanes with parachutes?

                          I don't think they meant to avoid extrapolating to flying airplanes, but even if they had meant to limit the conclusion to stationary ones it would still be fallacious reasoning.

                • (Score: 1) by khallow on Tuesday December 25 2018, @06:18AM (5 children)

                  by khallow (3766) Subscriber Badge on Tuesday December 25 2018, @06:18AM (#778274) Journal

                  From an outside perspective, your posts just appear dismissive. The kind of dismissive attitude that people use to make themselves feel superior to those they put down, or to appear higher-status because of their cynicism.

                  You have a point to that? Let's give you an inside perspective [economist.com]. For example:

                  Too many of the findings that fill the academic ether are the result of shoddy experiments or poor analysis (see article). A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic. Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three-quarters of papers in his subfield are bunk. In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties.

                  Sorry, your "outside perspective" is ignorant. There are deep, decades-old problems in most fields of science. It's not going to get better because someone whines that the criticism is presented in a imaginary, dismissive manner, or as the earlier AC whines, because few who are part of the problem will listen to the criticism. All we can do at this point is spread awareness.

                  As to Null Hypothesis Significance Testing, the key thing to remember is that it is a tool for finding initial hypotheses and developing models almost from scratch. If you're still using it, as a number of fields are, decades after you should have found those hypotheses and models, then you're doing something very wrong. If your research is considered normal despite being NHST in a decades-old field, then the field itself is doing something wrong.

                  • (Score: 0) by Anonymous Coward on Tuesday December 25 2018, @08:45AM (4 children)

                    by Anonymous Coward on Tuesday December 25 2018, @08:45AM (#778288)

                    You are coming around, but still think there is some validity to NHST. There isn't.

                    It is as scientific as praying to think of the right answer.

                    • (Score: 1) by khallow on Tuesday December 25 2018, @03:33PM (3 children)

                      by khallow (3766) Subscriber Badge on Tuesday December 25 2018, @03:33PM (#778331) Journal
                      My position [soylentnews.org] hasn't changed.

                      I believe you misinterpret the loyal opposition here. The point of NHST is to do science in situations where you have a pile of data and don't know enough to do the usual hypothesis and model building. No one is arguing that p-hacking and other failure modes of NHST don't happen, but the technique has a legitimate use.

                      What's relevant here is that NHST is supposed to be a temporary technique. You mine your data, find possible correlations, and build models from there. You shouldn't use NHST forever, both because of its flaws (the p-hacking trap and its natural inefficiency) and because you supposedly have models to test by then. The growing use of NHST over the past century indicates that there are a number of fields that simply aren't progressing on to model building, instead stalling at the NHST stage.

                      I will agree that if you're merely interested in the appearance of doing science rather than actually making progress, then NHST is a great technique for looking busy. So heavy, long-term use of NHST is a warning sign that we are doing things seriously wrong.

                      • (Score: 0) by Anonymous Coward on Tuesday December 25 2018, @06:22PM (2 children)

                        by Anonymous Coward on Tuesday December 25 2018, @06:22PM (#778381)

                        The point of NHST is to do science in situations where you have a pile of data and don't know enough to do the usual hypothesis and model building.

                        This doesn't explain what you think the NHST step is supposed to contribute. You don't do NHST here. You describe/explore the data, or clean it and throw it into some machine learning algo (depending on the goal), where you choose a model based on out-of-sample predictive skill.
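
                        For example, in R (a toy comparison on simulated data; both candidate models are illustrative):

                        # Choose between two candidate models by held-out prediction error,
                        # not by a significance test. The true curve here is quadratic.
                        set.seed(1)
                        x = runif(200)
                        y = 2 * x^2 + rnorm(200, 0, 0.2)
                        train = 1:100; test = 101:200

                        m1 = lm(y ~ x, subset = train)           # linear candidate
                        m2 = lm(y ~ poly(x, 2), subset = train)  # quadratic candidate

                        rmse <- function(m) sqrt(mean((y[test] - predict(m, data.frame(x = x[test])))^2))
                        c(linear = rmse(m1), quadratic = rmse(m2))  # keep the lower out-of-sample error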

                        • (Score: 1) by khallow on Tuesday December 25 2018, @06:43PM (1 child)

                          by khallow (3766) Subscriber Badge on Tuesday December 25 2018, @06:43PM (#778382) Journal

                          The point of NHST is to do science in situations where you have a pile of data and don't know enough to do the usual hypothesis and model building.

                          This doesn't explain what you think the NHST step is supposed to contribute.

                          That sentence wasn't supposed to explain, this sentence was:

                          You mine your data, find possible correlations, and build models from there.

                          You describe/explore the data, or clean it and throw it into some machine learning algo (depending on the goal), where you choose a model based on out-of-sample predictive skill.

                          NHST is a machine learning algorithm. Perhaps one could swap it out for a much better algorithm, but a key problem with any such approach, for research purposes, is that it needs to generate a testable model that you understand in the end. For example, I can generate some rather opaque genetic algorithm for modeling phenomena, but it'd be work to figure out whether the model is modeling something real or a loophole that I haven't found yet. NHST spits out correlations that you can test right away.
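
                          A toy version of that mining step in R (pure noise, so the "hits" are false discoveries; it illustrates both the use and the p-hacking trap):

                          # Screen 50 candidate predictors against an outcome. Any "significant"
                          # correlation is only a hypothesis, to be re-tested on fresh data.
                          set.seed(7)
                          X = matrix(rnorm(100 * 50), ncol = 50)  # 100 samples, 50 noise variables
                          y = rnorm(100)
                          p = apply(X, 2, function(col) cor.test(col, y)$p.value)
                          which(p < 0.05)  # expect ~2-3 spurious "hits" by chance alone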

                          • (Score: 0) by Anonymous Coward on Tuesday December 25 2018, @08:13PM

                            by Anonymous Coward on Tuesday December 25 2018, @08:13PM (#778399)

                            NHST spits out binary conclusions, and involves concluding something beyond "the null model doesn't fit". It can very correctly tell you when the null model doesn't fit.

                            Also, you can calculate p-values and use them without NHST... NHST != "Hypothesis testing" != "Significance Testing": https://arxiv.org/pdf/1603.07408.pdf [arxiv.org]

                            An interesting thing is that I've seen Neyman, Pearson, and Gosset ("Student") lapse into NHST, but never Fisher. He was always very careful not to get confused between the "research hypothesis" and the "statistical hypothesis".

      • (Score: 1, Interesting) by Anonymous Coward on Sunday December 23 2018, @09:27PM (6 children)

        by Anonymous Coward on Sunday December 23 2018, @09:27PM (#777903)

        As always, the problem is capitalism. Capitalist "science" (p-hacking [xkcd.com], small sample sizes, extrapolation errors [xkcd.com]; look at the recent artificial sweetener study, with a good sample size but questionably throwing all kinds of different zero-calorie sweeteners [was stevia even studied?] into one amorphous group, so now we gotta wonder if the study was funded by the sugar lobby; etc.) is not science. However, I think that the study in TFA is a lighthearted way of poking fun at the serious problems capitalism causes for honest science.

        • (Score: 0, Disagree) by Anonymous Coward on Sunday December 23 2018, @09:31PM (4 children)

          by Anonymous Coward on Sunday December 23 2018, @09:31PM (#777907)

          Lol, no, this is rampant amongst government-run science. In fact, its rise coincided with the rise of government-funded science after WWII. I think it's more just an issue of giving too many people PhDs, so the gatekeeping methods failed.

          • (Score: 0) by Anonymous Coward on Sunday December 23 2018, @10:56PM (3 children)

            by Anonymous Coward on Sunday December 23 2018, @10:56PM (#777941)

            Ah, the good ol' days when biology was done without any statistics and was mostly qualitative.
            Yes, the days before DNA was discovered to encode genetic information were clearly superior, and we have learned nothing of value since. /sarcasm

            • (Score: 0) by Anonymous Coward on Sunday December 23 2018, @11:15PM

              by Anonymous Coward on Sunday December 23 2018, @11:15PM (#777944)

              Sorry, but you are clearly unfamiliar with the literature. And being "quantitative" isn't about statistics, it's about coming up with a precise prediction. E.g., if the Krebs cycle is correct, then molecule X should be found in ratio Z to molecule Y.

              Statistics can be used to check how good the fit is, but that really isn't much better than just eyeballing the results and comparing them to competing predictions. The main use is to give an illusion of rigour when testing a strawman hypothesis no one actually believes, so I'd say we'd honestly be better off without stats altogether at this point.

            • (Score: 0) by Anonymous Coward on Monday December 24 2018, @12:29AM (1 child)

              by Anonymous Coward on Monday December 24 2018, @12:29AM (#777961)

              Or look at this: laws of neural growth from Ramón y Cajal in ~1900. Only today do we have the computational power to simulate the growth, and it works.

        • (Score: 1) by khallow on Tuesday December 25 2018, @06:23AM

          by khallow (3766) Subscriber Badge on Tuesday December 25 2018, @06:23AM (#778275) Journal

          As always, the problem is capitalism.

          I wonder how long it'll take for people to get a clue [soylentnews.org]?

          I love witnessing this echo chamber of people who can't figure out that the government doing something isn't capitalism. It's like nothing is wrong with what you are saying, except that you are applying the wrong label, since you were somehow taught the wrong definition.

          Capitalist "science" (p-hacking [xkcd.com], small sample size, extrapolation errors [xkcd.com], look at the recent artificial sweetener study with a good sample size but questionably throwing all kinds of different zero-calorie sweeteners [was stevia even studied?] into one amorphous group and now we gotta wonder if the study was funded by the sugar lobby, etc) is not science.

          And all that paid for with government dollars. It's not capitalism when the public pays for it, people.

  • (Score: 5, Funny) by sjames on Sunday December 23 2018, @09:56PM (1 child)

    by sjames (2882) on Sunday December 23 2018, @09:56PM (#777921) Journal

    What's that up in the sky? It's a bird! It's a plane!

    No, it's the point and you missed it!

    • (Score: -1, Redundant) by Anonymous Coward on Sunday December 23 2018, @10:20PM

      by Anonymous Coward on Sunday December 23 2018, @10:20PM (#777925)

      No, I get their point completely. It's just sad there is a need for this, and that they only noticed a few common errors among the many being made under standard practice. So the correct lesson is not learned.