https://www.bmj.com/content/363/bmj.k5094
A study has been done, and the surprising result is that parachutes are no more effective than a backpack in preventing injuries when jumping out of an airplane.
It's "common sense" that parachutes work, so it has been a neglected field of science. This surprising and counter-intuitive result is an excellent example of the importance of doing science.
... or maybe it's a perfect example of how top-line study headlines can be misrepresentative, especially when portrayed by the mass media, and how understanding study scope and methodology is important.
(Score: 1, Informative) by Anonymous Coward on Monday December 24 2018, @01:30AM (3 children)
I would never design a series of studies like this to begin with. The plan is optimized for producing the maximum number of papers rather than learning about cancer and curing it as quickly as possible.
I would study many untreated mice under various conditions and then come up with at least one quantitative model that could fit the observations. E.g., we could fit the incidence of various stages by age, something we also have for humans. I think SEER has data by stage, but they definitely have overall incidence of many cancer types.
Just guessing, but the curve would be affected by the mutation rate of cells in that tissue, the rate of clearance by immune surveillance, the apoptosis rate, the number of mutations required for detectable cancer, the division rate in that tissue, the number of cells in the tissue, calories consumed, etc. Some of these parameters may be degenerate or even caused by one another, so it would all have to be worked out.
NB: that also requires measuring all those parameters carefully in the normal mice, I bet there isn't even good data on how many cells there typically are in each tissue at a given age... because everyone has been wasting time with NHST.
Only then would I think about how Drug X and Drug Y, etc are supposed to work and which parameter(s) of the model they should affect and how. Once I have made my predictions, I would run the study while giving the drugs and see if the parameters of the best model fit changed in line with the predictions. If they do, then I would think I had a handle on how the prospective treatment was working. Of course, any major side effects should be predicted and accounted for by the model as well.
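To make the kind of quantitative model I mean concrete: one classic candidate (my pick for illustration, not anything from these studies) is the Armitage-Doll multistage model, where incidence rises roughly as a power of age, and the fitted exponent suggests the number of rate-limiting steps. A minimal Python sketch, fit to synthetic made-up data rather than real mouse or SEER numbers:

```python
import numpy as np

# Armitage-Doll multistage model: incidence(t) ~ a * t**(k-1).
# The "data" below are synthetic, generated with k = 6 plus
# multiplicative noise -- purely illustrative, not real incidence.
rng = np.random.default_rng(0)
ages = np.arange(40, 80)  # ages in years
true_k = 6.0
incidence = 1e-9 * ages ** (true_k - 1) * rng.lognormal(0, 0.1, ages.size)

# On log-log axes the model is a straight line:
# log(incidence) = log(a) + (k-1) * log(age), so fit by least squares.
slope, intercept = np.polyfit(np.log(ages), np.log(incidence), 1)
k_hat = slope + 1
print(f"estimated number of rate-limiting steps k ~= {k_hat:.1f}")
```

The point is that a drug's effect would then be phrased as a predicted change in a parameter like k or a, which is checkable, rather than as a bare p-value.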
I'd say, who knows why the variance was higher in the first case? It could be due to the drug, or they just messed up the experiment somehow, or just "random" stuff.
Also, not sure if you realize this, but if there really was no effect of the drug on cancer stage, both p-values would be equally likely (under the null hypothesis, p-values are uniformly distributed). In R:
https://i.ibb.co/THQ4QWW/null-True-Pdist.png [i.ibb.co]
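Here's a stdlib-only Python sketch of the same point the linked R plot makes: when both samples come from the same population (no drug effect), the test's p-values spread out uniformly over [0, 1]. The two-sample z-test with known unit variance is my simplification for illustration:

```python
import math
import random

random.seed(1)

def null_pvalue(n=30):
    # "Control" and "treated" mice drawn from the SAME population,
    # i.e. the null hypothesis is true by construction.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Two-sample z-test with known unit variance: z ~ N(0, 1) under the null.
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    # Two-sided p-value via the normal CDF, Phi(x) = 0.5*(1 + erf(x/sqrt(2))).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

pvals = [null_pvalue() for _ in range(5000)]

# Under the null, each decile of [0, 1] should hold about 10% of p-values.
for i in range(10):
    lo = i / 10
    frac = sum(lo <= p < lo + 0.1 for p in pvals) / len(pvals)
    print(f"{lo:.1f}-{lo + 0.1:.1f}: {frac:.3f}")
```

So p = 0.04 is a priori no more "likely" than p = 0.94 when the drug does nothing; only a real difference between the populations skews the distribution toward small p-values.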
(Score: 1, Informative) by Anonymous Coward on Monday December 24 2018, @01:35AM
To clarify my own post:
This isn't really true, since the sampling distribution could be different for some other reason (as suggested by the previous sentence). It is more correct to say "no difference in the populations each sample came from". Also, further reading:
https://stats.stackexchange.com/questions/10613/why-are-p-values-uniformly-distributed-under-the-null-hypothesis [stackexchange.com]
(Score: 0) by Anonymous Coward on Monday December 24 2018, @01:44AM (1 child)
So you think that Paper A was more likely to have messed something up or failed to properly account for an experimental variable, yet you still think it is just as likely to be right as Paper B?
(Score: 0) by Anonymous Coward on Monday December 24 2018, @01:53AM
No, it could be B that messed up, or neither (could just be irreducible variation in the mouse system), or both. But sure, something was different between the two studies.