Scott Gottlieb, President Trump's nominee to run the FDA, is a proponent of adaptive clinical trials, which allow a trial's design to be adjusted while it is still under way:
In 2006, Scott Gottlieb, then a deputy commissioner at the U.S. Food and Drug Administration (FDA), stood before an audience of clinicians and researchers to sing the praises of a new approach to drug trials. Instead of locking in a study's design from the start, researchers could build in options that would allow them to adjust along the way, based on the data they had collected. They could make the trial larger or smaller, for instance, add or remove arms, or change how incoming patients get assigned to them. Gottlieb predicted such adaptive trial designs, the topic of the conference he attended that distant summer in Washington, D.C., would "tell us more about safety and benefits of drugs, in potentially shorter time frames."
This week, as President Donald Trump's nominee to head FDA, Gottlieb sat before Republican lawmakers hungry for promises of "shorter time frames" for drug and device approvals, and again expressed his zeal—repeatedly—for adaptive trial designs. If confirmed to be FDA's head, as expected, Gottlieb suggested he'd promote wider use of the approach.
But for all their promise, many adaptive trial features still aren't commonplace. And Gottlieb will face a number of obstacles to encouraging their wider use, experts tell ScienceInsider.
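To make "adaptive" concrete, here is a minimal, purely illustrative Python sketch of one feature the article mentions: dropping the weaker of two treatment arms at an interim analysis. The arm names, response rates, and sample sizes are invented for the example; a real design would pre-specify the dropping rule and adjust the final analysis for it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true response rates for a control arm and two treatment arms.
true_rates = {"control": 0.30, "treat_A": 0.30, "treat_B": 0.45}
interim_n, final_n = 50, 150   # patients per arm at the interim look / in total

# Stage 1: enroll interim_n patients on every arm.
responses = {arm: int(rng.binomial(1, p, interim_n).sum())
             for arm, p in true_rates.items()}

# Interim adaptation: drop whichever treatment arm has fewer responders so far.
dropped = min(("treat_A", "treat_B"), key=lambda arm: responses[arm])
kept = [arm for arm in true_rates if arm != dropped]
print(f"interim responses: {responses}; dropping {dropped}")

# Stage 2: the remaining enrollment goes only to the surviving arms.
for arm in kept:
    responses[arm] += int(rng.binomial(1, true_rates[arm], final_n - interim_n).sum())
    print(f"{arm}: {responses[arm]}/{final_n} responders at the final analysis")
```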
(Score: 2) by AthanasiusKircher on Monday April 10 2017, @05:53PM (1 child)
Does this really qualify as "p-hacking"? P-hacking generally implies taking existing data and trying out oodles of possible statistical correlations to find one (or more) that appear to have a "significant" result. (Or other equivalent data-analysis methods that obscure the experimental design and data collection in such a way as to overstate significance.)
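As a toy illustration of that "oodles of correlations" failure mode (nothing here is from the article; the group labels and outcome counts are made up): with enough unrelated outcome measures and no real effect, something usually crosses p < 0.05 purely by chance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_patients, n_outcomes = 100, 40      # made-up sizes for the illustration

group = rng.integers(0, 2, n_patients)                 # two groups, no real difference
outcomes = rng.normal(size=(n_patients, n_outcomes))   # 40 unrelated outcome measures

# Test every outcome and keep the best-looking result, p-hacking style.
p_values = [
    stats.ttest_ind(outcomes[group == 0, j], outcomes[group == 1, j]).pvalue
    for j in range(n_outcomes)
]
print(f"smallest of {n_outcomes} p-values: {min(p_values):.3f}")
print(f"outcomes 'significant' at p < 0.05: {sum(p < 0.05 for p in p_values)}")
```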
But this isn't a post-hoc statistical analysis procedure. This is actively manipulating the study and data collection while it is in progress. I think this actually falls on a different level of study manipulation than p-hacking (which generally refers to the analysis phase).
There may be times when an adaptive procedure is justified, but each adaptation generally decreases statistical power and makes it less likely that the study's results will mean anything. A rigorous statistical analysis should actually determine that most adaptive studies have LESS significant results, which would be the opposite of p-hacking. (Note that I'm relying on the idea that researchers would honestly report their methods and results.)
The main way to combat p-hacking is to specify analysis methods in advance, rather than allowing ad hoc data manipulation after the fact. The effects of potential adaptations can also be quantified in advance (depending on exactly what sort of adaptation is allowed), along with their impact on statistical power. But a better design might generally be to use an adaptive trial to identify potentially good treatment procedures and then design a more rigorous second study to verify them, with a locked-down a priori procedure and data analysis method.
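As a rough sketch of how that advance quantification could work (the look schedule, effect size, and thresholds below are hypothetical): compare by simulation a single fixed analysis against a pre-specified three-look design whose per-look threshold is tightened, roughly Pocock-style, so the overall false-positive rate stays near 5%. The multi-look design typically shows a modest power penalty, consistent with the point above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sims, effect = 4000, 0.25              # hypothetical standardized effect size
looks, final_n = (50, 100, 150), 150     # pre-specified interim and final looks
per_look_p = 0.022                       # stricter per-look threshold, chosen
                                         # (roughly Pocock-style) to hold overall alpha ~5%

fixed_hits = sequential_hits = 0
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, final_n)        # control arm
    b = rng.normal(effect, 1.0, final_n)     # treatment arm with a true effect
    # Fixed design: one analysis at the end, conventional p < 0.05.
    if stats.ttest_ind(a, b).pvalue < 0.05:
        fixed_hits += 1
    # Pre-specified adaptive design: three looks, corrected per-look threshold.
    if any(stats.ttest_ind(a[:n], b[:n]).pvalue < per_look_p for n in looks):
        sequential_hits += 1

print(f"power, fixed single-look design:       {fixed_hits / n_sims:.2f}")
print(f"power, 3-look design, corrected alpha: {sequential_hits / n_sims:.2f}")
```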
But I'm assuming your point was that a bad use of adaptive design is to fudge the experimental method as you go but analyze the data as if nothing weird happened (which might inflate significance). Yeah, that's the way I'd be afraid of drug companies "cheating" too, though I don't know if I'd call that p-hacking in the normal sense. To me, that would qualify as more active, direct study manipulation, more easily rising to the level of deliberate professional misconduct. Done correctly, the parameters of an adaptive study would need to be built into the proposed study design in advance, along with their potential impact on how the results are handled.
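Here's a quick sketch of why "adapt as you go, analyze as if you hadn't" inflates significance (the peeking schedule and sample sizes are arbitrary): simulate trials with no true effect, peek repeatedly, and stop as soon as any peek looks significant.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_trials, max_n, peek_every = 2000, 200, 20   # arbitrary simulation settings

false_positives = 0
for _ in range(n_trials):
    # Both "arms" are pure noise: there is no real treatment effect.
    a = rng.normal(size=max_n)
    b = rng.normal(size=max_n)
    # Peek after every batch of patients; stop and declare success the first
    # time the unadjusted p-value dips below 0.05.
    for n in range(peek_every, max_n + 1, peek_every):
        if stats.ttest_ind(a[:n], b[:n]).pvalue < 0.05:
            false_positives += 1
            break

print(f"false-positive rate with unplanned peeking: {false_positives / n_trials:.1%}")
# The nominal 5% error rate typically inflates to somewhere around 15-25% here,
# which is exactly the kind of overstated significance described above.
```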