
SoylentNews is people

posted by martyb on Saturday May 02 2020, @09:12PM   Printer-friendly
from the festina-lente dept.

With a major pandemic sweeping the world, the standard process of clinical trials for drug approval has come under criticism as a needless source of bureaucracy and delay. Drug discovery chemist Derek Lowe explains in a blog post how clinical trials for drug approval work, and the reasons behind the various requirements that the FDA and equivalent organisations around the world generally put in place before approving a new drug. He explains how most of these apparently pointless bureaucratic hurdles are actually there to help protect the integrity of the scientific process and to ensure that the human subjects undergoing the trials are treated ethically. While a case can be made for relaxing some of these safeguards, especially in this time of pandemic, it is probably not a good idea to do so without at least understanding what these safeguards are for.

Determining how much of a pharmaceutical is needed to prepare for the trial. Ensuring you are actually preparing just that drug and not a polymorph. Proper laboratory and manufacturing practices to ensure the desired drug is actually prepared without impurities or contaminants. Preparing a plan for a drug trial. Demographics: age, gender, weight, current medications being taken. Getting a representative distribution of these as participants. And there's much more.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2, Interesting) by Anonymous Coward on Sunday May 03 2020, @05:29AM (2 children)

    by Anonymous Coward on Sunday May 03 2020, @05:29AM (#989701)

    I ran into one of those. I did the blind statistical analysis on the equivalent of a preregistered study. Crunch the numbers according to the scope. I got back an email requesting a particular "Tee Test." Now, a t-test of any kind would have been totally inappropriate, especially since it wasn't in the registered protocol and no justification was given. I refuse, while copying the appropriate people to cover my ass, and rescind permission to use my previous work.

    Months later, I get a call. "Hello, AC. I'm $VERY_IMPORTANT_PERSON. So I'm looking at some stuff here and it looks ... Well ... I'll just put it this way, did you do the analysis for $STUDY?" I told them I had emailed them about it months ago. Situation defused, they catch me up. Turns out the person redid the analysis and left out every single comparison measure except for the t-test they had asked me about, which was the only comparison that showed significant results if left uncorrected. They then lied to cover their ass and tried to throw me and the other collaborators under the bus. It ended up costing them everything once their reputation was shot.

    I just laughed so hard when I got a copy of their work. It looked like they followed an online walkthrough of how to do the basics in R and pasted whatever it spat out into the document. Nobody cares about your descriptives for unused variables, Q-Q plots for every variable, or the mean of ordinal, non-interval variables. Trying to baffle them with bullshit so they wouldn't notice the meat was missing, I guess. It probably just ended up calling more attention to the fact that it wasn't there.
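    The hazard the AC describes (many comparisons run, only the uncorrected "winner" reported) is easy to demonstrate. The sketch below is not from the thread; it is a minimal standard-library Python illustration, and the `welch_t_p` helper with its normal approximation to the p-value is my own simplification. It runs forty two-sample comparisons on pure noise and counts how many look "significant" with and without a Bonferroni correction.

    ```python
    import math
    import random

    random.seed(42)

    def welch_t_p(a, b):
        """Welch two-sample t statistic with a standard-normal
        approximation for the two-sided p-value (reasonable at n=100)."""
        na, nb = len(a), len(b)
        ma, mb = sum(a) / na, sum(b) / nb
        va = sum((x - ma) ** 2 for x in a) / (na - 1)
        vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
        t = (ma - mb) / math.sqrt(va / na + vb / nb)
        return math.erfc(abs(t) / math.sqrt(2))

    # 40 outcome measures, all pure noise: there is no real effect anywhere.
    n_outcomes, alpha = 40, 0.05
    pvals = []
    for _ in range(n_outcomes):
        a = [random.gauss(0, 1) for _ in range(100)]
        b = [random.gauss(0, 1) for _ in range(100)]
        pvals.append(welch_t_p(a, b))

    naive = sum(p < alpha for p in pvals)
    corrected = sum(p < alpha / n_outcomes for p in pvals)  # Bonferroni
    print(f"uncorrected 'significant' comparisons: {naive} of {n_outcomes}")
    print(f"Bonferroni-corrected:                  {corrected} of {n_outcomes}")
    ```

    With enough uncorrected comparisons, a few spurious "hits" on pure noise are expected at the 5% level; a preregistered protocol with a stated correction is exactly what prevents reporting only those.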

    Starting Score:    0  points
    Moderation   +2  
       Interesting=2, Total=2
    Extra 'Interesting' Modifier   0  

    Total Score:   2  
  • (Score: 5, Informative) by rleigh on Sunday May 03 2020, @09:18AM (1 child)

    by rleigh (4887) on Sunday May 03 2020, @09:18AM (#989732) Homepage

    I've seen this in academic research as well. I've seen people trying different statistical tests until they find one which is significant. Stuff like using U tests where a t-test would be appropriate. "But you can't do that, your data is parametric and it's clearly not significant. Sorry, but your data is simply not showing what you want it to show." "Well, it's publishable if it's significant, and the U test shows significance." Shocking. And this was medical research as well, so it could potentially have influenced clinical studies. It's little wonder that so much published life sciences work is later retracted. It's marginal at best. At worst, it's fraudulent. But the pressure to publish is so great that too many people try to bend the rules. It's one of the reasons why I didn't succeed as an academic: I outright refused to publish incomplete or misrepresentative data, which didn't make me popular.

    A friend of mine went to the US to work for the NIH on a postdoc. He couldn't reproduce the previous postdoc's research, but was supposed to continue that line of investigation. After spending months trying to reproduce the results, he found that the previous postdoc had fabricated everything. He raised this, but the previous postdoc had gone on to a high-profile position and had started actual clinical trials based upon this fraud. My friend ended up leaving academia; doing the right thing is a career-ending move. No results to show for that postdoc? Out. It doesn't matter that you're actually a good researcher. That person not only killed the career of at least one promising scientist, they actually started human clinical trials based upon a complete fabrication.

    I've also worked for a pharma company on various drug screening assays. I would have to say, they were far more diligent about good experimental design and statistics than academics. They employed full-time staff for data analysis and statistics, as well as scientists. The accuracy of the results determined which drugs would proceed further in the pipeline, so they genuinely cared about not picking a dud which would fail down the line. They had the opposite problem: despite making better and more detailed experimental models for assays (high-content screening), drug performance in cell-based assays was not sufficiently predictive of behaviour in whole animals or human trials. A drug could kill cancer cells like no tomorrow on an assay plate, yet be ineffective (or toxic) in reality.

    So based upon these experiences and observations, I'm happy that there is a high bar to meet. There's too much untrustworthy nonsense out there. The trial has to independently prove the safety and efficacy of the treatment. I'm sure that some of the red tape could be cut; not all of it is strictly necessary, and some of it exists just to raise the bar for competition. But that would need to be done very carefully, so as not to remove the barriers that are strictly necessary.

    • (Score: 2) by JoeMerchant on Monday May 04 2020, @12:46AM

      by JoeMerchant (3937) on Monday May 04 2020, @12:46AM (#989995)

      they were far more diligent about good experimental design and statistics than academics

      They know the value of C'ing their A's... a lot to lose when you're a trillion-dollar enterprise.
