Arthur T Knackerbracket has found the following story:
Patients would no longer have to wake up in the middle of the night to take their pills, Purdue told doctors. One OxyContin tablet in the morning and one before bed would provide "smooth and sustained pain control all day and all night."
When Purdue unveiled OxyContin in 1996, it touted 12-hour duration.
On the strength of that promise, OxyContin became America's bestselling painkiller, and Purdue reaped $31 billion in revenue.
But OxyContin's stunning success masked a fundamental problem: The drug wears off hours early in many people, a Los Angeles Times investigation found. OxyContin is a chemical cousin of heroin, and when it doesn't last, patients can experience excruciating symptoms of withdrawal, including an intense craving for the drug.
The problem offers new insight into why so many people have become addicted to OxyContin, one of the most abused pharmaceuticals in U.S. history.
Over the last 20 years, more than 7 million Americans have abused OxyContin, according to the federal government's National Survey on Drug Use and Health. The drug is widely blamed for setting off the nation's prescription opioid epidemic, which has claimed more than 190,000 lives from overdoses involving OxyContin and other painkillers since 1999.
The internal Purdue documents reviewed by The Times come from court cases and government investigations and include many records sealed by the courts. They span three decades, from the conception of OxyContin in the mid-1980s to 2011, and include emails, memos, meeting minutes and sales reports, as well as sworn testimony by executives, sales reps and other employees.
The documents provide a detailed picture of the development and marketing of OxyContin, how Purdue executives responded to complaints that its effects wear off early, and their fears about the financial impact of any departure from 12-hour dosing.
Reporters also examined Food and Drug Administration records, Patent Office files and medical journal articles, and interviewed experts in pain treatment, addiction medicine and pharmacology.
Experts said that when there are gaps in the effect of a narcotic like OxyContin, patients can suffer body aches, nausea, anxiety and other symptoms of withdrawal. When the agony is relieved by the next dose, it creates a cycle of pain and euphoria that fosters addiction, they said.
OxyContin taken at 12-hour intervals could be "the perfect recipe for addiction," said Theodore J. Cicero, a neuropharmacologist at the Washington University School of Medicine in St. Louis and a leading researcher on how opioids affect the brain.
Patients in whom the drug doesn't last 12 hours can suffer both a return of their underlying pain and "the beginning stages of acute withdrawal," Cicero said. "That becomes a very powerful motivator for people to take more drugs."
-- submitted from IRC
(Score: 2) by AthanasiusKircher on Monday May 22 2017, @05:17PM (4 children)
The study claiming that 251,454 patients die each year is here [west-info.eu].
Before citing it as strong evidence, note that the 250,000-death estimate is extrapolated from data in three papers that collectively accounted for just 35 "preventable" deaths. No, that's not a typo. That's quite an extrapolation.
Some interesting responses and further commentary here [theguardian.com], here [sciencebasedmedicine.org], here [statnews.com], and here [blogspot.com].
The gist is that the "study" got a lot of press, but it's probably off by at least an order of magnitude. (It's also difficult to determine after the fact how many deaths were actually "preventable" given what clinicians knew at the time.) None of this should excuse medical errors -- even "only" 20,000-ish deaths/year is WAY too many.
First off, we're talking about one guy with a huge amount of influence. Is it possible that there's SOME other medical doctor out there who actually gives out bad advice that likely results in serious side effects, if not deaths, in a large number of patients? Maybe -- but I doubt he'd stay a doctor very long. Malpractice suits alone would likely drive him from the profession, if he wasn't fired or stripped of his license first. Most serious medical errors that result in deaths seem to occur during hospital care. Mercola isn't dealing with that: he's advising people to avoid ALL scientifically proven preventive medicine for many illnesses.
Errors are just that: errors. They are lapses in judgment or whatever. Reasonable physicians with better knowledge and further analysis of the situation can identify what actually went wrong. By the way, you know how such studies KNOW something went wrong? SCIENCE. We look at causality and say, "Huh -- this guy had a tumor, and we didn't cut it out, so he died. Maybe we should cut it out in future patients." Mercola just says, "Oh, it's a fungus! Rub some baking soda on it!" No rigorous analysis. No statistical evidence of effectiveness. Just hokum and quackery.
Making accidental errors that you later can identify as errors is quite different from deliberately promoting stuff that is KNOWN to be false, stuff that contradicts established science, continuing to promote such stuff after you've been definitively disproven, etc.
Car analogy: If I sell you a car that had poor maintenance on the brake system, and you have an accident and die, I made an error. Depending on the situation, I may or may not be legally culpable for negligence. If, on the other hand, I sell you a car with the brake system removed and claim "If you just take these vitamin pills, you can stop your car with the power of your thoughts" and you have an accident and die, I should be rightly called out as a quack deliberately peddling unsafe cars and ridiculous advice.
(Score: 0) by Anonymous Coward on Monday May 22 2017, @06:28PM (2 children)
I went to table 1 and checked the first reference in that table (ref 11). It looks like that one alone dealt with >10,000x more records than you claim:
On the other hand, I haven't checked this paper in detail; it's quite possible it will end up being normal medical-research quality (extremely crappy). But getting such an estimate does not seem problematic in principle (beyond the trouble of defining "error"). So that would be the fault of NIH, CDC, etc., for not funding studies to collect this important info.
(Score: 2) by AthanasiusKircher on Thursday May 25 2017, @08:44PM (1 child)
I don't normally respond to ACs these days, but I need to correct an error here. If you actually read the study at the link you provided (rather than merely its "summary"), you'll find the following statement on page 6:
These "weasel words" are there for very good reasons, despite being juxtaposed with seemingly contradictory rhetoric like "these preventable deaths."
Those "263,864 deaths" quoted in the meta-study were extrapolated from analysis of "16 PSIs" (patient safety indicators). In other words, they didn't actually examine any specific cases to determine whether a "preventable death" occurred due to the details of the case. Instead, they extrapolated from vague issues that potentially indicate a problem with "patient safety." Some of those "indicators" are clearer than others (see Appendix A for the list). "Foreign body left during procedure," for example, sounds like a clear medical error, though even there, whether it was a primary cause of death was not investigated in any specific case in that study. "Post-operative hemorrhage or hematoma," on the other hand: lots of people experience bleeding post-op, especially if they don't follow their doctor's instructions. Trying to extrapolate how many "preventable deaths" occurred from an "indicator" like that seems problematic.
So, how did they come up with their numbers? Well, if you look at the Appendix F from your link, you'll see they extrapolated based on statistics from this study [jamanetwork.com]. Except that study didn't actually examine mortality or "preventable deaths" by examining individual cases either, but rather used a sort of "case-control" methodology to look at the difference between outcomes with patients who did and did not experience these "PSIs." On that basis, they calculated "excess mortality" likely due to those PSIs.
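For intuition, a case-control "excess mortality" extrapolation of the kind described above works roughly like this. The function and all numbers below are purely illustrative assumptions, not the study's actual data or code:

```python
def attributable_deaths(mortality_with_psi, mortality_without_psi, n_psi_events):
    """Estimate deaths attributable to a patient safety indicator (PSI):
    take the mortality-rate difference between patients who did and did not
    experience the PSI, then scale it by the annual number of PSI events.
    This is the step where a per-event rate gets multiplied across a large
    population, so small rate differences produce large death counts."""
    excess_rate = mortality_with_psi - mortality_without_psi
    return excess_rate * n_psi_events

# Hypothetical: a 5% mortality rate with the PSI vs. 3% without, applied to
# 500,000 annual PSI events, yields ~10,000 "attributable" deaths.
print(round(attributable_deaths(0.05, 0.03, 500_000)))  # -> 10000
```

Note what this method does not do: it never establishes, for any individual patient, that the PSI caused the death or that the death was preventable. It only infers an aggregate association, which is exactly the objection being raised here.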
That may sound a little better methodologically (and I agree), but then you read their conclusion: "one can infer that the 18 types of medical injuries may add to a total of 2.4 million extra days of hospitalization, $9.3 billion excess charges, and 32 591 attributable deaths in the United States annually."
So, your linked study took those "excess mortality" estimates and applied them to a new dataset to extrapolate possible deaths (and possible medical errors that may have contributed to them). It then arrived at an estimate for Medicare patients alone that is 2.7 times higher than the estimate for ALL patients in the U.S. in the study I linked (the very study from which it got its mortality estimates), even though the study I linked did a much less rigorous analysis.
Anyhow, I stand by my original statement: only 35 actual cases were studied and determined to be preventable based on individual facts. I'm willing to accept a more rigorous case-control analysis or the like as a way to extrapolate a broader estimate, but I don't see evidence that your linked study or the broader metastudy being discussed here used such methods. And given that their own methodological source put the annual death toll nearly an order of magnitude lower, I'd say there are serious red flags here.
(Score: 2) by AthanasiusKircher on Thursday May 25 2017, @09:05PM
Sorry -- meant to say "much MORE rigorous."
Bottom line: the ~250k/year estimate rests on a metastudy built from (a) three studies that together identified 35 actual preventable deaths, and (b) one other study whose extrapolations borrowed their methodology from yet another study, one that itself estimated only ~32k deaths/year.
(Score: 0) by Anonymous Coward on Monday May 22 2017, @08:07PM
I'd say the current situation is that the way mainstream medical research is done (and used to inform treatments) is like selling a car while having no idea whether it even contains a brake system, because you don't know what one would look like. However, you did check that the car eventually rolls to a stop (the null hypothesis of "no stopping" was rejected), so it probably has brakes.
I have been there. To the meetings, the journal clubs, etc. It is standard to have no idea what a p-value means, yet to use p-values for everything.