Arthur T Knackerbracket has found the following story:
When Purdue unveiled OxyContin in 1996, it touted 12-hour duration. Patients would no longer have to wake up in the middle of the night to take their pills, Purdue told doctors: one OxyContin tablet in the morning and one before bed would provide "smooth and sustained pain control all day and all night."
On the strength of that promise, OxyContin became America's bestselling painkiller, and Purdue reaped $31 billion in revenue.
But OxyContin's stunning success masked a fundamental problem: The drug wears off hours early in many people, a Los Angeles Times investigation found. OxyContin is a chemical cousin of heroin, and when it doesn't last, patients can experience excruciating symptoms of withdrawal, including an intense craving for the drug.
The problem offers new insight into why so many people have become addicted to OxyContin, one of the most abused pharmaceuticals in U.S. history.
Over the last 20 years, more than 7 million Americans have abused OxyContin, according to the federal government's National Survey on Drug Use and Health. The drug is widely blamed for setting off the nation's prescription opioid epidemic, which has claimed more than 190,000 lives from overdoses involving OxyContin and other painkillers since 1999.
The internal Purdue documents reviewed by The Times come from court cases and government investigations and include many records sealed by the courts. They span three decades, from the conception of OxyContin in the mid-1980s to 2011, and include emails, memos, meeting minutes and sales reports, as well as sworn testimony by executives, sales reps and other employees.
The documents provide a detailed picture of the development and marketing of OxyContin, how Purdue executives responded to complaints that its effects wear off early, and their fears about the financial impact of any departure from 12-hour dosing.
Reporters also examined Food and Drug Administration records, Patent Office files and medical journal articles, and interviewed experts in pain treatment, addiction medicine and pharmacology.
Experts said that when there are gaps in the effect of a narcotic like OxyContin, patients can suffer body aches, nausea, anxiety and other symptoms of withdrawal. When the agony is relieved by the next dose, it creates a cycle of pain and euphoria that fosters addiction, they said.
OxyContin taken at 12-hour intervals could be "the perfect recipe for addiction," said Theodore J. Cicero, a neuropharmacologist at the Washington University School of Medicine in St. Louis and a leading researcher on how opioids affect the brain.
Patients in whom the drug doesn't last 12 hours can suffer both a return of their underlying pain and "the beginning stages of acute withdrawal," Cicero said. "That becomes a very powerful motivator for people to take more drugs."
-- submitted from IRC
(Score: 0) by Anonymous Coward on Monday May 22 2017, @06:28PM (2 children)
I went to table 1 and checked the first reference in that table (ref 11). It looks like that one alone dealt with >10,000x more records than you claim:
On the other hand, I haven't checked this paper in detail; it is quite possible it will turn out to be normal medical-research quality (extremely crappy). But getting such an estimate does not seem like it would be problematic in principle (besides the trouble with defining "error"). So that will be the fault of the NIH, CDC, etc. for not funding studies to collect this important info.
(Score: 2) by AthanasiusKircher on Thursday May 25 2017, @08:44PM (1 child)
I don't normally respond to ACs these days, but I need to correct an error here. If you actually read the study in the link you provided (rather than merely its "summary"), you'll find the following statement on page 6:
These "weasel words" are there for very good reasons, despite being juxtaposed with seemingly contradictory rhetoric like "these preventable deaths."
Those "263,864 deaths" quoted in the meta-study were extrapolated from analysis of 16 "PSIs" (patient safety indicators). In other words, they didn't actually examine any specific cases to determine whether a "preventable death" occurred due to the details of the case. Instead, they extrapolated on the basis of vague issues that potentially indicate a problem with "patient safety." Some of those "indicators" are clearer than others (see Appendix A for the list). For example, "foreign body left during procedure" sounds like a clear medical error, though again, whether it was a primary cause of death was not investigated in any specific case in that study. On the other hand, "post-operative hemorrhage or hematoma" -- well, lots of people experience bleeding post-op, especially if they don't adhere to doctors' instructions. Trying to extrapolate how many "preventable deaths" occurred from an "indicator" like that seems problematic.
So, how did they come up with their numbers? Well, if you look at the Appendix F from your link, you'll see they extrapolated based on statistics from this study [jamanetwork.com]. Except that study didn't actually examine mortality or "preventable deaths" by examining individual cases either, but rather used a sort of "case-control" methodology to look at the difference between outcomes with patients who did and did not experience these "PSIs." On that basis, they calculated "excess mortality" likely due to those PSIs.
That may sound a little better methodologically (and I agree), but then you read their conclusion: "one can infer that the 18 types of medical injuries may add to a total of 2.4 million extra days of hospitalization, $9.3 billion excess charges, and 32 591 attributable deaths in the United States annually."
So, your linked study took those estimates of "excess mortality" and applied them to a new dataset to extrapolate possible deaths and possible medical errors that may have contributed to them. It then came up with an estimate for Medicare patients alone that is 2.7 times higher than the estimate for ALL patients in the U.S. in the study I linked (the very study from which it got its mortality estimates), even though the study I linked did a much less rigorous analysis.
Anyhow, I stand by my original statement: only 35 actual cases were studied and determined to be preventable based on individual facts. I'm willing to accept a more rigorous case-control analysis or something similar as a way to extrapolate a broader estimate, but I don't see evidence that your linked study, or the broader meta-study being discussed here, used such methods. And given that their own methodological source estimated the annual death rate as nearly an order of magnitude lower, I'd say there are serious red flags here.
(Score: 2) by AthanasiusKircher on Thursday May 25 2017, @09:05PM
Sorry -- meant to say "much MORE rigorous."
Bottom line: the ~250k/year estimate rests on a meta-study that drew on three studies which together examined 35 actual preventable deaths, plus one other study whose methodology and extrapolation procedures came from yet another study that itself estimated only ~32k/year.
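As a rough sanity check on the gap described above (a sketch using only the two figures quoted in this thread, not any data from the studies themselves):

```python
# Annual U.S. death estimates quoted in the thread
meta_study_deaths = 263_864   # meta-study's extrapolated "preventable deaths"
source_study_deaths = 32_591  # attributable deaths in the study the rates came from

ratio = meta_study_deaths / source_study_deaths
print(f"Meta-study estimate is {ratio:.1f}x the source study's")  # ~8.1x
```

At roughly 8x, the two estimates are indeed "nearly an order of magnitude" apart, which is the discrepancy the parent comment is pointing at.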