
posted by Fnord666 on Monday May 22 2017, @07:11AM
from the prescription-for-a-disaster dept.

Arthur T Knackerbracket has found the following story:

Patients would no longer have to wake up in the middle of the night to take their pills, Purdue told doctors. One OxyContin tablet in the morning and one before bed would provide "smooth and sustained pain control all day and all night."

When Purdue unveiled OxyContin in 1996, it touted 12-hour duration.

On the strength of that promise, OxyContin became America's bestselling painkiller, and Purdue reaped $31 billion in revenue.

But OxyContin's stunning success masked a fundamental problem: The drug wears off hours early in many people, a Los Angeles Times investigation found. OxyContin is a chemical cousin of heroin, and when it doesn't last, patients can experience excruciating symptoms of withdrawal, including an intense craving for the drug.

The problem offers new insight into why so many people have become addicted to OxyContin, one of the most abused pharmaceuticals in U.S. history.

Over the last 20 years, more than 7 million Americans have abused OxyContin, according to the federal government's National Survey on Drug Use and Health. The drug is widely blamed for setting off the nation's prescription opioid epidemic, which has claimed more than 190,000 lives from overdoses involving OxyContin and other painkillers since 1999.

The internal Purdue documents reviewed by The Times come from court cases and government investigations and include many records sealed by the courts. They span three decades, from the conception of OxyContin in the mid-1980s to 2011, and include emails, memos, meeting minutes and sales reports, as well as sworn testimony by executives, sales reps and other employees.

The documents provide a detailed picture of the development and marketing of OxyContin, how Purdue executives responded to complaints that its effects wear off early, and their fears about the financial impact of any departure from 12-hour dosing.

Reporters also examined Food and Drug Administration records, Patent Office files and medical journal articles, and interviewed experts in pain treatment, addiction medicine and pharmacology.

Experts said that when there are gaps in the effect of a narcotic like OxyContin, patients can suffer body aches, nausea, anxiety and other symptoms of withdrawal. When the agony is relieved by the next dose, it creates a cycle of pain and euphoria that fosters addiction, they said.

OxyContin taken at 12-hour intervals could be "the perfect recipe for addiction," said Theodore J. Cicero, a neuropharmacologist at the Washington University School of Medicine in St. Louis and a leading researcher on how opioids affect the brain.

Patients in whom the drug doesn't last 12 hours can suffer both a return of their underlying pain and "the beginning stages of acute withdrawal," Cicero said. "That becomes a very powerful motivator for people to take more drugs."

-- submitted from IRC


Original Submission

 
  • (Score: 3, Informative) by AthanasiusKircher on Monday May 22 2017, @03:04PM (7 children)

    by AthanasiusKircher (5291) on Monday May 22 2017, @03:04PM (#513523) Journal

    Indeed. I'm often shocked at how links to Mercola's site turn up near the top of any internet search on a medical topic. Mainstream medicine and Big Pharma have had their share of scandals over the years, but Mercola seems to seek out every form of quackery there is and put his personal stamp of approval on it.

    He actively encourages people to ignore a lot of mainstream medical advice. Aside from being an anti-vaxxer (which itself is likely to lead to a lot more serious childhood illnesses, serious complications, and deaths in the coming years), he's an AIDS denialist, he encourages people on prescription medications to stop taking them (even if they have serious conditions), and he is anti-sunscreen (well, unless you use his questionable "natural" stuff like "green tea" that he'll sell you) even though skin cancer is becoming a more serious problem... all while hawking his own tanning beds! The guy has even gone so far as to promote his own bizarre BS "thermal" testing as a safer alternative to mammograms for breast cancer screening, or to claim that cancer is merely a "fungus" that can be cured with baking soda. (I wish I were making this stuff up.) The dude is even an eyeglasses denialist [mercola.com]!

    And this is aside from all of his anti-science propaganda and promotion of conspiracy theories. The guy is a public menace. I have absolutely no doubt that thousands of people every year suffer serious ill effects if not life-threatening or fatal ones due to following his advice.

  • (Score: 0, Disagree) by Anonymous Coward on Monday May 22 2017, @03:44PM (6 children)

    by Anonymous Coward on Monday May 22 2017, @03:44PM (#513546)

    I have absolutely no doubt that thousands of people every year suffer serious ill effects if not life-threatening or fatal ones due to following his advice.

    Well, I've never read that site, but how does it compare to just medical errors:

    Their analysis, published in the BMJ on Tuesday, shows that “medical errors” in hospitals and other health-care facilities are incredibly common and may now be the third-leading cause of death in the United States — claiming 251,000 lives every year, more than respiratory disease, accidents, stroke and Alzheimer’s.

    https://www.washingtonpost.com/news/to-your-health/wp/2016/05/03/researchers-medical-errors-now-third-leading-cause-of-death-in-united-states/ [washingtonpost.com]

    The thing is that just doing nothing (or a placebo) is often the best choice when it comes to medical treatment. Our understanding is extremely rudimentary (can you pick one precise prediction in this area that came true in the last ten years?), and it is easier to break things you don't understand than to fix them. Further, there is no current incentive to improve on this, since researchers in that area continue to get away with doing NHST and p-hacking instead of science.
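
    (To make the p-hacking point concrete: the following is a toy simulation, not tied to any real study, of one common form of it -- re-testing after every new data point and stopping as soon as p < 0.05. Even with nothing to find, the false-positive rate comes out far above the nominal 5%.)

        from math import erf, sqrt
        import random

        random.seed(0)

        def p_value(data):
            # Two-sided p for a one-sample z-test against mean 0 (known sd = 1).
            z = (sum(data) / len(data)) * sqrt(len(data))
            return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

        def peek_until_significant(start_n=10, max_n=100, alpha=0.05):
            # Draw from a true null (mean 0), re-test after every new
            # observation, and stop as soon as p < alpha.
            data = [random.gauss(0, 1) for _ in range(start_n)]
            while len(data) < max_n:
                if p_value(data) < alpha:
                    return True    # "significant" despite a true null
                data.append(random.gauss(0, 1))
            return p_value(data) < alpha

        trials = 2000
        hits = sum(peek_until_significant() for _ in range(trials))
        print(f"False-positive rate with peeking: {hits / trials:.2f} (nominal 0.05)")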

    This guy's ideas seem less dangerous than a bunch of deeply misinformed people armed with super-concentrated chemicals who want to inject you and cut you open.

    • (Score: 0) by Anonymous Coward on Monday May 22 2017, @03:57PM

      by Anonymous Coward on Monday May 22 2017, @03:57PM (#513556)

      Same AC. And let's not forget that the number of medical degrees given out is purposefully limited, so that almost all doctors are sleep-deprived: https://www.theguardian.com/society/2016/dec/01/junior-doctors-sleep-deprivation-poses-threat-to-patients-says-gmc [theguardian.com]

      The entire medical culture is sick as hell, and you should be very scared of having any interaction with it. And once again, that isn't to say any of this alternative stuff works; it is just less dangerous. It's similar to how people started noticing how strong the placebo effect was during the Civil War, because standard treatments like bloodletting were killing patients. And think about the mountain of details they argued about back then too (how much to bleed, from where, the mechanism of action, etc.). It would have seemed unbelievable that all of that was based on nothing and doing more harm than good, wouldn't it?

    • (Score: 2) by AthanasiusKircher on Monday May 22 2017, @05:17PM (4 children)

      by AthanasiusKircher (5291) on Monday May 22 2017, @05:17PM (#513600) Journal

      This study that claims 251,454 patients die each year is here [west-info.eu].

      Before citing it as strong evidence, you should note that that estimate of 250,000 deaths is based on data from three papers that collectively accounted for 35 "preventable" deaths. No, that's not a typo. That's quite an extrapolation.
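
      (To illustrate the kind of extrapolation involved -- with placeholder numbers, not the actual figures from those papers -- a handful of "preventable" deaths among a few thousand reviewed admissions gets converted into a rate and then multiplied across every hospital admission in the country:)

          # Illustrative sketch only: placeholder numbers, NOT the BMJ analysis's inputs.
          deaths_in_sample     = 35          # "preventable" deaths actually reviewed
          admissions_in_sample = 4_000       # hypothetical number of reviewed admissions
          national_admissions  = 35_000_000  # rough order of annual US hospital admissions

          rate = deaths_in_sample / admissions_in_sample
          national_estimate = rate * national_admissions
          print(f"Extrapolated 'preventable' deaths/year: {national_estimate:,.0f}")
          # ~306,000 here -- a rate measured on a few thousand charts scaled up by a
          # factor of nearly 10,000, which is why those 35 underlying cases matter.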

      Some interesting responses and further commentary here [theguardian.com], here [sciencebasedmedicine.org], here [statnews.com], and here [blogspot.com].

      The gist is that the "study" got a lot of press, but it's probably off by at least an order of magnitude. (It's also difficult to determine after the fact how many deaths were actually "preventable" given what was known by clinicians at the time.) None of this should excuse medical errors -- even "only" 20,000ish deaths/year is WAY too many.

      This guy's ideas seem less dangerous than a bunch of deeply misinformed people armed with super-concentrated chemicals who want to inject you and cut you open.

      First off, we're talking about one guy with a huge amount of influence. Is it possible that there's SOME other medical doctor out there who actually gives out bad advice that likely results in serious side effects, if not deaths, in a large number of patients? Maybe -- but I doubt he'd stay a doctor very long. A string of malpractice suits would likely drive him from the profession, if he weren't fired or stripped of his license first. It seems most serious medical errors that result in deaths occur during hospital care. Mercola isn't dealing with that: he's advising people to avoid ALL scientifically proven preventative medicine for many illnesses.

      Errors are just that: errors. They are lapses in judgment or whatever. Reasonable physicians with better knowledge and further analysis of the situation can identify what actually went wrong. By the way, you know how such studies KNOW something went wrong? SCIENCE. We look at causality and say, "Huh -- this guy had a tumor, and we didn't cut it out, so he died. Maybe we should cut it out in future patients." Mercola just says, "Oh, it's a fungus! Rub some baking soda on it!" No rigorous analysis. No statistical evidence of effectiveness. Just hokum and quackery.

      Making accidental errors that you later can identify as errors is quite different from deliberately promoting stuff that is KNOWN to be false, stuff that contradicts established science, continuing to promote such stuff after you've been definitively disproven, etc.

      Car analogy: If I sell you a car that had poor maintenance on the brake system, and you have an accident and die, I made an error. Depending on the situation, I may or may not be legally culpable for negligence. If, on the other hand, I sell you a car with the brake system removed and claim "If you just take these vitamin pills, you can stop your car with the power of your thoughts" and you have an accident and die, I should be rightly called out as a quack deliberately peddling unsafe cars and ridiculous advice.

      • (Score: 0) by Anonymous Coward on Monday May 22 2017, @06:28PM (2 children)

        by Anonymous Coward on Monday May 22 2017, @06:28PM (#513647)

        Before citing it as strong evidence, you should note that that estimate of 250,000 deaths is based on data from three papers that collectively accounted for 35 "preventable" deaths. No, that's not a typo. That's quite an extrapolation.

        I went to Table 1 and checked the first reference in that table (ref 11). It looks like that one alone dealt with nearly 10,000 times more records than you claim:

        Of the total of 323,993 deaths among patients who experienced one or more PSIs from 2000 through 2002, 263,864, or 81%, of these deaths were potentially attributable to the patient safety incident(s)

        http://www.providersedge.com/ehdocs/ehr_articles/Patient_Safety_in_American_Hospitals-2004.pdf [providersedge.com]

        On the other hand, I haven't checked this paper in detail; it is quite possible it will end up being of normal medical research quality (extremely crappy). But getting such an estimate does not seem like it would be problematic in principle (besides the trouble with defining "error"). If it is, that will be the fault of the NIH, CDC, etc. for not funding studies to collect this important info.

        • (Score: 2) by AthanasiusKircher on Thursday May 25 2017, @08:44PM (1 child)

          by AthanasiusKircher (5291) on Thursday May 25 2017, @08:44PM (#515697) Journal

          I don't normally respond to ACs these days, but I need to correct an error here. If you actually read the study at the link you provided (rather than merely its "summary"), you'll find the following statement on page 6:

          We determined that the 16 PSIs we studied may have contributed to 263,864 deaths in the Medicare population from 2000 through 2002. Eighty-one percent of these preventable deaths were potentially attributable to the patient safety incident.

          These "weasel words" are there for very good reasons, despite being juxtaposed with seemingly contradictory rhetoric like "these preventable deaths."

          Those "263,864 deaths" quoted in the meta-study were extrapolated from analysis of "16 PSIs," which stands for "patient safety indicator." In other words, they didn't actually examine any specific cases to determine whether a "preventable death" occurred due to the details of the case. Instead, they extrapolated on the basis of vague issues that potentially indicate a problem with "patient safety." Some of those "indicators" seem clearer than others (see Appendix A for the list). For example, "foreign body left during procedure" sounds like a clear medical error, though again whether it was a primary cause of death was not investigated in any specific case in that study. On the other hand, "Post-operative hemorrhage or hematoma" -- well, lots of people experience bleeding post-op, especially if they don't adhere to doctor's instructions. Trying to extrapolate how many "preventable deaths" occurred based on an "indicator" like that seems problematic, though.

          So, how did they come up with their numbers? Well, if you look at Appendix F from your link, you'll see they extrapolated based on statistics from this study [jamanetwork.com]. Except that study didn't assess mortality or "preventable deaths" by examining individual cases either; rather, it used a sort of "case-control" methodology to look at the difference in outcomes between patients who did and did not experience these "PSIs." On that basis, they calculated "excess mortality" likely due to those PSIs.

          That may sound a little better methodologically (and I agree), but then you read their conclusion: "one can infer that the 18 types of medical injuries may add to a total of 2.4 million extra days of hospitalization, $9.3 billion excess charges, and 32,591 attributable deaths in the United States annually."
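
          (To spell out that attribution logic with made-up numbers -- these are not the JAMA study's figures -- you compare death rates between admissions with and without a given PSI and multiply the difference by the number of flagged admissions:)

              # Illustrative sketch of "excess mortality" attribution; placeholder numbers.
              flagged_admissions  = 100_000   # hypothetical admissions with a given PSI
              mortality_with_psi  = 0.070     # hypothetical death rate among flagged cases
              mortality_without   = 0.045     # hypothetical death rate among matched controls

              excess_rate = mortality_with_psi - mortality_without
              attributable_deaths = excess_rate * flagged_admissions
              print(f"Deaths statistically attributed to the PSI: {attributable_deaths:,.0f}")
              # 2,500 here -- a population-level attribution, not a case-by-case finding
              # that any particular death was caused by (or preventable given) the incident.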

          So, your linked study took those estimates of "excess mortality" and applied them to a new dataset to extrapolate possible deaths and the possible medical errors that may have contributed to them. It then came up with an estimate for Medicare patients alone that is 2.7 times higher than the estimate for ALL patients in the U.S. in the study I linked (which is where they got their mortality estimates in the first place), even though the study I linked did a much less rigorous analysis.

          Anyhow, I stand by my original statement: only 35 actual cases were studied and determined to be preventable based on the individual facts. I'm willing to accept a more rigorous case-control analysis or the like as a way to extrapolate a broader estimate, but I don't see evidence that your linked study, or the broader metastudy being discussed here, used such methods. And given that their own source for the methodology estimated the annual death toll at nearly an order of magnitude lower, I'd say there are serious red flags here.

          • (Score: 2) by AthanasiusKircher on Thursday May 25 2017, @09:05PM

            by AthanasiusKircher (5291) on Thursday May 25 2017, @09:05PM (#515703) Journal

            even though the study I linked did a much less rigorous analysis.

            Sorry -- meant to say "much MORE rigorous."

            Bottom line: the ~250k/year estimate rests on a metastudy built from 3 studies that together looked at 35 actual preventable deaths, plus one other study whose extrapolations borrowed their methodology and procedures from yet another study -- one that itself came up with an estimate of ~32k/year.

      • (Score: 0) by Anonymous Coward on Monday May 22 2017, @08:07PM

        by Anonymous Coward on Monday May 22 2017, @08:07PM (#513717)

        Car analogy: If I sell you a car that had poor maintenance on the brake system, and you have an accident and die, I made an error. Depending on the situation, I may or may not be legally culpable for negligence. If, on the other hand, I sell you a car with the brake system removed and claim "If you just take these vitamin pills, you can stop your car with the power of your thoughts" and you have an accident and die, I should be rightly called out as a quack deliberately peddling unsafe cars and ridiculous advice.

        I'd say the current situation is that the way mainstream medical research is done (and used to inform treatments) is like selling a car while having no idea whether it contains a brake system or not, because you don't know what one would look like. However, you did check that the car rolls to a stop eventually (the null hypothesis of "no stopping" was rejected), so it probably has brakes.

        I have been there. To the meetings, the journal clubs, etc. It is standard to have no idea what a p-value means, yet to use p-values for everything.
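
        (For anyone who wants the definition spelled out: a p-value is the probability, computed assuming the null hypothesis is true, of data at least as extreme as what was observed. A toy simulation -- not tied to any real study -- shows that under a true null, p-values are uniformly distributed, so "p < 0.05" fires about 5% of the time even when nothing is going on.)

            from math import erf, sqrt
            import random

            random.seed(1)

            def one_sample_p(n=30):
                # Data generated under a true null: mean 0, known sd 1.
                data = [random.gauss(0, 1) for _ in range(n)]
                z = (sum(data) / n) * sqrt(n)
                return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p

            ps = [one_sample_p() for _ in range(10_000)]
            print("Fraction of true-null tests with p < 0.05:",
                  sum(p < 0.05 for p in ps) / len(ps))   # comes out near 0.05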