Monday, December 30, 2019

Interesting Study On the Origins of the Opioid Crisis

The study is called Origins of the Opioid Crisis and Enduring Impacts. Here is a link to the working paper. The paper attempts to put the blame on Purdue's aggressive marketing of OxyContin, referencing internal Purdue documents that describe its marketing strategy. There are five so-called "triplicate" states, in which multiple copies of a form had to be filled out whenever a doctor prescribed oxycodone, the active ingredient of OxyContin. (I believe the form was required for any Schedule II narcotic, not just Oxy.) These states were Illinois, California, New York, Texas, and Idaho. The paper shows that these states did indeed see less adoption of OxyContin and subsequently fewer overdose deaths compared to other states. Here is the abstract:
Overdose deaths involving opioids have increased dramatically since the mid-1990s, leading to the worst drug overdose epidemic in U.S. history, but there is limited empirical evidence on the initial causes. In this paper, we examine the role of the 1996 introduction and marketing of OxyContin as a potential leading cause of the opioid crisis. We leverage cross-state variation in exposure to OxyContin’s introduction due to a state policy that substantially limited OxyContin’s early entry and marketing in select states. Recently-unsealed court documents involving Purdue Pharma show that state-based triplicate prescription programs posed a major obstacle to sales of OxyContin and suggest that less marketing was targeted to states with these programs. We find that OxyContin distribution was about 50% lower in “triplicate states” in the years after the launch. While triplicate states had higher rates of overdose deaths prior to 1996, this relationship flipped shortly after the launch and triplicate states saw substantially slower growth in overdose deaths, continuing even twenty years after OxyContin's introduction. Our results show that the introduction and marketing of OxyContin explain a substantial share of overdose deaths over the last two decades.
If you're curious about the details, I recommend reading the paper, which is quite readable. Even if you're not familiar with the time series techniques it uses (I'm certainly not), it's easy enough to understand what they're doing and to follow their graphs and tables. That's enough of a summary; here's why I think their basic story is wrong.

Opioid Abuse Didn't Increase Over Time

I've written about this before. The data simply don't show any increase in opioid abuse over the period of interest, even as the number of prescriptions and the tonnage of opioids prescribed increased dramatically. Here is Figure 5 from the paper:

It looks like the OxyContin abuse rate increases from 0.6% to 0.8% of the population in the non-triplicate states from 2004 to 2010 (before coming back down to ~0.6%). Okay. But the abuse rate for other pain relievers decreased from about 4.4% to 3.9% of the population over the same period. (2010 is the year that abuse-resistant OxyContin came on the market; it was harder to crush and thus to override the "time release" nature of the pill. I believe this was also the year that people started noticing the increase in overdose deaths and "do something" policymaking started to push back on opioid prescribing practices.) In other words, the total percent of the population abusing opioids decreased over the period of interest. At best, you could call it a 33% increase in OxyContin abuse specifically for the non-triplicate states (that's 0.2 / 0.6). The triplicate states also increased by about 33%, though, going from 0.3% to 0.4%. Also, abuse rates for non-OxyContin opioids are higher in 2010 to 2012 for the triplicate states, which were supposedly spared the full brunt of the opioid epidemic. Adding the numbers together for 2012, total opioid abuse looks like about 4.5% for triplicate states (I'm crudely eyeballing 0.4% off the left graph and 4.1% off the right) and 4.5% for non-triplicate states (0.6% plus 3.9%). It seems like an important causal link in their story is broken.
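To make the back-of-the-envelope arithmetic explicit, here is a quick sketch that just reproduces the numbers I eyeballed off Figure 5. These values are my rough readings of the graphs, not figures reported in the paper:

```python
# Rough readings of Figure 5 (percent of population abusing), eyeballed from the graphs.
# These are my approximations, not numbers published in the paper.
oxy_nontrip_2004, oxy_nontrip_2010 = 0.6, 0.8      # OxyContin abuse, non-triplicate states
oxy_trip_2004, oxy_trip_2010 = 0.3, 0.4            # OxyContin abuse, triplicate states

oxy_trip_2012, other_trip_2012 = 0.4, 4.1          # 2012 readings, triplicate states
oxy_nontrip_2012, other_nontrip_2012 = 0.6, 3.9    # 2012 readings, non-triplicate states

# Relative increase in OxyContin abuse, 2004-2010
print(f"non-triplicate OxyContin increase: {(oxy_nontrip_2010 - oxy_nontrip_2004) / oxy_nontrip_2004:.0%}")
print(f"triplicate OxyContin increase:     {(oxy_trip_2010 - oxy_trip_2004) / oxy_trip_2004:.0%}")

# Total opioid abuse (OxyContin plus other pain relievers) in 2012
print(f"triplicate total, 2012:     {oxy_trip_2012 + other_trip_2012:.1f}%")
print(f"non-triplicate total, 2012: {oxy_nontrip_2012 + other_nontrip_2012:.1f}%")
```

Both groups come out around 4.5% in 2012, which is the point: the totals don't diverge the way the story requires.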

This is worth pondering, because later in the paper they attempt to blame total drug overdose deaths on the triplicate/non-triplicate difference. They even acknowledge the 2010 introduction of abuse-resistant OxyContin and the subsequent increase in heroin and fentanyl deaths. Here is Figure 6:

You might think that "opioid overdose deaths" implies prescription opioid overdoses, because the paper is ostensibly about the role Purdue Pharma played in encouraging doctors to overprescribe. But their footnote 22 on page 14 implies that they are counting all opioids, including heroin and synthetic narcotics. (They count ICD-10 codes T40.0 - T40.4 and T40.6. T40.1 is heroin, T40.2 is "other opioids," the category that includes OxyContin, T40.3 is methadone, and T40.4 is "other synthetic narcotics." Deaths coded T40.4 probably mostly involved synthetic prescription opioids prior to 2010 or so, but these deaths started to spike around 2013 when heroin started to be laced with fentanyl at increasing rates, and those deaths would generally be coded T40.4.) In fairness, they do the analysis separately for T40.1, T40.2, and T40.4 in Figure A6. In my opinion it is inappropriate to lead with the analysis on all drug overdoses, or even all opioid overdoses, if the claim is that OxyContin specifically is the culprit. They can and should do the analysis using T40.2 deaths that exclude T40.1 and T40.4. In other words, what does their analysis yield when looking at deaths involving prescription opioids but not heroin or illicit fentanyl? Take a look at Figure A6. They show that heroin and synthetic opioid (mostly fentanyl) deaths are higher in non-triplicate states, but the difference is not statistically significant. (For some reason, the confidence intervals are very wide.) If fentanyl and heroin overdoses can't rightly be blamed on Purdue's marketing, it seems they should exclude deaths involving these drugs from the analysis. And in reporting excess mortality rates in non-triplicate states, they should only be reporting "other opioid" (T40.2) mortality, not total drug-related mortality. From the conclusion:
Our estimates (using Table 3, Column 3) show that nontriplicate states would have experienced 4.49 fewer drug overdose deaths per 100,000 on average from 1996-2017 if they had been triplicate states and 3.04 fewer opioid overdose deaths per 100,000.
This is apparently on the basis of total drug overdoses (the 4.49 figure) and total opioid overdoses (the 3.04 figure). Given Figure A6, it seems inappropriate to use these totals. The focus should be on T40.2 mortality. If they are going to report total drug or total opioid mortality, they should note that they are speculating beyond what their analysis shows. They are in effect blaming heroin and fentanyl overdose death rates on Purdue's marketing, even though their analysis shows the difference in these deaths (between triplicate and non-triplicate states) to be statistically insignificant. Here is Figure A6:


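To be concrete about the restriction suggested above (T40.2 deaths that do not also involve heroin or other synthetic narcotics), here is a minimal sketch of the filter I have in mind. The record layout and field names are hypothetical, invented purely for illustration; real multiple-cause-of-death files list several contributing-cause codes per death.

```python
# Hypothetical death records: an underlying-cause code plus a list of
# contributing (multiple-cause) ICD-10 codes. Illustrative only.
deaths = [
    {"underlying": "X42", "contributing": ["T40.2"]},           # prescription opioid only
    {"underlying": "X42", "contributing": ["T40.1", "T40.4"]},  # heroin + synthetic (fentanyl)
    {"underlying": "X44", "contributing": ["T40.2", "T40.4"]},  # Rx opioid + synthetic
]

def rx_opioid_only(record):
    """True if the death involves "other opioids" (T40.2, the category that
    includes OxyContin) and does not involve heroin (T40.1) or other
    synthetic narcotics (T40.4)."""
    codes = set(record["contributing"])
    return "T40.2" in codes and not codes & {"T40.1", "T40.4"}

restricted = [d for d in deaths if rx_opioid_only(d)]
print(len(restricted))  # -> 1 of the 3 illustrative records
```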
Back to the abuse rates. I think there is a contradiction here. The standard narrative, which this paper is implicitly endorsing, is that OxyContin prescriptions stoked the appetite for other opioids and created a new population of addicts, eventually leading to the increasing rates of heroin and fentanyl overdoses. But take another look at Figure 5. Why didn't abuse rates for non-OxyContin opioids increase? Why does this general appetite for opioids fail to show up in the abuse rates? I really wish that people who comment on the opioid crisis would take this more seriously, because it is a major flaw in their story. My best literal reading of the data is that opioid abuse flattened out by 2000 or so, even though prescriptions continued to skyrocket and drug poisoning deaths continued to climb from 2000 to the present. (It is not clear what abuse rates did prior to 2000. Presumably they rose a little, but that's far from obvious.) Did the population of illicit opioid users saturate by 2000? Did Purdue's marketing, which started in 1996, take only four short years to reach this peak? Was the continuing upward trend in deaths a result of more intense use by this (supposedly new) class of drug users? I'm not picking on the authors of this paper here, but people need to be more specific with their timelines.

Including Suicides In the Analysis

Back to footnote 22. They are looking at deaths with underlying cause of death codes X40-X44 (accidental drug poisonings), X60-X64 (suicides involving drugs), X85 (homicide by drug poisoning), and Y10-Y14 (drug poisonings of undetermined intent). Suicides are a relatively small proportion of total drug deaths. But why include them at all? Are the authors implying that suicides by opioid wouldn't have happened but for the increase in opioid prescribing? They should redo their analysis just on the accidental overdoses, X40-X44, because these are the only deaths that can properly be considered part of the opioid crisis, in the sense that the suicides likely would have happened anyway by some other means. This is a little odd, because they even have a section on "deaths of despair," Section 5.4.3. They analyze suicides and alcohol-related liver disease and find that triplicate and non-triplicate states don't have different trends in these mortality rates. I suspect redoing the analysis on accidental drug poisonings alone would make the triplicate vs. non-triplicate differences larger; this one change might strengthen their conclusion.
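As a sketch of what that restriction would look like, using the same hypothetical record layout as the sketch above (again, the field names and records are invented for illustration):

```python
# Keep only accidental drug poisonings by underlying cause (X40-X44),
# dropping suicides (X60-X64), homicide (X85), and undetermined intent (Y10-Y14).
ACCIDENTAL = {f"X{n}" for n in range(40, 45)}  # {"X40", ..., "X44"}

deaths = [
    {"underlying": "X42", "contributing": ["T40.2"]},  # accidental poisoning
    {"underlying": "X62", "contributing": ["T40.2"]},  # suicide by drug poisoning
    {"underlying": "Y12", "contributing": ["T40.3"]},  # undetermined intent
]

accidental_only = [d for d in deaths if d["underlying"] in ACCIDENTAL]
print(len(accidental_only))  # -> 1 of the 3 illustrative records
```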

Major Metropolitan Areas

Take another look at the list of "triplicate" states. New York, California, Illinois, and Texas include the four largest American cities: New York, Los Angeles, Chicago, and Houston. Population estimates for these cities' metropolitan areas imply that an enormous share of these states' population is concentrated around a few cities. It's possible that the differences between triplicate and non-triplicate states are driven by a few major metropolitan areas. It's not too hard to imagine that four or five major cities might just be idiosyncratically different from the rest of the nation. It might be interesting for the authors to redo their analysis on, say, Chicagoland vs. southern and central Illinois, or New York City versus a rural, mountainous part of New York state. (New York is a lot of unpopulated mountains and forests with a few big cities. Beautiful to drive through, by the way. Lots and lots of nothing until you reach a big city.) Notably, Idaho, the only state on the list that doesn't have a mega-metropolis, is something of an outlier in the triplicate group. See Figure A3 from the paper.

(Be careful with the metropolitan area link above; some of these areas span state lines. The Chicago metro area includes parts of Indiana and Wisconsin, for example, and the New York metro area includes parts of New Jersey and Pennsylvania, if I'm reading it right. It might be interesting to see whether the parts of the metro areas within the triplicate states differ from the parts outside them.)

The paper does try to control for this problem in a couple of ways. See section 5.4.1. They redo their analysis comparing triplicate states to the non-triplicate states with the largest populations and get similar results. They also do their analysis for urban vs. non-urban counties and get similar results. None of this rules out the possibility that the four or five largest cities are just idiosyncratically different from the rest of the nation in ways that have nothing to do with prescription monitoring laws, and that this difference is driving the results. 

Is Purdue's Marketing To Blame or Are Prescription Monitoring Laws To Blame?

There is something implausible about the story that Purdue's marketing is to blame. Purdue apparently decided not to market as aggressively in the triplicate states because it thought that doctors in those states would be less likely to prescribe OxyContin. If Purdue was right about this, then maybe it's actually the presence of the prescription monitoring law and not Purdue's marketing that caused the difference in OxyContin prescribing and overdose deaths.

The paper's story is that Purdue's marketing is to blame. This conclusion relies heavily on a Purdue internal memo describing its marketing strategy. See Figure A1, an image of the memo suggesting they avoid triplicate states. But they don't actually have any data on what Purdue spent on marketing. From the paper:

The statements made in these internal documents suggest that Purdue Pharma viewed triplicate programs as a substantial barrier to OxyContin prescribing and would initially target less marketing to triplicate states because of the lower expected returns. While we do not have data that breaks down Purdue Pharma’s initial marketing spending by state to confirm this directly, we will show that the triplicate states had among the lowest OxyContin adoption rates in the country.
Emphasis mine. 

The paper explicitly considers the possibility that it's the law itself, and not Purdue's marketing strategy, that caused the difference between triplicate and non-triplicate states. See Section 5.4.2. It tests whether other prescription monitoring programs provided some level of protection against the opioid crisis. Many states had electronic prescription drug monitoring programs (PDMPs). These should have affected prescribing behavior in much the same way triplicate laws did, but the paper found no such effect. Their story is that triplicate laws required the doctor to actually fill out and store a physical paper form, a hassle that made the cost more burdensome and the potential scrutiny more salient in doctors' minds. The paper also discusses two former triplicate states that had repealed their triplicate laws in 1994 (prior to the 1996 introduction of OxyContin): Indiana and Michigan. These states should have had a prescribing culture similar to the other triplicate states given the recency of the law change, but the paper found no apparent effect of this prescribing culture on subsequent mortality. From the paper:
[W]e compare the five triplicate states to the two former triplicate states that had discontinued their programs prior to 1996. In both tests, we find that the five triplicate states have uniquely low exposure to OxyContin and drug overdose rate growth even when compared to states with more comparable prescribing cultures. This evidence supports the role of Purdue Pharma’s marketing rather than cultural factors and entrenched prescribing habits in explaining OxyContin exposure and mortality patterns.
Emphasis mine. This is really quite stunning. They are making Purdue's marketing uniquely responsible for the observed differences between triplicate and non-triplicate states, independent of the laws that Purdue was actually concerned about. Is the take-away here that Purdue might as well have marketed in Texas, Illinois, California, New York, and Idaho? Their marketing instincts were wrong? Their marketing was so powerful and persuasive that they would have successfully convinced doctors to prescribe in those states, too, despite triplicate laws? If I'm taking their results seriously, it seems that's what I'd have to conclude.

Consider some counterfactuals. What if all states had had triplicate laws in 1996? Would Purdue have just marketed everywhere? It seems implausible that they would have simply declined to market anywhere, or marketed less aggressively everywhere, unless the triplicate laws themselves were driving prescribing behavior. Suppose half of all states had had triplicate laws. Does that mean Purdue would have marketed less aggressively in half of the states? Or would they have researched the impact of their marketing more thoroughly and concluded (as the paper apparently does) that marketing trumps the effects of a prescription monitoring law? It's difficult to come up with a reasonable policy implication, even one that we could hypothetically enact in 1996 to prevent the expansion of OxyContin.

Is the conclusion that pharmaceutical companies shouldn't be allowed to market their drugs at all? Keep in mind that "marketing" is communication between doctors and pharmaceutical companies, usually through pharmaceutical reps who may or may not have a scientific background. It seems like this communication has to be allowed to take place through some channel or another, so I don't see any reasonable way to "ban marketing" by pharmaceutical companies. Doctors are generally more scientifically sophisticated than the reps, and they know these people are trying to sell something. They know that the facts they're being presented with are a biased sample of all facts available, and they are capable of checking their veracity. It's harder than you might think to fool people. At any rate, Purdue was correctly informing doctors that opioids are less addictive than everyone assumed. I'd like to see Purdue's critics be more precise about what exactly their deception was. Did they say the risk of addiction was 1% but really it's 2%? Did Purdue claim a rate of addiction that seemed correct by the evidence available at the time but turned out subsequently to be higher after two decades of expanded opioid access? I'd like to see the critics acknowledge that addiction rates were and are quite low by any standard. 

Market Share

It's not discussed in the paper, but it's notable that Purdue's share of the opioid market was actually pretty small. Here's an image lifted from an FT piece titled "Purdue Pharma's One-Two Punch." 


Interestingly, that article is something of a hit piece on Purdue, "exposing" that Rhodes Pharma is a subsidiary owned by the Sackler family. But it's obvious from the chart that even Rhodes's and Purdue's combined share wouldn't make the Sacklers the largest single entity. Eyeballing the chart, it looks like they have 6% or 7% of the market. (It's interesting when an author gives you enough information to conclude that their story is fundamentally wrong.)

In other words, Purdue's marketing was so successful that it created a market several times larger than Purdue's own share of that market! This is quite surprising, to say the least. A result can be surprising while still being true, but it's worth taking a moment to answer some obvious questions. Why couldn't Purdue capitalize more effectively on the market it created? I realize that generics eventually came on the market, and generics tend to be cheaper and thus more popular. Still, it's shocking that they would end up with only about 7% of the market if their marketing was so influential. Do we really suppose that these other companies are all just copycats following Purdue's lead? That nobody else would have hit upon the idea of a time-release opioid pill during an era when doctors' attitudes about opioids and pain management were shifting? Does it make any sense to hold Purdue uniquely responsible? Does it make sense to say that, in a but-for sense, many of these overdose deaths wouldn't have happened if not for Purdue's marketing campaign? A lot of implausible claims are leaning on a few sentences lifted from Purdue's internal documents.

This was an interesting paper and it presents some interesting new facts, but I don't think its basic story is right. I think the most damning point is the flat opioid abuse rates, and this alone ruins their story. But there were some other (admittedly subtle) problems with their analysis and some things that just didn't make any sense. 
