Wednesday, October 20, 2021

Overall Mortality and Drug-Related Mortality by Age, 1999-2018

With Covid-19 hitting the world in early 2020, I thought a lot about how to measure public health crises. Covid deaths were numerous enough to show up in the all-cause mortality figures, which basically ruled out the possibility that Covid was far less severe or far less common than the official statistics implied. Cause-of-death misattributions could conceivably overstate deaths in one category while understating those in another, but they couldn't drive an upward trend in overall mortality. The excess deaths look particularly stark if you break the analysis down by demographic and focus on the groups hit hardest by the virus. The all-cause mortality figures ruled out a lot of kooky conspiracy theories and narratives that minimized the severity of the virus. 

I thought this fixation on all-cause mortality could be useful in other contexts. Of particular interest to me was drug-related mortality. I believe that there is a huge misattribution problem in drug-related causes of death, particularly with regard to prescription opioids. If someone dies with a syringe in his arm, the cause of death is perhaps unambiguous. But millions of people are walking around with blood levels of opioids that would kill a naïve drug user. Surely these people sometimes drop dead of unrelated causes, like a sudden fatal heart arrhythmia, and the medical examiner wrongly marks the death as a drug overdose. If drug overdoses (as marked on the death certificate) are a large proportion of deaths, but all-cause mortality doesn't change much, that raises suspicions. Of course, other causes of death could be falling at the same rate that drug overdoses are rising. It's certainly possible. But it would look really suspicious. 

I downloaded the all-cause mortality by year, age, race, and gender from the CDC website here. I appended total drug deaths based on my decoding of the large cause-of-death file (which I've reported on for several years; see my write-ups of the data for 2016, 2017, and 2018). Since the opioid epidemic is disproportionately a white male phenomenon, I limited the analysis to that demographic. (Males have something like double the drug-related mortality of females, so the effect of an opioid crisis will pop out more in their all-cause mortality if we limit the analysis to them.) Keep that in mind for all charts and numbers below: the dataset is filtered for white males. Many of my observations stand without this filtering; I did the analysis both with and without. I'm sharing the filtered version in this post. 

Another thing to keep in mind is that prescription opioid deaths dominated the trends in drug-related deaths from 1999 to about 2010. After 2010, heroin and synthetic narcotics like fentanyl began to dominate the trends in drug deaths, and prescription opioid deaths tended to flatten out. I will refer to these periods below and try to justify my claim that these are two different epochs. 

First I want to get a feel for the age distribution of drug-related mortality. Below is a density plot, which shows the distribution of deaths by age in each of three years. It looks like the distribution shifted slightly older between 2000 and 2010, then younger by 2018. The average (not shown) bounces around between 40 and 42.5 over this time period, which doesn't sound like much. But the distributional shift maps onto a story of prescription opioid use rising in the 2000-2010 period, most pronounced in the 40+ demographic (the modal drug poisoning is just shy of age 50 in 2010!). Then recreational heroin and fentanyl use rises in the 2010-present period, hitting younger users harder. The modal age of death is around 30 in 2018, but note the fatter tail, extending further into older ages than in 2000 or 2010. The mode can shift a lot without the overall average budging much. Anyway, the story of pills being an older phenomenon and street drugs being a younger phenomenon makes intuitive sense to me and is consistent with what I know generally about drug use patterns.
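That last point, that the mode can swing while the average barely moves, is easy to see with a toy calculation. Every number below is invented for illustration, not taken from the CDC data:

```python
# Toy illustration (invented counts, not the CDC data) of how the modal age
# of death can swing a lot while the death-weighted average barely moves.
deaths_2010 = {30: 800, 40: 900, 48: 1000, 55: 400}   # age -> deaths
deaths_2018 = {30: 1200, 40: 700, 48: 600, 60: 500}

def modal_age(counts):
    # Age with the most deaths (the peak of the density plot)
    return max(counts, key=counts.get)

def mean_age(counts):
    # Death-weighted average age
    return sum(age * n for age, n in counts.items()) / sum(counts.values())

print(modal_age(deaths_2010), round(mean_age(deaths_2010), 1))  # 48 41.9
print(modal_age(deaths_2018), round(mean_age(deaths_2018), 1))  # 30 40.9
```

Here the mode drops from 48 to 30 while the mean moves by about one year.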


Next I want to show what drug-related mortality is doing by age group over this time period. Below I am showing drug mortality per 100k for ages 20 through 59. It definitely looks like there is an upward march. Here I am not making a distinction that I usually make, in which I break out suicides from unintentional overdoses. I am simply including all deaths in which a drug was mentioned on the death certificate (so this actually includes some car accidents, drownings, and other deaths that generally aren't counted in studies of the overdose epidemic, but which can plausibly be blamed on drug use). Some ages show a fairly consistent upward trend over the period; others show temporary plateaus. Between, say, ages 25 and 50, there is an evident spike in the 2010+ period, a very clear break in the trendline. This is obviously related to the explosion of fentanyl-related poisonings over the period. (The grey box on top of each plot shows the age of the decedent.) 
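The rates in these charts are just deaths per 100,000 population. A minimal sketch with made-up inputs (the real ones come from CDC WONDER, filtered to white males, by single year of age):

```python
# Deaths per 100,000 population, the standard mortality-rate scaling.
def rate_per_100k(deaths, population):
    return deaths / population * 100_000

deaths_age_35 = 630            # hypothetical drug-involved deaths at age 35
population_age_35 = 2_100_000  # hypothetical population at that age
print(round(rate_per_100k(deaths_age_35, population_age_35), 1))  # 30.0
```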


So what is happening to all-cause mortality over the same period? Here is what those numbers look like.


I found this odd. A major cause of death is rising, and yet it's only evident in the all-cause mortality numbers for, say, ages 25 through 40. The heroin/fentanyl epidemic is evident in the all-cause trends at ages ~25 to 45. Every age in this range shows an increase at least in the last few years (say, 2015-2018). Some show an increase over the full 2010-2018 period, which matches the period when heroin/fentanyl deaths were really exploding. The all-cause mortality patterns basically validate the idea that there's a serious illicit drug problem in the 2010-to-present period. 

What about the period from 1999 to 2010? This is a time when overdose deaths were dominated by prescription pills rather than illicit opioids. The only ages for which there is an obvious upward trend over this period (granting some substantial reversals and plateaus) are maybe ages 24 to 32 (not that any of these endpoints are exact). At ages 20 and 21, there is actually a decline in overall mortality. For people in their mid-30s, the trend looks pretty flat. For ages 36 to 49 the trendlines slope downward for this period. This is weird. Something is apparently offsetting the drug-related mortality over this period in some of the hardest-hit demographics.

It's not like the drug-related deaths are just a rounding error, either. Many analyses (in particular I'm thinking of Anne Case and Angus Deaton's recent work) have suggested that drug-related mortality is causing an overall decline in life expectancy, which would imply that drug deaths are a substantial fraction of overall deaths. Indeed they are. Let's look at the actual numbers. 


Supposedly, drug-related mortality rises from ~5% to ~15% of deaths for 20-year-olds over the 1999-2010 period. And yet there is no evidence of this in their all-cause mortality figures? For 33-year-olds it rises from ~10% to ~20% of overall deaths, and yet the all-cause trend is flat? People in their 40s are mostly seeing improvements in mortality over this period, and yet they're also seeing substantial percentage-point increases here. Either something is offsetting the "opioid epidemic" (at least up until 2010), or the trend in drug-related deaths is spurious (misattributing deaths from other causes to opioids, say). Given the substantial fraction of deaths that are supposedly attributable to drug-related causes, something should be evident in the all-cause mortality for basically all age categories. But that's not what we're seeing.
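The arithmetic tension here is worth spelling out. With made-up round numbers (not the CDC figures): if the drug share of deaths triples while non-drug deaths hold still, total deaths must rise noticeably.

```python
# Hypothetical numbers illustrating why a rising drug *share* of deaths
# should show up in all-cause totals unless something else is offsetting it.
all_deaths_1999 = 1000
drug_share_1999 = 0.05
drug_share_2010 = 0.15

non_drug_1999 = all_deaths_1999 * (1 - drug_share_1999)  # 950 non-drug deaths

# If non-drug deaths are unchanged and drugs are now 15% of the total, then
# total * (1 - 0.15) = 950, so total = 950 / 0.85.
implied_total_2010 = non_drug_1999 / (1 - drug_share_2010)
print(round(implied_total_2010))  # ~1118, roughly a 12% rise in all deaths
```

A flat all-cause trend alongside that share increase implies non-drug deaths fell to compensate, which is exactly the puzzle.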

I only have the foggiest notion of how to do a proper "excess death" calculation.* The exercise is particularly fraught when you have a moving target like this, with very different trends at different ages. I don't even think "excess deaths" is a well-defined concept in this kind of statistical environment, where you expect deaths to fluctuate wildly due to unpredictable social trends. So I did something much simpler. I built an "actual versus counterfactual" comparison. The actual lines are the same as the all-cause mortality figures above; the counterfactuals are what we would have seen if everything else had stayed the same. In other words, what does it look like if non-drug deaths were locked in at their 1999 values and the only driver of overall mortality were changes in drug mortality? Red is the true trendline, blue is the counterfactual I have just described. 
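The counterfactual construction is simple enough to sketch in a few lines. The per-100k rates below are invented placeholders for the real series:

```python
# "Actual vs. counterfactual": hold non-drug mortality at its 1999 value and
# let only drug mortality vary. All rates are hypothetical per-100k values.
years = [1999, 2005, 2010]
all_cause = {1999: 250.0, 2005: 245.0, 2010: 240.0}  # actual (red line)
drug      = {1999: 10.0,  2005: 20.0,  2010: 30.0}   # drug-related component

non_drug_1999 = all_cause[1999] - drug[1999]         # locked at 1999 level

counterfactual = {y: non_drug_1999 + drug[y] for y in years}  # blue line
print(counterfactual)  # {1999: 250.0, 2005: 260.0, 2010: 270.0}
```

In this toy version the counterfactual rises while the actual falls, which is the visual signature of "something else is offsetting the drug deaths."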


There are a few lessons here. First of all, even if you're 100% credulous of the overdose numbers collected by the CDC (and sometimes broadcast by the merchants of moral panic in the media), your dire storytelling should be tempered by good news: "Of course, despite this horrible epidemic of addiction, overall mortality is flat or declining for most demographics." Is it so hard to inform your readers and viewers of the bottom-line impact on overall mortality? The second lesson: you shouldn't be 100% credulous of the CDC's numbers. You should concede that there is something to the notion that deaths are being systematically misattributed to drugs. The only demographics for which the opioid epidemic narrative fits for the 1999-2010 period are folks in their mid-20s up to about age 32. For all other ages, it is clear that something other than drug poisonings is driving all-cause mortality trends. A third lesson: disaggregate your data. There's a deeper story than what the overall population statistics are telling you.  

Next we'll take a look at very broad causes of death. Internal causes of death are things like cancer and various kinds of organ failure. "Natural causes." External causes of death are things like automobile accidents, drowning, suicide, homicide, and drug overdoses. Obviously this is a more general category than "drug-related mortality," but trends in overdoses should be reflected in the trendlines for external causes. I was looking for evidence of misattribution. If the rise in reported overdose deaths is due to deaths from other causes being falsely attributed to drug poisonings, we would expect to see some cause of death that is falling while drug overdoses are rising. That is to say, we should expect that drug poisonings are "stealing" from some other category. I was looking for a set of trends that were mirror images of each other. Since it is abundantly clear that the increase in overdoses in the 2010-present period is quite real, I will fixate on the 1999-2010 period, when prescription opioids dominated the drug mortality trends. 

[Note that you can query the CDC's Wonder database for more or less granular causes of death. In this case I'm getting stats by the broadest categories: external causes, heart-related, lung-related, digestive system-related, cancer, and infectious disease. Then I'm grouping these into two very broad categories: internal and external causes of death.]

Below I have plotted the change in mortality by age between years 1999 and 2010 for two broad categories, internal and external causes of death. (Above zero means mortality increased over the time period, below zero means a decrease.) I am not seeing the mirroring that I was looking for. So there is no tidy story that, for example, "heart attacks are being systematically mislabeled as drug overdoses." At least not according to this view. 
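One way to make the "mirroring" check concrete: if deaths were merely being relabeled, the 1999-to-2010 change in external mortality and the change in internal mortality should roughly sum to zero at each age. Every per-100k change below is invented for illustration:

```python
# Misattribution-style mirroring test: at each age, do the external and
# internal changes (1999 -> 2010, per 100k) approximately cancel out?
# All numbers here are hypothetical.
external_delta = {25: +8.0, 35: +5.0, 45: +6.0}
internal_delta = {25: -1.0, 35: -4.0, 45: -20.0}

results = {}
for age in external_delta:
    net = external_delta[age] + internal_delta[age]
    results[age] = abs(net) < 1.0   # arbitrary tolerance for "offsetting"
print(results)  # in this toy data, no age shows clean offsetting
```

Matching signs with mismatched magnitudes (like age 45 here) is what the actual charts show: no tidy relabeling story.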


Maybe a simple comparison between two end-points is glossing over a trend? Maybe we'll see something different if we look at the full range of years? To check for this, I plotted the internal and external mortality for the full 1999-2010 period, for each of several ages. 


Again, I'm not seeing any "mirroring" in which one trend is rising at the same rate as the other is falling. (For ages 45-55, it's true that the external trend is rising while the internal trend is falling, but not with matching slopes.) For ages 25-35 it looks like external causes of mortality are rising, which is in line with the opioid epidemic story. But this looks a little bit weird. The external cause trendline is flat for 40-year-olds? For 20-year-olds external causes are actually declining a little, even though (see above) drug-related deaths are apparently increasing? Are other external causes of death falling for them while drug overdoses are rising? 

What I have not done yet is look at a "drug-related" versus "external but non-drug-related" trend comparison. If those trends show the mirroring I was looking for, I think that will be a point scored in favor of Sam Peltzman's concept of risk compensation. That is, when people have opportunities to engage in risky behavior, they don't simply pile risk on top of risk on top of risk. They have an overall appetite for total risk. So as they engage in more risky behaviors in one domain, they hit the brakes in others. That's the hypothesis, at least on average and at the population level. (Of course I wouldn't claim that every single individual is such a Mr. Spock with perfect rationality and actuarially sound calibration of risks. But it makes sense that drug users might decrease their use of some drugs as they increase use of others, or avoid hazardous environments when they know they are going to be inebriated.) 

Something that struck me here was the degree to which external causes of death dominate at young ages while internal causes of death dominate at older ages. You can see from these charts how external causes dominate up to just after age 30, but internal causes are dominant by age 35. I knew that there was a very sharp gradient in all-cause mortality by age, but I hadn't before appreciated this break-out by cause of death category. 

Whatever you think about the quality of our nation's vital statistics, there is some good news here that doesn't depend on accurate labeling or reporting or cause-of-death attribution. All-cause mortality is falling for most demographics, probably driven by a combination of improvements in medical technology and (more speculatively) changes in lifestyle. Had you realized that mortality improved so much over the past 20 years? I suppose that's a silver lining, that even when something terrible is happening (like a raging heroin/fentanyl epidemic that is unambiguously killing tens of thousands of people), favorable trends can more than offset it. All this is prior to Covid, of course. Including 2020 in this picture would pretty drastically change the rightmost end point. That aside, we've endured about ten or fifteen years of panic porn about the opioid epidemic with almost nobody pausing to point out that overall trends were pretty favorable. 

_________________________

*I'm assuming you compute some kind of Poisson frequency based on the number of deaths, then use that to compute a confidence interval, then look at a different period to see if the number of deaths observed is outside that interval. How many deaths are "excess"? Observed minus the top of the confidence interval? Observed minus the expected number? Is "expected deaths," the midpoint of the distribution, even defined if you know that it jackknifes over a ten-year period? 
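For what it's worth, the recipe in this footnote can be sketched with a normal approximation to the Poisson interval (reasonable when expected counts are large). Both counts below are invented, and both "excess" definitions from the footnote are computed side by side:

```python
from math import sqrt

# Rough excess-death sketch under a Poisson model: for large expected counts,
# the count is approximately Normal(expected, sqrt(expected)), and z = 1.96
# gives a ~95% interval. "expected" would come from some baseline period.
def excess_deaths(observed, expected, z=1.96):
    upper = expected + z * sqrt(expected)          # top of the ~95% interval
    return {
        "vs_expected": observed - expected,        # observed minus midpoint
        "vs_upper": max(0.0, observed - upper),    # observed minus CI top
    }

print(excess_deaths(observed=1200, expected=1000))
```

The two definitions disagree by the width of half the interval, which is one reason "how many deaths are excess?" has no single answer.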

Saturday, October 9, 2021

"There is no evidence..."

I'm growing tired of arguments and pronouncements that don the mantle of science but then proceed to make the most embarrassingly false claims. I often see it said that there is "no evidence" or "zero evidence" for some claim. This can't be literally true. 

Anything that can potentially shift your priors is evidence, even if it doesn't shift them by much. If I tell you I saw a leprechaun, that's "evidence" that there was a leprechaun, even if it's not very persuasive. There are very strong priors against there being leprechauns, given the laws of physics and biology. Suppose I claim to have seen one, and you've known me for a while. Maybe I've shown tendencies in the past to be overly credulous, or perhaps I'm a known bullshitter. Still, my isolated claim is a type of evidence. If I told you some other claim, say about my kids or about what was on TV the other day, you would believe me. So my word is evidence. Any claim I make has some degree of intrinsic plausibility, even if there is an overwhelming mountain of counter-evidence. My point here is that there are weak forms of evidence that still count. Observational studies aren't as good as randomized controlled trials, but they are still evidence. Logical and theoretical arguments aren't physical proof, but they are still evidence. The insights you gain from introspection aren't physical or objective, but they are still evidence. Without someone at least temporarily believing these weaker forms of evidence, we'd never get to the stage of generating new hypotheses and testing them. We'd never get around to generating the physical, empirical evidence that ultimately turns a hypothesis into a working theory. 

I'm hearing "no evidence" as shorthand for one of the following:

"There are some plausible theoretical arguments for the claim, but no direct factual evidence."

"There are theoretical arguments for the claim, but there isn't yet any direct factual evidence because this specific question hasn't been studied yet."

"There are strong theoretical arguments for the claim, but the empirical evidence is mixed."

"The claim in question has been studied thoroughly, and all of the factual evidence so far points to 'No.'" 

"There is evidence for the claim in question and there is also countervailing evidence against it. All things considered, I have adjudicated against the claim."

These are all fine responses when someone makes an implausible or disputed claim, but let's just be more honest about which thing we're saying. I'm seeing still-open scientific questions getting short-circuited. 

See this example linked to by a recent Astral Codex Ten links roundup:


The American Academy of Pediatrics states that there are "no studies" to support the concern about young children being unable to learn facial cues due to widespread masking. Mason's response is brilliant, basically saying that a hypothetical study designed to determine the effect would be so obviously unethical that no Institutional Review Board would approve it. (I don't know Mason's politics, but this is a pretty basic libertarian point that I have often made. Policy interventions, which we'd all agree would be unethical if we ran them as experiments, are done all the fucking time, on the entire population, without anyone bothering to gather information to determine safety or efficacy. Mandatory face masking is one such example.) The tweet by the AAP is extremely dishonest if it's implying that this particular question has been studied and evidence for it is lacking. Suppose the tweet instead said, "This question hasn't been studied, but we don't think it's plausible for the following reasons..." Maybe that wouldn't have been as punchy, but it would be far less misleading and much fairer to people concerned about this issue. 

Pretend for a moment that you're a member of an IRB and someone proposes this intervention to study the effects of hiding facial cues from young children for extended periods of time. Really imagine yourself in this person's shoes. Are you filled with unease that you might be harming some of the children in the proposed study? Perhaps not mere unease, but actual horror? Imagine green-lighting this and seeing the final paper. Picturing, say, Figure 3 on page 22 of the published paper showing that the treatment group acquired language less efficiently, would you feel bad that you allowed harm to come to some of these children? Would you feel dread that these effects might be permanent? Introspection is also a kind of evidence. Powerful evidence at that, and too often dismissed. There is some intrinsic plausibility to the notion that hiding facial cues from young children could harm their development, whether or not some particular question has actually been studied. I'm not saying that anyone who prefers to wear a mask should stop because of this. (Imagine yourself being on the IRB for another hypothetical study where people were discouraged from wearing masks during a pandemic. Isn't the world just full of trade-offs and uncertainty?) But we shouldn't be so dismissive when someone points to a plausible cost of a new behavior. Certainly we shouldn't dismiss it in the language of scientific certainty. This should get at least some decision-weight when considering policies (public and private) such as the masking of children or advisements to mask at home around young children. 

Here is a Cafe Hayek post from a few years ago, in which Don Boudreaux takes Paul Krugman to task for making a "no evidence" argument. Krugman absurdly claims:

There’s just no evidence that raising the minimum wage costs jobs, at least when the starting point is as low as it is in modern America.

This is nonsense. There are many studies that find substantial disemployment effects from minimum wages. (The very best evidence for a strong disemployment effect comes out of the Seattle studies by Jardim et al., though those weren't out at the time Krugman wrote his piece.) What Krugman is actually saying is that he's done the hard work of weighing the evidence for us and reached a conclusion based on the preponderance of that evidence. Even assuming Krugman had done this literature review (by no means a certainty), he would not then be entitled to say that the contrary evidence doesn't exist, only that he personally finds it unconvincing, and he should say why.