Sunday, October 15, 2017

The Regulatory State in Practice: The Insurance Industry

People would have far less faith in the regulatory state if they saw how it works day-to-day. In this post I'll share some of my experiences with the regime I am familiar with: personal lines insurance regulation.

Sometimes I'll give my standard libertarian argument for limited government, and somebody will make a knee-jerk, unserious comment about how "Of course, we need some regulation. Otherwise unfettered greed will rule." I don't think so. Whether regulation in practice fetters greed or exacerbates it is really an empirical question. It depends on how good your institutions are, how observant and diligent the voting public is about disciplining the regulatory state, whether it's possible to align the incentives of the regulators with the interests of the public, the relative costs of free versus regulated markets, and lots of other things. I think in almost all cases the best regulation is market discipline without any government augmentation. But in this post I want to narrowly focus on the regulation of personal lines insurance and suggest that maybe some of these lessons generalize.

I am an actuary. Part of my job is to defend my employer's rate filings to regulators, who are always looking for reasons to reject them. First, a little bit about how this works. Personal lines insurance (home/renters and auto policies) is regulated at the state level by each of the 50 states, rather than at the federal level. Each state has a Department of Insurance, or "DOI". (A mean and immature joke is to pronounce that acronym out loud.)

Each insurance company has a rate structure that is explicitly written down such that any two people who are identical on paper get exactly the same price. Prices can vary by rating territory (usually groupings of zip codes and/or counties), age, gender, marital status, credit history (surprisingly predictive of auto and home-related accidents!), and prior claim history. But the insurer has to specify exactly how this works in a rate filing, and has to use exactly those rates until it makes another filing amending that structure. Typically it's something like: $500 base rate for the rating territory you live in, times a factor of 2.0 for your age, gender, and marital status, times a factor of 0.5 for your good credit history, times 2.0 for having multiple prior accidents, so your rate is $500 * 2.0 * 0.5 * 2.0 = $1,000. (This is just an example; it would be an extremely simple rating structure that no insurer could actually get away with in today's marketplace.) I can't just say to this customer, "Been shopping around, eh? Can't find anyone else who will write you a policy at under $2,000, huh? That'll be...$2,000!" I have to charge this customer exactly what my rating algorithm calculates or I am in violation of state law. Any insurer found deviating from its filed rates would be severely fined.

There might be some ambiguities about what rate to charge. Maybe the Postal Service redefines zip codes mid-year, and the customer's zip code doesn't map to any rating territory, so I have to place them in the most reasonable one. Or maybe their marital or credit status changes and my rate plan failed to specify how quickly I will reclassify them, such that a divorced person gets the "married" rate or a person with improving credit is temporarily being dinged for their poor past credit (or someone with deteriorating credit is temporarily benefiting from their good past credit, which is far more likely in my experience). But these ambiguities are a small part of the game. For the most part, the rate is spelled out clearly and unambiguously.
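To make the example concrete, here is a minimal sketch of what a filed multiplicative rating algorithm amounts to in code. Every table, factor, and name here is hypothetical, invented to mirror the toy example above; no real filing looks this simple.

```python
# Hypothetical filed rating tables; all values invented for illustration.
BASE_RATE = {"territory_17": 500.0}              # base rate by territory
CLASS_FACTOR = {("25-29", "M", "single"): 2.0}   # age/gender/marital class
CREDIT_FACTOR = {"good": 0.5}
PRIOR_ACCIDENT_FACTOR = {"2+": 2.0}

def filed_rate(territory, rating_class, credit, priors):
    """Return the one legal premium for this risk under the filed plan."""
    return (BASE_RATE[territory]
            * CLASS_FACTOR[rating_class]
            * CREDIT_FACTOR[credit]
            * PRIOR_ACCIDENT_FACTOR[priors])

print(filed_rate("territory_17", ("25-29", "M", "single"), "good", "2+"))
# -> 1000.0, and by law I must charge exactly this amount
```

The filing is the algorithm: the premium is a deterministic function of the customer's characteristics, and charging anything other than its output is a violation of state law.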

Typically, a company does a rate filing for every state at least once a year. This means an actuary has to write up a long report full of data (claims paid, premiums received, expenses incurred, investment income received, rate differentials by territory or classification) and submit it to the state DOI. Then a regulator at the DOI looks it over and either 1) approves it or 2) sends the insurer an "objection letter" stating the many things they don't like about the filing.

There is a huge difference between state DOIs. Some are extremely lenient and will rubber-stamp almost any filing, as long as it's reasonable. Unless you're increasing rates by 100%, or implementing an explicitly racist class plan, these states will approve your filing very quickly. Bless them. (Typical overall rate changes are in the 5-10% range, usually just keeping up with inflation. And rating based on race is explicitly against the law in every state, and probably a violation of federal law, too.) Other states are extremely picky. Sometimes they are nettlesome for no particular reason. Sometimes the regulator does not have any statutory authority for their objection; they are simply objecting to something they don't like. Most states have some catch-all statute regarding insurance regulation that reiterates the definition of an actuarially sound rate: A rate is reasonable and not excessive, inadequate, or unfairly discriminatory if it is an actuarially sound estimate of the expected value of all future costs associated with an individual risk transfer. Emphasis mine, on "unfairly." Do you see a problem with this? "Unfair" is extremely subjective. A statute reiterating this principle basically gives the regulator carte blanche to object to anything they don't like. One state (a very, very northern state) will cite statutes in their objections, but when we look them up we always find that they refer to this boilerplate language about actuarially sound rate-making. It's almost never a reference to a law explicitly banning something in our proposed rating structure.

If regulatory overreach is one annoying problem, regulator incompetence is another. Often the regulator is not an actuary or is not otherwise technically savvy. Or sometimes they are actuaries who lack the specific technical expertise to understand the rate filing. Insurers are increasingly sophisticated in their pricing and are using increasingly complex methods for segmenting risks. The industry standard for a long time has been the generalized linear model, or "glm". If you've ever found the "line of best fit" through a bunch of points in a math class, the glm is just a more sophisticated version of this, with an arbitrary number of dimensions (not just the two) and with different kinds of penalties for "missing" points on the scatter-plot. A glm is not all that complicated. Using one gives you a rating plan with multiplicative factors, as in my example above. The model tells you: "Multiply by 1.05 for male, 1.2 for unmarried, 0.90 for fair credit history, 1.00 for no prior incidents..." Simple as this is, I would say that most regulators are pretty clueless even when it comes to glms. But insurers are increasingly using things far more complex than these. Gradient boosted models (gbms) are ensembles of hundreds or thousands of decision trees, each tree correcting the errors of the ones before it. Neural nets are extremely complex systems of variable weights and activation thresholds. Increasingly, these even more complicated models are being used to design (or at least to inform) our rating plans, and yet many regulators are still perplexed by the relatively simple glms.
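For the curious, here is a minimal sketch of that workflow, with made-up data and variable names; this is my own toy illustration in Python/statsmodels, not any insurer's actual model. Fit a Poisson glm with a log link, and the exponentiated coefficients are the multiplicative rating factors:

```python
# Toy glm fit: simulate claim counts, then recover multiplicative factors.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50_000
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "unmarried": rng.integers(0, 2, n),
    "exposure": np.ones(n),  # one car-year per record
})
# True underlying relativities: 1.05 for male, 1.20 for unmarried.
df["claims"] = rng.poisson(0.05 * 1.05 ** df["male"] * 1.20 ** df["unmarried"])

# Poisson glm with a log link (the default): coefficients are additive on
# the log scale, so exponentiating them yields multiplicative factors.
fit = smf.glm("claims ~ male + unmarried", data=df,
              family=sm.families.Poisson(),
              offset=np.log(df["exposure"])).fit()
print(np.exp(fit.params))  # ~1.05 for male, ~1.20 for unmarried
```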

We try to do our best when it comes to justifying our glm results, but honestly most regulators wouldn't know what they were looking at if we gave them a filing exhibit spelling out everything in perfect detail. Sometimes they ask telling questions that betray their lack of understanding. I literally had an objection letter once that asked, "What is 'multivariate analysis'?" Perhaps it's a non-standard term, but anyone even remotely familiar with recent trends in the industry would know this is a reference to glms and related methods. It is in contrast to "univariate analysis", in which the mean for each group is calculated and the relative averages are used to set the rate differentials. For example, "Males cost 1.2 times as much as females to insure, so apply a 1.2 factor to males and a 1.0 factor to females." The "univariate" approach is wrong, because males could have other risk factors driving the difference. Maybe the average male customer for our company is younger, has worse credit, etc. A glm automatically accounts for these correlations between different rating variables. That is why we use them. None of this is terribly obscure, either. The reasons for using glms are described in detail and the methodology is fully fleshed out in several of the actuarial exams (grueling industry exams that people in my tribe have to take to earn our designation). Another typical question is something like, "How do you avoid double-counting if two rating variables overlap?" or "How do you adjust for correlations between rating variables?" The answer is that I don't have to, because I'm using a glm. A colleague once asked me how I answered such questions, and I said something like the previous sentence. We busted up laughing, because my blunt answer (which I would never really give to a DOI) points out how thoroughly the questioner is missing the point.
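To see why the univariate approach goes wrong, here is a toy simulation, entirely my own construction with invented numbers: gender carries no real signal, but males in this imaginary book of business skew young, and young drivers genuinely cost more. The univariate relativity picks up the age effect; the glm does not.

```python
# Univariate vs. multivariate analysis: confounding in action.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 100_000
male = rng.integers(0, 2, n)
# Males in this book skew young: 60% young vs. 30% for females.
young = (rng.random(n) < np.where(male == 1, 0.6, 0.3)).astype(int)
df = pd.DataFrame({"male": male, "young": young})
# True model: young drivers cost 2x; the true gender factor is exactly 1.0.
df["claims"] = rng.poisson(0.05 * 2.0 ** df["young"])

# Univariate relativity: ratio of raw group means, confounded by age.
univariate = (df.loc[df.male == 1, "claims"].mean()
              / df.loc[df.male == 0, "claims"].mean())
print(f"univariate male relativity: {univariate:.2f}")  # ~1.2, spurious

# The glm controls for age automatically and recovers the true factors.
fit = smf.glm("claims ~ male + young", data=df,
              family=sm.families.Poisson()).fit()
print(np.exp(fit.params))  # male ~1.0, young ~2.0
```

That is the entire answer to "How do you adjust for correlations between rating variables?": the model does it by construction.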

Another typical question is something like "Please provide the data used in this analysis." Once again, this betrays a complete lack of understanding. The underlying data in a glm is a gigantic table containing millions of records, probably in the tens-of-gigabytes range for a decent-sized insurance company. The regulator doesn't actually want this, probably doesn't have the technological capacity to even accept a file transfer of this size, and almost certainly could not perform an independent analysis if we sent it to them. At any rate, it would completely compromise our competitive position and (more importantly) our policyholders' privacy and security if we were to send around such a comprehensive database of our customers and their claims payments. (DOIs aren't always so diligent about security. I have seen pages from competitor filings marked with big red letters saying "CONFIDENTIAL", as in "The insurer marked this as confidential but the state DOI did not honor their wishes. They just published it with everything else, because they couldn't be bothered to separate out the 'public' from the 'confidential' files.") My best guess is that the person asking for "supporting data" is still in the univariate frame of mind. They think they are asking for a few summarized tables showing, say, claim payments by gender (or age or credit), the number of policies in each category (termed "exposures" in the industry), and a loss relativity, thus supporting the rating factor for each variable. Unfortunately there is no way to fairly "summarize" the data underlying a glm. The entire database goes in, and the rating factors come out. It's a sophisticated calculation that requires all the data at once.

Sometimes there are "filing forms", which are lists of questions that we have to answer in our filing and which are the same each time we file. At least the DOI is telling us ahead of time what it wants, rather than asking for several rounds of clarification after the fact. In theory, this can be a time-saver, allowing us to preempt questions and get the filing approved more quickly. In practice, these are a waste of time and can open up the insurer to further rounds of questioning because the DOI doesn't understand the answer to the question it asks. ("Give me a statistics lecture! Mmm hmm. Mmm hmm. And what is this 'multivariate analysis' you speak of?") These filing forms frequently betray a lack of understanding. One that I helped fill out recently asks about a "test for homoscedasticity." Homoscedasticity means that the points are evenly distributed around the best-fit line; they aren't closer to the best-fit line for small values and further from it for large values (or vice versa). The question betrays ignorance about glms, because in a glm you explicitly relax this assumption. A traditional linear model insists on normally distributed residuals with a constant variance; a glm allows one to choose a gamma or Poisson or some other kind of error structure, which allows the variance to be a function of the mean (the y-value of the best-fit line). If that's all very confusing, don't worry about it. What's happening here (I think) is that someone copied and pasted a few lines of text from a linear modeling textbook without understanding what they were copying. Many filing forms ask about the R-squared or adjusted R-squared, and ask if the residuals are normally distributed (essentially reiterating the "homoscedasticity" question without realizing they've asked the same thing twice!). Once again, they are failing to understand the very basics of a glm, a standard insurance industry tool. These questions apply to traditional linear modeling, not to the glm world.
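Here is the distinction in miniature, using statsmodels' built-in family objects; this is a generic illustration of standard glm theory, not anything from a filing. Each family pins down how the variance may grow with the fitted mean, which is exactly the "heteroscedasticity" that a classical linear model forbids:

```python
# Variance functions for common glm families, evaluated at a fitted mean.
import statsmodels.api as sm

mu = 100.0  # a fitted mean, e.g., an expected claim count or cost
families = {
    "Gaussian (classical regression)": sm.families.Gaussian(),  # Var constant
    "Poisson": sm.families.Poisson(),                           # Var = mu
    "Gamma": sm.families.Gamma(),                               # Var ~ mu**2
}
for name, fam in families.items():
    print(f"{name:32s} variance at mu={mu:g}: {float(fam.variance(mu)):g}")
# Gaussian -> 1 (constant), Poisson -> 100, Gamma -> 10000
```

Asking a gamma glm for a "test for homoscedasticity" is asking it to certify an assumption it was specifically chosen to drop.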

Don't mistake me as saying that regulators should develop a sophisticated understanding of these models so they can really grill insurers about how they are being used. Some moderately sophisticated regulators do ask reasonable questions about methodology. ("Did you control for geography? Did you offset with your limit and deductible factors?") The problem here is that there are a thousand "right" ways to do something. One modeler might think it's absolutely necessary to "offset" your model with your coverage limit factors (which are more appropriately calculated outside of the glm; these are the 50/100/25 or 100/300/50 liability limits, in thousands of dollars, that you see on the insurance policy in your glove box). Another might think it's okay not to offset, so long as you have the various limits in your model as a control variable. Another might think it's okay not to even bother with this control variable, because every time she's ever done this in the past, she got the same factors with and without controlling for limit. It would be a mistake for a regulator to assemble a list of "best practices" from the actuarial literature and start grilling every insurance company about whether they're complying with those standards or not. (And "Why not!?") I've talked to very senior glm builders, gurus for the profession, who have very different ways of building these models. It's a mistake to think there's a "right" way of doing things. It would be wrong to waste time and resources demanding that a company show the results if the model were built some other way. At best, the technically competent regulator should see their role as a guiding hand, perhaps gently suggesting that an unsophisticated insurer might get a better result if they built their model some other way. But they shouldn't be grandstanding on their checklist of best practices and holding up someone's rate filing.
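For the curious, here is roughly what the "offset" option looks like in practice, again with invented data; consider it a sketch of one of the several defensible approaches, not the right one:

```python
# Offsetting a glm with filed limit factors so it doesn't re-estimate them.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 20_000
df = pd.DataFrame({
    "young": rng.integers(0, 2, n),
    "limit_factor": rng.choice([1.00, 1.15, 1.30], n),  # filed outside the glm
})
df["claims"] = rng.poisson(0.05 * 1.5 ** df["young"] * df["limit_factor"])

# The offset enters with its coefficient fixed at 1, so the glm estimates
# the remaining factors around the limit relativities instead of re-fitting them.
fit = smf.glm("claims ~ young", data=df, family=sm.families.Poisson(),
              offset=np.log(df["limit_factor"])).fit()
print(np.exp(fit.params))  # young ~1.5; the limit effect sits in the offset
```

A colleague could just as defensibly include limit as a control variable, or drop it entirely after showing it doesn't move the other factors. These are modeling judgment calls, not compliance items.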

Regulators vary in their level of rudeness. Some are extremely boorish. I guess they figure you aren't really a "customer." You have to deal with them and accede to their demands. I guess they figure that if courtesy takes any effort at all, it's not worth it. Fortunately, most of these people turn back into human beings once you get them on the phone and they have to talk to you. (Most.) But even a "polite" regulator is often asking for lots of unnecessary busy-work. This person wields the power of the state and can use it to hold up your filing. The resulting busy-work can mean hundreds of man-hours of labor and tens or hundreds of thousands of dollars in lost revenue due to unnecessary delays.

Sometimes incentives are poorly aligned. Many states use outside consulting agencies to review all rate filings. Many of these agencies are paid by the hour, or are rewarded for each "infraction" they find. So they have an incentive to create busy-work to generate billable hours and to find "infractions" no matter how trivial. A company I worked for once got fined after a "market conduct exam" because our rating manual said we would surcharge customers who paid late, but we never did surcharge them. I think it was just a matter of us wanting to have something to threaten late-paying customers with, but not actually wanting to annoy them every time they paid late. So we never put in place the process to actually surcharge them, or we had a process but never pulled the trigger on it. It's the kind of reasonable latitude that companies grant their customers all the time, but these regulators saw an opportunity to fine us and they pounced.

Every state has an insurance commissioner, who generally oversees the state's DOI. Some are elected and some are appointed. Elected commissioners might face different political incentives than appointed ones. Appointed commissioners are usually older insurance professionals who have some interest in public service. They might be more technically savvy. They typically understand that prices have to go up to keep up with inflation, that price differentiation is necessary to a functioning insurance market, that locking in low rates will make insurance less available, etc. These people may understand things about the realities of insurance pricing that the voting public doesn't. Elected commissioners, on the other hand, might campaign explicitly on a platform of "I will not approve any rate increases." A populist tailwind may allow these commissioners to behave incredibly irresponsibly and compromise the insurance market in their state. They end up not approving reasonable rate increases, or placing unreasonable caps on rate increases, or holding up rate filings for months before finally relenting when things aren't going well.

With all this regulation, what benefit does the insurance customer actually see? Surely they get a rate that's, say, 10% lower, right? No. That would be an absolutely intolerable rate inadequacy, and no insurer would stay in that market for long. Insurance premiums are actually higher because of regulation. We have to hire teams of people to stay informed and up-to-date on regulations and various law changes. We occasionally have to physically fly representatives to rate hearings in other states. We have staff dedicated to preempting and responding to regulatory actions. All of this is ultimately paid for by the insurance customer. There is no one else to pay it!

The regulatory lag I mentioned above may not actually cost the insurer any revenue. More likely, the insurer assumes this lag in its business process. It either starts the rate filing process earlier, or it takes a slightly higher rate increase to account for the lag. (If my rate filing will take three months of regulatory approval time, for example, I will build three months' worth of inflation into the calculation indicating how much rate to take.)

There is also labor on the regulator's side. Someone has to pay for the staff of the state's department of insurance, to keep the lights on and to keep the building heated and cooled. This may be paid for with insurance taxes, or it may come from general state revenue. Either way it comes out of the pockets of insurance customers. And what do they get for all this? At best, maybe some customers get a 10% lower bill, but at the cost of other customers paying 10% more. Regulation doesn't result in overall lower insurance costs. It just means that some customers pay slightly more and others slightly less. If a state DOI managed to truly hold down overall prices in their state, insurers would start to exit that state's insurance market.
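For concreteness, here is a rough sketch of that lag adjustment, with made-up numbers and simplified trend logic:

```python
# Building regulatory lag into an indicated rate change: if approval adds
# three months, trend the indication three extra months forward.
annual_trend = 0.05   # assumed annual loss inflation (invented)
indicated = 0.04      # rate need before considering lag (invented)
lag_years = 3 / 12    # expected regulatory delay

with_lag = (1 + indicated) * (1 + annual_trend) ** lag_years - 1
print(f"indicated +{indicated:.1%} becomes +{with_lag:.1%} after lag")
# -> indicated +4.0% becomes +5.3% after lag
```

Either way the customer pays for the delay; it just shows up in the rate rather than in the insurer's margins.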

For an example of insurers exiting the market completely, see the Florida market for homeowner's insurance. Most of the cost of Florida homeowners insurance is due to infrequent but catastrophic hurricanes and other tropical storms. Historical losses will not be truly indicative of future expected losses, so insurers need to use simulations to estimate their actual exposure to hurricane risk. Computer simulations of thousands of storms are run, and the resulting damage to existing homes is estimated based on these simulated storms. The Florida Office of Insurance Regulation is extremely picky about what kind of hurricane model you can use. The regulation of these models is so onerous as to be punitive. Florida's regulation of hurricane models is an example of regulators being relatively sophisticated but still not adding any value to the insurance market. (Well, adding negative value, in that they've driven insurers out of the state.)

I try to view this all charitably. Maybe even though every action taken by regulators looks like a waste of time and resources, market discipline would totally collapse without them? The marginal action of a regulator looks silly, but maybe the overall effect of regulation is a positive one? It could be, but I find this hard to swallow. There is fierce competition in the market for personal lines insurance. You can get dozens, even hundreds, of quotes if you only have the time to shop around. There are thousands of insurers. It is a very thick marketplace. Some insurers will advertise their financial strength, others will give you a lower price because they lack the reputation of major industry players. Some will sell based on strong "customer service", while others will have no-frills service with correspondingly low expenses and lower premiums. Some will never deny a reasonable claim (thus costing more), and some will fight every marginal claim and even some reasonable ones (thus costing less). I don't think regulation has much of a role to play in such a thick market. Customers know they are taking a chance when they buy from a no-name insurance company with cheap premiums. They also know they can find a better price if they shop around a little. Most customers don't bother. They may complain about their insurance rate going up, but they can't be bothered with the minor annoyance of getting quotes from a few competitors. Oh, some certainly do. And insurers are paranoid about policyholder attrition. Insurers are often trigger-shy about taking the rate increases they need, because even a necessary rate increase would threaten customer retention. They implicitly feel the discipline of the market when deciding how to set the price. They pore over competitor rates, customer retention statistics, and new customer acquisition numbers. The regulator adds no value to this process.

I don't think any of this is necessarily unique to insurance. I would imagine other industries have similar problems regarding regulatory incompetence and regulatory overreach (or perhaps forbearance). Fundamentally, government just doesn't have much to offer us in terms of market regulation.

Friday, October 6, 2017

Estimates of the Uninsured: Worse than Useless

Every time there is any movement to change health policy at the federal level, I hear estimates that “X million people will lose their insurance under the Republican plan” or that “Y million people gained insurance under Obamacare.” I think these are useless statistics. It’s not like being uninsured implies zero access to health care. People with no coverage and no assets get tons of free treatment all the time. If you’re homeless with no health insurance policy and no money but you go to the ER suffering a heart attack, you will get an angioplasty for free. Conversely, people in other developed nations with “universal healthcare” often have long waits to see a doctor. Often they want a treatment but are told “no.”  Also, as I’ve pointed out before, coverage status just doesn’t appear to correlate well with actual health outcomes. It’s not like those millions of people who got coverage under Obamacare suddenly got healthier. (Are there any empirical estimates of the effects of the ACA showing large, positive, unambiguous health effects? If so, please share.) Likewise it’s not likely that they’ll suddenly get sicker once they lose their so-called coverage. (Several examples of "uninsured" Americans consuming more healthcare than their Canadian neighbors here. If you know of a more systematic comparison of this type, please share.)

I’d like to see something more meaningful than a count (really an estimate) of how many Americans “gain” or “lose” coverage under some health policy proposal. I’d rather see an estimate of wait-times, perhaps broken down by covered versus not-covered. Or an estimate of the likelihood that someone will be treated, or receive some particular treatment. “X million Americans will see their wait-times for an office visit drop by Z-percent.” Or “X million Americans will get Y-percent more MRIs and Z-percent more mammograms.” Ideally this could be turned into a mortality rate estimate, and the estimate could be measured against the actual observed mortality change after the policy passes. The effect of health policy on health outcomes is, after all, an empirical question. We should ultimately have some objective means of deciding whether the policy succeeded or not.

I’m a bit tired of hearing claims that some Republican tweak to the ACA is going to plunge millions of Americans into Dickensian poverty and illness. Not that I’m defending the Republicans or any particular proposal they’ve put forth. (If I were to put forth my own proposal, it would be far more radical and go a lot further than anything the GOP has proposed.) Rather I just don’t think that health policy has that strong an effect on actual health outcomes. 

Wednesday, October 4, 2017

A Simple Value-Neutral Model of Rising Income Inequality

Suppose that the range of options has expanded in both directions. There are more ways to make a lot of money, and there are more ways to live comfortably without earning much or without earning anything at all. Next, suppose that people vary in their preferences. Some prefer more income with less leisure, and some prefer more leisure with less income. Think about what happens to naively-measured “income inequality” in this world.

I’m nearly certain both conditions in the above paragraph are true. Incomes (conditional on working) have risen, and it’s easier and much more common these days to be a “live-in-your-parents’-basement-playing-video-games” man-child. I don’t think that the corporate lawyer and the under-employed man-child were cast into their roles by a cosmic roll of the dice. People choose their professions in large part based on their preference for the leisure/income trade-off.

If annual income is the metric on which we’re to measure “inequality” (and it’s a phenomenally bad one), then we should expect it to increase as the world gets richer and more prosperous. If we picked a more relevant measure of economic well-being (like consumption, while perhaps monetizing leisure to put it on the same level as other forms of consumption), we’d see that the world is much more equal. 
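Here is a toy simulation of the model, entirely my own construction with invented numbers: widen the menu of income/leisure bundles in both directions, let everyone pick the bundle matching their own taste for income versus leisure, and measured income inequality rises even though every single person is choosing freely from a strictly better menu.

```python
# Toy model: expanding the income/leisure menu raises measured inequality.
import numpy as np

rng = np.random.default_rng(42)
pref = rng.random(10_000)  # each person's taste for income over leisure

def chosen_income(menu_low, menu_high):
    # Each person picks the point on the income menu matching their taste.
    return menu_low + pref * (menu_high - menu_low)

def gini(x):
    # Standard Gini coefficient of a sample.
    x = np.sort(x)
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1).dot(x) / (n * x.sum())

narrow = chosen_income(30_000, 80_000)   # the old, narrow menu of options
wide = chosen_income(10_000, 300_000)    # options expand in both directions
print(f"Gini, narrow menu: {gini(narrow):.3f}")  # ~0.15
print(f"Gini, wide menu:   {gini(wide):.3f}")    # ~0.31
```

Nobody in this little world was made worse off; the menu just got bigger. But the naive income statistic says inequality roughly doubled.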

I don't have a ton of data to bring to bear on this simple model. I have read that when you measure the activity of unemployed men of prime working age, they are spending a lot of time playing video games (citation needed). Anecdotally, I know a lot of people who could have earned more but deliberately chose not to. They picked a b.s. (lower-case) major in college, or they picked a decent career path but weren't "gunner" enough about it, or they finished their undergraduate degree but decided at the last minute not to go on to law school. As the title says, my explanation is value-neutral. I'm not judging these people for not working harder and I'm not going to insist that they all made mistakes (though I suspect that some of them didn't act in their own long-term self-interest).

Now think for a moment who is likely to attribute their success mostly to chance versus mostly to effort. Think about who will be more apt to notice and remember obstacles to their success. Who is more likely to rationalize bad decisions? I'm guessing that lower-income, lower-status folks are more likely to perceive (imagine?) barriers to their success. 

Really Bad Arguments Against Repealing Drug Prohibition

This will not be a comprehensive argument in favor of drug legalization, just a list of really bad whoppers I have heard and my responses to them.

“There will be a huge surge in drug use.”

This is the most obvious objection, and it’s wrong for a number of reasons. In historical cases where the legal status of a drug has been changed, you just don’t see that large a demand response. In the United States most recreational drugs have been illegal for a very long time, so it's hard to say what demand was "before" and "after." But use rates have failed to respond to massive shifts in drug enforcement efforts. And use rates of particular drugs have fluctuated wildly despite there being no change in enforcement effort. In other words, neither the legal status nor the intensity of enforcement appears to affect usage rates by much. (The empirical evidence for this is fully fleshed out in Jeffrey Miron's Drug War Crimes and also in Matthew Robinson's Lies, Damned Lies, and Drug War Statistics. I'll stop there, because I don't want to list every book on my "drug policy" shelf.)

I think the people who say this are implicitly assuming that the only thing holding people back from drug use is the legal status of the drug, which is an absurd assumption once you say it out loud. The main things keeping people away from dangerous drugs are the inherent risks of addiction, social dysfunction, drug-related health problems, and overdose. People who are willing to endure these risks are not much affected by adding legal risks on top of them. The people who want to use these substances are already using them. It is absurd to think that people are undeterred by the pharmacological risks of drug use but then respond strongly to the legal risks of drug use. (Remove the words "pharmacological" and "legal" from that sentence to see the absurdity. To make drug prohibition sound like a good idea, someone has to actually square this circle.) There isn’t an enormous pent-up demand that will surge forth if the dam of drug prohibition bursts.

“Bad guys will just find something else to do.”

I first heard this one at a debate on drug legalization at my undergraduate university, and I’ve heard it a few times since. This is the kind of thing that people can only say if they have not incorporated any economics into their worldview. Proponents of drug legalization often argue that much of the violence in society is due to black market crime. (Again, see Drug War Crimes, which has an entire chapter devoted to this topic.) Drug dealers killing each other over territory, killing witnesses, killing or beating subordinates, drug users retaliating against a dealer who ripped them off, etc. There really is quite a lot of this kind of violence. It makes up a significant fraction of total murders and assaults. This becomes very clear if you look at countries like Mexico or Colombia, where the violence is palpable in everyday life. It exists in the United States, too, even if to a lesser degree.

When you make something illegal, you don’t actually stop people from producing and selling it. All you do is ensure that the most violent individuals will be in charge of production and distribution. Simply put, there are more bad guys in the world because drug prohibition has made it more lucrative to be a bad guy. The proponents of this argument are making some kind of daffy assumption that there is a fixed number of wrong-doers, regardless of the relative costs or rewards of being a wrong-doer. Most of these people are “law-and-order” types who love heavy criminal penalties, so it is truly stunning to hear them argue that the bad guys don’t actually respond to incentives.
To anyone who is committed to this viewpoint, we legalizers happily accept your surrender. If, by your own admission, bad guys will do bad regardless of the rewards or penalties they face, legalization is a no-brainer.

I suspect that this argument is simply an ad hoc attempt to deny one of the major benefits of drug legalization, given that it’s (usually) contrary to the speaker’s actual worldview. It’s the kind of argument you get when people try to “rack up bullet-points” rather than actually think about what they are saying.

“Drug prices won’t fall much, so you’ll still have all the economic crimes by drug users trying to finance their habit.”

I heard this one recently, and it’s new to me. It’s another ad hoc attempt to dismiss an argument in favor of drug legalization, but in fact someone who takes this position seriously is actually making an incredibly strong case for legalizing drugs. The whole purpose of drug prohibition is to make drugs so expensive (in monetary and other costs) that people stop using them. If the drug warriors are ready to admit failure on this front, once again I’d happily accept their surrender. I don’t understand how someone could still favor drug prohibition after insisting that prohibition has failed to achieve its one true objective. Nevertheless, I have heard this claim more than once, and from people who put drug "offenders" in prison. Legalizers like me sometimes make the argument that if drug prices are allowed to fall to their true market value, there will be far less property crime from addicts trying to finance a habit. These people can find real jobs and live lives with normal schedules, rather than constantly seeking their next fix and stealing or "hustling" to finance it. I view the "drugs won't get cheaper" argument as a pathetic attempt to deny this benefit.

In actual fact, drug prohibition has increased the market price of drugs. The black-market markup has been exaggerated by some writers; it’s not in the “factor of 100” range that you sometimes hear. In “The Effect of Drug Prohibition on Drug Prices: Evidence from the Markets for Cocaine and Heroin”, Jeffrey Miron concludes that the black market price of cocaine is 2-4 times the legal price and heroin is 6-19 times the legal price. Not exactly a “factor of 100” (an extreme claim that Jeffrey Miron is attempting to tone down) but still a significant financial relief for the severe addicts who spend most of their resources feeding an expensive habit.

“Drug laws are a good way to arrest real criminals when those crimes are hard to prove.”

This one is shocking to the conscience. It is pretty disturbing to hear law-and-order types suggest that drug laws allow an end-run around the Constitution, and that this is a feature rather than a bug. I’m sure they have a point. If you “know” someone is a criminal, it’s probably easier to pat them down and find a baggie of drugs than to actually discover evidence of a real crime. That being said, I’m always disturbed by the confidence that law enforcement types have in their own estimates of who is or isn’t guilty.

I dearly hope that proponents of this argument aren’t actually saying that we should make something arbitrarily illegal just so the police and prosecutors can arrest and imprison whoever they want to. I suspect this is just a throw-away, “Oh, by the way…” kind of argument. Perhaps it doesn’t, on its own, support the policy of drug prohibition, but is in some sense a mitigating factor to an otherwise bad policy. I don’t approve of this viewpoint at all. In fact, I think that too many resources are diverted from policing real crimes to policing drug crimes, and that’s part of the reason for social decay in some neighborhoods. If not for drug prohibition, there wouldn’t be so many missing young men spending time in prison, there wouldn’t be as many shattered families, and there wouldn’t be so much distrust of the police. Under those circumstances, maybe the communities could actually forge some kind of relationship with the police, and real crimes would actually get solved because of the resulting cooperation.

That's it for now. I hate to do these "fish-in-a-barrel" responses to really stupid things that I've heard. I like Scott Alexander's concept of steel-manning an argument, as in "making the argument under scrutiny as strong as possible, even if the person delivering it wasn't very articulate or reasonable." But I've heard these silly claims so I might as well respond to them and say why they're wrong. I plan to eventually do a long round-up post that unifies arguments in favor of drug legalization made in several earlier posts. 

Sunday, October 1, 2017

Welcome New Readers!

A recent post of mine got picked up by Scott Alexander in a link roundup. I was astonished to see the amount of traffic that came to my blog via that one link. I shudder to think what an entire Slate Star Codex post dedicated to the topic might have done. I rarely get comments, but I got a few on the post. And I could tell that people were skimming my older posts, and even commenting on a few. Lurkers are of course perfectly welcome, but I appreciate any feedback I can get. I want to welcome new readers I've picked up in the past week or so.

I'm sure curious readers have perused my previous posts. If you're reading this on a computer or tablet, you should see my most-read posts on the right-hand side. I have a large number of posts arguing against drug prohibition starting around February 2016. I have a couple of posts about what thoughtful comments do and don't look like, here and here. I have a few scattered posts about so-called "inequality", how health insurance should work, and "moral outrage" as a debating tactic (one that I am finding increasingly obnoxious).

A few things I noticed.

Most people don't read all that carefully. That post, which attempts to debunk the standard narrative of the opioid epidemic, had at least a dozen links to prior posts of mine containing supporting information. Fewer than 10% of readers clicked on any of those. I'd hope that a larger share of readers would think, "Huh, is that really true? Why does he think that's true? Oh, there's a link arguing that this is true." Of course, many of those links were to places other than my own blog, and maybe people were scrupulously checking the various government documents and other articles I linked to. I promise that I'm doing my best and will never deliberately bend the truth, but I also sincerely hope nobody ever simply takes my word for anything I claim on this blog.

Some of the comments I got were great. And some were terrible. I made a couple of edits on my post after reading those comments (some here and some at SSC). One was to correct an error (one that I thought was not material, even to the very narrow argument in that particular paragraph). One was to clarify something that was not an error. (I called meditation "basically a placebo treatment", which should not be construed to mean I think meditation isn't effective for pain management. Just that I have an expansive definition of "placebo." After all, imagine doing an experiment where one group gets "real" meditation and the other gets "placebo" meditation as the treatment...) One thing I didn't care for was how easily people will conclude that you're deliberately lying. One comment, if I'm reading it correctly, implied that I was "lying about" a statistic cited in the Vox paper. Another implied that one of my claims was "dishonest." Is this how the rationalist community points out mistakes, and even disagreements that can't really be called "mistakes"? Mostly not, but it was a little bit grating to get this treatment over immaterial details. To say that somebody is "lying" implies something about their motives, which usually the accuser doesn't know. Anyway, the good comments outweighed the bad ones, and even the bad ones forced me to think harder about my arguments. (Bad commenters sometimes improve your understanding in the same way as a small child who keeps asking "Why?" to each successive answer.)

There were some excellent comments at Slate Star Codex about how people are actually using opioids. Consider it a small, random sample, but it's still illuminating. Considering the examples given (a broken arm, skin scraped to the bone, oral surgery), I'm very glad these people got powerful painkillers. I really hope that Vox does not have the effect on health policy that it wants to have, which would probably deny a few of these acute pain sufferers the relief they seek.

Free Medicine Doesn't Make People Healthier

This is from Free For All? Lessons from the RAND Health Insurance Experiment by Joseph Newhouse. It's not exactly a page-turner. It's more of an eat-your-vegetables kind of book. I've been thumbing through it recently. I am familiar with the conclusions (which I'll share below) because of the classic article Cut Medicine In Half by Robin Hanson. That piece was the lead essay in a Cato Unbound forum. I had thought that maybe Hanson drew some weird contrarian conclusions from the study. Indeed, three other health policy wonks disagreed with him (err...without actually disagreeing with him; you'll have to see what they say and how they fail to meaningfully respond to Hanson). Not contrarian at all, actually: Hanson was pretty much drawing the most straightforward possible conclusion from the RAND study. This slays some political sacred cows, but people should face the information with their eyes wide open. They shouldn't be engaging in casuistry to avoid the obvious. It's fine to speculate that "The effect of free medicine is clinically important, but it's hard to see in small datasets because of 'statistical significance' issues." But people who take such positions should admit that they are speculating beyond a straightforward interpretation of the best data we have on this question.

 Here's the relevant part (starting on page 201; emphasis mine):
For the average person there were no substantial benefits from free care (Table 6.6). There are beneficial effects for blood pressure and corrected vision only; ignoring the issue of multiple comparisons, we can reject at the 5 percent level the hypothesis that these two effects arose by chance, but we do not believe the caveat about multiple comparisons to be important in this case. We investigate below the mechanisms by which these differences might have arisen; the results from these further analyses strongly suggest that the results did not occur by chance.
For most health status measures the difference between the means for those enrolled in the free plan and those enrolled in the cost-sharing plan did not differ at conventional levels. Many of these conditions are rather rare, however, raising the possibility that free care might have had an undetected beneficial effect on several of them. To determine whether this was the case we conducted an omnibus test, the results of which make it unlikely that free care had any beneficial effect on several conditions as a group that we failed to detect when we considered the conditions one at a time. 
If the various conditions are independent and if free care were, for example, one standard error better than cost sharing for each measure, then of the 23 psychologic measures in Table 6.6 we would expect to see four measures significantly better on the free plan (at the 5 percent level using a two-tailed test), and none significantly worse. Among the insignificant comparisons, 15 would favor free care and only 4 would favor cost sharing. In fact three measures are significantly better on the free plan and none is significantly worse, but 13 of the 23 measures rather than the predicted 4 favor the cost-sharing plan. Hence it is very unlikely that free care causes one standard error of difference in each measure. If the independence assumption is violated, the violation is probably in the direction of positive dependence, in which case accounting for such dependencies would only strengthen our conclusion. Moreover, one standard error of difference is not a very large difference -- about half of the 95 percent confidence interval shown in the fourth column of Table 6 (equal, for example, to one milligram per deciliter of cholesterol). 
The same qualitative conclusions hold for persons at elevated risk (table 6.7). In this group, those on the free plan had nominally significantly higher hemoglobin but worse hearing in the left ear. Again outcomes on 13 of 23 measures favored cost sharing.

Starting at the top of page 204:
Hypertension and vision. Further examination shows that the improvements for hypertension and far vision are concentrated among those low-income enrollees at elevated risk (Table 6.8). Indeed, there was virtually no difference in diastolic blood pressure readings across the plans for those at elevated risk who were in the upper 40 percent of the income distribution. 
Because the low-income elevated risk group is small (usually between 5 and 10 percent of the original sample depending on the health status measure), the outcome differences for that group between the free and cost-sharing groups have relatively large standard errors. These results might be taken to mean that we missed beneficial effects for the low-income, elevated risk group for certain measures. But although this might be the case for a small number of measures, it is unlikely to be generally true. If we apply the same omnibus test just described to the low- and high-income groups shown in Table 6.8, we would expect that if there were a true one standard error favorable difference for the free plan for each measure, 2 of the 13 comparisons in Table 6.8 would be significantly positive and 2 would be negative, but none would be significantly negative. Of the 9 that would be insignificantly positive at the 5 percent level, 6 would have values of significance between 5 and 20 percent. The data in Table 6.8 show that for the low-income group, none (rather than 2) of the 13 comparisons is significantly positive at the 5 percent level; 4 (rather than 6) are significant at the 20 percent level; and 4 (rather than 2) are negative, one (acne) significantly so. For the high-income group, 7 of the 13 results favor the free-care plan, and the results are even "less significant" than one would expect at random (that is, one would have expected 2 or 3 differences "significant" at the 20 percent level among 13 comparisons, even if there were no true differences, whereas only one comparison was significant at this level).
Sorry, you'll need to get the book to see the actual charts. (I typed this while looking at my copy of the book and double-checked it. I sincerely apologize if I mistyped something, but on a double-check what I typed matches what's in my book.) I like this concept of an "omnibus test." Note that the question isn't exactly "What dimensions of health improve when we give people free medicine?" but rather the much more modest "Does free medicine improve health at all?" I like this exercise of saying, "What would I expect to see if free medicine had a significant effect on health?", comparing that to the observation, and concluding "What we predicted did not match what we observed." Keep in mind that the people with free care consumed something like 30-40% more medicine, apparently to no effect.
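As a sanity check on that omnibus logic, here is the arithmetic behind Newhouse's predicted counts; this is my own back-of-the-envelope reconstruction in Python, not code from the book. If each of the 23 measures were truly one standard error better under free care, each observed z-statistic would be distributed N(1, 1):

```python
# Expected counts of better/worse results under a true one-standard-error
# benefit for every measure, using a 5% two-tailed significance test.
from scipy.stats import norm

n_measures, effect = 23, 1.0  # 23 measures, each truly 1 SE better
crit = norm.ppf(0.975)        # two-tailed 5% cutoff, ~1.96

p = {
    "significantly better": 1 - norm.cdf(crit - effect),
    "significantly worse": norm.cdf(-crit - effect),
    "insignificant, favoring free care": norm.cdf(crit - effect) - norm.cdf(-effect),
    "insignificant, favoring cost sharing": norm.cdf(-effect) - norm.cdf(-crit - effect),
}
for label, prob in p.items():
    print(f"{label:38s} expected: {n_measures * prob:4.1f}")
# -> roughly 4 / 0 / 15 / 4, matching the book's prediction. The observed
#    split of 3 / 0 / 7 / 13 is nowhere close, which is the basis for
#    rejecting a one-standard-error benefit across the board.
```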

There is much more in the book, all in a similar vein. Giving people free medicine, even at-risk, low-income people, doesn't seem to make them any healthier. If someone wants to take issue because the sample size is too small, I will join them in asking for the RAND study to be redone with a much larger sample size. I won't stand for someone insisting that no data whatsoever, however carefully collected, can ever have policy implications they don't approve of. That seems to be most of what I get from the popular media. Whenever there is a proposal to change health policy, there is a lot of shrill doom-saying by the proponents of socialized medicine. They speak as if any reductions made to the medical welfare state represent a lethal threat to people in poverty. I get the sense that they don't even realize they're making empirical claims. Well, we have the RAND study, and more recently the Oregon Medicaid Experiment. We have two randomized controlled experiments demonstrating that free medicine just doesn't seem to have health benefits, and we have tons of observational studies coming to the same conclusion.

Friday, September 29, 2017

The Proper Role of Legislature In Society

Most days I'm an anarchocapitalist. On my off days I'm an extreme minarchist. I've been thinking about what the proper role of a legislature is in society, including whether there is a place for such a thing in a world of zero or near-zero government. There probably is, but the role is much smaller and more circumscribed than the one played by the legislature in most modern democracies.

"But in anarchocapitalism," the naive response might begin, "don't you simply contract for everything? Isn't the argument that you don't need government because everything is spelled out in contractual agreements between individuals?" This is wrong, because there are bound to be disputes in any society under any conceivable system of government. Contractual language is bound to be ambiguous, leading to disputes between contracting parties. In a nation state, there are also disputes about what "the law" actually means, and citizens must occasionally sue their governments to assert (or at least clarify) their rights. So that's the first point: under any system of government, you need lawyers, arbitration, litigation, courts and judges to resolve disputes. You need someone who can bang a gavel and say, "Pay him back his $200, or you'll be marked a scoundrel." Anarchocapitalist writers, like David Friedman, Bruce Benson, and Peter Leeson, have quite a lot to say about dispute resolution in the absence of government. You might even say it's the focus of anarchocapitalist literature. Any boob can say, "This all works out fine, because there are prior agreements and the rules are spelled out plainly." But disputes are inevitable. The interesting stuff happens when there are disagreements about the rules and contracts that have been written.

Suppose you have commonly-occurring disputes that are resolved in different ways in different regions. But some firms operate across these different regions, so they need to be compliant with various inconsistent judicial rulings. This could end up being incredibly stifling, because the rule of law is not clear. It differs from one city to the next, because one judge zigs while the other zags. It might be desirable to impose some consistency such that people know the rules ahead of time and can plan accordingly. This is the proper role for a legislature. It should clarify the law and resolve opposing judicial decisions regarding similar cases, not invent new law out of whole cloth.

Some amount of pure policy-making might be unavoidable. Maybe your minarchy needs to go to war with a neighboring state, or maybe you face some internal crisis of the kind that was constantly plaguing the United States early in its history. Perhaps there are a few coordination problems too large to be solved privately. We need a canal dug or a highway built, so we need an elegant way to secure easements across many thousands of properties. Or we need a unified system of intellectual property rights so that innovators with very high research costs can recoup their investments. I'm sympathetic to the idea that some projects are rendered incredibly difficult, even unfeasible, without a central government overseeing them. Mostly, the legislature should see its role as clarifying existing law, not changing the law to something that some central planner deems desirable. Inventing penalties for the use and sale of drugs, limiting immigration into the country, constraining the allowable range of labor contract provisions, and redistributing wealth are not the proper purview of the legislature. We need to be a lot more humble about our ability to reshape society to make the unpleasant things go away. Legislatures should get out of the business of banning things. But it's probably useful to have groups of representatives assemble and review the existing law to see if it makes sense.

(Great podcast on law versus legislation on EconTalk here, with Don Boudreaux as the guest. "The law" is the set of rules we all implicitly follow. "Legislation" is the set of dictated rules made by fiat. "The law" says you may speed up to about 5 over the speed limit, while "legislation" restricts you to the number posted on the sign. At least in Boudreaux's usage. At any rate this is a very useful concept.)

I'll take an example I've used before: a careless employee who can easily lose his employer a lot of money. The employee takes a call, mishears an order, and upon delivery the customer says, "That's not what I ordered." The employer, who can't tolerate money-losing misplaced orders, fills the correct order and makes the employee pay for the cost of the misplaced one. The employee says, "No fair! Occasional wrong orders are a cost of doing business. It doesn't say anything in my contract about having to pay for a misplaced order. This eats up an hour's worth of my five-hour shift!" The employer says, "This is your fourth misplaced order this week. You are costing me money, so you're going to have to pay for your mistakes." Here there is a legitimate dispute over who should pay. If the employer is wrong and withholds a misplaced-order's worth of wages, he is stealing from his employee. If the employee is wrong, he is insisting on the right to injure his employer with impunity. Someone will have to step in and resolve the conflict. You want to make sure similar cases are resolved in similar ways such that everybody knows the rules ahead of time.

The anarchocapitalist has an answer to this. Judicial decisions can set the default rules, but parties can contract around these rules by explicitly specifying the details. But even here, maybe a judge rules that some obscure provision was buried in the fine print of a simple low-skilled labor contract, and such a person can't be expected to read and understand such provisions. Maybe these kinds of contract provisions get thrown out, and to my point get thrown out inconsistently across regions. Such a society might want something that functions like a legislature, clarifying the law by fiat. Another anarchocapitalist answer is that the contracting parties can specify ahead of time which judge (or which dispute resolution firm) will solve any disputes. (David Friedman describes this kind of arrangement frequently in his talks and (I think) in his excellent book The Machinery of Freedom.) But then I can imagine a class-action suit in which a large number of employees cry foul. "No fair. You stacked the deck against us, so we're suing you in another court." Dispute resolution is hard. It's a little bit question-begging to say, "All potential disputes will be specified ahead of time in the contract," and it's just further question-begging to say, "Methods for resolving disputes will be determined ahead of time in the contract." There can be legitimate disputes over any of the details in a contract, including details about how disputes will be resolved. Once again, this kind of society might want a legislature that clarifies the rules.

Of course, legislation can also be confusing. An attempt to clarify can confuse. "Hmm...the law says: If X, then Y. It also says: If A, then B. Wait, does the condition X apply here? Or condition A? Judges?" So this needs to be done carefully. The inefficiencies of inconsistent judicial rulings might be too small to justify the drawbacks of having a legislature that rules by fiat. Don't mistake me as arguing that an initially anarchocapitalist society will inevitably institute some form of central government. I just want to modestly suggest that dispute resolution is really hard. It's going to suck. Even if anarchocapitalism is better than having a central government, these painful disputes over the rules of the game will arise now and then.

Thursday, September 28, 2017

Cycling Caffeine

I wanted to share my experiences with caffeine because it might be helpful to someone with a similar problem.

For a while I was a daily coffee drinker. I would have a 16-ounce cup in the morning and another one at lunch. I love the taste of coffee and the mild buzz from a low dose of caffeine. (A high dose is extremely unpleasant.) But this started to cause problems. I was fine for the morning and after lunch, but after my lunchtime coffee I would be miserable. As my caffeine levels dropped, I would get pretty severe withdrawal symptoms. Tension headaches. Fatigue. An all-around miserable feeling. Even an irregular heartbeat, which can be incredibly jarring. I even wore a heart monitor once. My doctor reassured me it was nothing to worry about, but it was still an incredibly unpleasant sensation. I thought there was something really wrong with me.

I knew my heart was healthy enough. I do a pretty extreme martial arts/gymnastics workout every day. I essentially never get physically tired from this. Still, like I said, jarring.

Just over a year ago, I got my own coffee thermos and started brewing my own mix. It would always be some combination of regular grounds plus some decaf grounds. I love the taste of dark coffee but don’t always want all the caffeine. So one day I had only a single cup of low-caffeine coffee in my thermos. That was all I had that day. I got all the symptoms. Fatigue. Headaches. Heart palpitations. It was an “Ah ha!” moment for me. This was what was making me miserable. I suspected that too much caffeine might have caused my occasional irregular heartbeat (and drinking too much coffee definitely sometimes has this effect on me), but I never suspected that caffeine withdrawal was the culprit. I decided I would tough it out. Rather than “cure” myself by drinking some coffee, I decided I’d just take the discomfort. If this fixed my feeling awful every day, it would be worth it. So I went off coffee for a few weeks. (A week into my caffeine withdrawals, Scott Alexander posted this wonderful piece about drug tolerance. Good timing, Scott.)

Like I said, I love coffee. I didn’t want to give it up forever. I made the conscious decision to start drinking coffee again, but this time I’d be careful with it. No more day-long tension headaches and heart palpitations. I decided I’d drink it twice a week. This works well for me. On Saturdays and Wednesdays I’ll mix a strong brew of coffee. Sundays, Tuesdays, and Thursdays I mix a tiny volume of regular coffee with a lot of decaf (which actually does contain small amounts of caffeine, BTW). I’ll have a single caffeinated soda on Mondays and Fridays if I feel like I need it. I tried doing coffee every other day, but my withdrawal symptoms came back. Twice a week at most is apparently all this delicate snowflake can handle without acquiring a real physical dependence. Cycling on and off caffeine has been a lot better than having it every day. I pretty much never have withdrawal symptoms, unless I cheat and drink coffee several days in a row. (I have done this just a couple of times on weeks when I was traveling and out of my normal routine.)

___________________________________________________

Yes, I know it's dangerous to draw any conclusions based on anecdotes. Yes, I know the "treatment" and the outcome can be a coincidence. 

Tuesday, September 26, 2017

Dark Text on Light Background Or Vice Versa?

Should I stick with light text on a dark background? Switch to dark text on a light background? No preference? I'm not seeing any convincing evidence that either one is inherently preferable to the other.

Monday, September 25, 2017

Prescription Opioid Abuse Is Declining

And the government is failing to disclose it. I wrote about this in my previous post, but I think it's important enough to give it its own post. The two graphs below are from pages 7 and 26 of SAMHSA's Behavioral Health Trends in the United States: Results from the 2014 National Survey on Drug Use and Health (here).

[Two figures from the 2014 NSDUH report: past month nonmedical pain reliever use (p. 7) and pain reliever use disorder (p. 26), by age group, 2002-2014.]

The top chart plainly shows that past month prescription opioid abuse is flat over the past decade or so, the era in which we supposedly saw an exploding opioid epidemic due to "over"-prescription of opioids (in some observers' estimations). The second chart shows that prescription opioid-related substance abuse disorders are also flat over the same period, even decreasing for some age demographics. 

In the 2016 version of this document, both of those charts go missing, even though the report shows similar graphics for all the other drug categories covered (cocaine, heroin, alcohol, marijuana, tobacco...). Why suddenly remove this information? I'd think the readers of this document would want to know that this trend is flat, particularly since they're hearing elsewhere that there's an "opioid epidemic" in this country. Like I say in my earlier piece, this is probably due to a methodology change in 2015 in which they started asking about "opioid misuse" and "any opioid use", which might make the numbers not directly comparable. But the question about "opioid misuse" is basically the same as the question from prior years, so I think they should have included it. It's not like adding the question about "any opioid use" dramatically changed the number of reported illicit opioid users. You can look at comparable sections of the text of both the 2014 and 2016 version. Here, once again, is my previous post:
[T]hat report says: "In 2016, an estimated 1.8 million people aged 12 or older had a pain reliever use disorder, which represents 0.7 percent of people aged 12 or older." That report, which I hadn't seen until just today, actually does not include the charts displaying the trend in prescription painkiller misuse and substance use disorders from 2002 to present (the charts shown above in this post). Why not? Why weren't those charts updated for the most recent report? Well, the 2014 version says, "The estimated 1.9 million people aged 12 or older in 2014 who had a pain reliever use disorder (Figure 31) represent 0.7 percent of the people aged 12 or older." The number of people with a substance abuse disorder regarding painkillers decreased by 100,000 people in the last two years. Is SAMHSA trying to disguise a decline in a widely publicized problem? Shame on them if they are.

I can find a similar duo of quotes about declining "past month use". The 2014 report says: "The estimated 4.3 million people aged 12 or older in 2014 who were current nonmedical users of pain relievers represent 1.6 percent of the population aged 12 or older." The 2016 report says: "An estimated 3.3 million people aged 12 or older in 2016 were current misusers of pain relievers, which represents 1.2 percent of the population aged 12 or older." Once again the chart is missing from the 2016 version of the report; if it were there it would show a sharp decline in past month painkiller misuse in 2016. Past month recreational use of prescription painkillers decreased by a million people, and the government is disguising this decline? My best non-cynical explanation for removing the charts is that 2015 was the first year that they started asking about "illicit painkiller use" and "any painkiller use" (previously they had just asked about illicit use). But then they should show the graph with a footnote about the methodology change, like the Monitoring the Future report does.
And I include the following chart from the MTF report, showing what an appropriately-disclosed methodology change looks like:
[Chart from the Monitoring the Future report, with the survey's methodology change footnoted directly on the trendline.]

I don't think there's some massive conspiracy to disguise a flat or declining trend here. In fact, I'd think the government would want to advertise its "success" in turning around a social problem. (Scare quotes around "success" because prescription opioid overdoses haven't declined, and any causal connection between the decline and some government policy would be suspect.) An editorial decision to remove that chart was made for some reason, though. This is pretty sloppy work, in my opinion.

Sunday, September 24, 2017

Debunking the Standard Narrative on the "Opioid Epidemic": A Response to Vox

This post will be a response to the standard narrative on the "opioid epidemic," using this Vox piece by German Lopez as a foil. I should make very clear that I am not picking on Vox or Mr. Lopez here. They have a lot of company. I see a lot of irresponsible and inaccurate reporting on the so-called opioid epidemic. Details are wrong, important facts are omitted, wild speculation is indulged, and "experts" are selectively cited to tie everything into the standard narrative. Also, this isn't personal. I have tried not to add any gratuitous barbs or insults to this piece. If the Vox piece got something badly wrong, or if Mr. Lopez suggests some policy change that would have horrible consequences, I try to point these things out as matter-of-factly as I can. This can feel like a personal attack even if done carefully, so I've tried to be aware of this. I hope Mr. Lopez finds his way to my piece and that it influences his reporting.

The title of his piece is "The Opioid Epidemic Explained", and the subtitle (tagline?) is "The opioid epidemic could kill as many as 650,000 people in the next decade. Here’s how it got so bad." Both the title and tagline are incredibly misleading.

Lopez opens with this:
If nothing is done, we can expect a lot of people to die: A forecast by STAT concluded that as many as 650,000 people will die over the next 10 years from opioid overdoses — more than the entire city of Baltimore. The US risks losing the equivalent of a whole American city in just one decade.
Link preserved. If you open that link, you will find an article with the title "STAT forecast: Opioids could kill nearly 500,000 Americans in the next decade." 650,000? 500,000? What's going on here? Read the STAT piece. There are ten different scenarios, and Lopez picked the worst case scenario to sensationalize his story. Lopez links to the piece, and does say "as many as 650,000 people will die" over the next ten years. But why lead with the highest plausible estimate, rather than a mid-range estimate? The middle scenarios give you numbers in the 350,000 - 400,000 range. Still scary, I suppose, but why exaggerate? A careless reader will anchor to the 650,000 figure and not remember that it's an extremely pessimistic and unlikely scenario. And why sum across an entire decade anyway? A more valuable piece of information might be something like "risk per legal prescription" or "risk per user." (We'll get to that later.) It is incredibly bad "public health" analysis to sum up a risk across a huge population to get a large number, then build your case on how big that number is. It's even worse to sum across multiple years. Why stop at 10 years, anyway? Why not sum up 20 or 30 years and make it a cool million? Why not sum up across an entire century? If America had three times the population, and thus three times the expected number of overdose deaths, would the problem be three times worse? If some hypothetical future society with one trillion people had the same rate of opioid overdose mortality, would it be three thousand times as big a problem? I don't think so. The relevant measure of risk is per user per year (or some other relevant time period). This piece is off to a bad start.
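To make this concrete, here's a minimal sketch in Python of why a summed decade total is the wrong statistic. Only the scenario totals come from the STAT piece discussed above; the user count is a round-number assumption of mine:

    # Only the STAT scenario totals come from the article discussed above;
    # the user count is an assumption for illustration.
    worst_case = 650_000   # STAT's most pessimistic 10-year scenario
    mid_case = 375_000     # midpoint of the 350,000-400,000 middle scenarios

    users = 85_000_000     # assumed pool of legal prescription opioid users
    years = 10

    for total in (worst_case, mid_case):
        rate = total / (users * years)
        print(f"{total:,} deaths per decade -> {rate:.4%} per user per year")

    # Triple the population and the decade total triples, but the per-user
    # rate is unchanged, which is why the total alone tells you so little.

The decade total grows with the size of the exposed population and the number of years you sum over; the per-user-per-year rate does not, which is why it's the number a reader actually needs.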

From the next paragraph:
In 2015, more than 52,000 people died of drug overdoses in America — about two-thirds of which were linked to opioids. The toll is on its way up, with an analysis of preliminary data from the New York Times finding that 59,000 to 65,000 likely died from drug overdoses in 2016.
Once again, he anchors the reader to an irrelevant number: 52,000. A careful reader will pull out his calculator, multiply 52,000 by 2/3, and get 34,667. Then Lopez immediately jumps back to the total number of drug overdoses for 2016, in a post that's supposedly about the opioid crisis. Why is he adding together cocaine, methamphetamine, and benzodiazepine-related deaths? Again, a careful reader will remember to multiply by 2/3, but why doesn't Lopez just say clearly what the relevant numbers are? It's as if I were writing a piece about vehicle-related fatalities by adding together auto fatalities and gun deaths, then saying, "About half of these are auto deaths."

The actual figure, by the way, is 33,204. That's adding together prescription opioids, methadone, synthetic opioids, and heroin, avoiding double-counting (most deaths involve multiple substances). And if you remove suicides and likely suicides, you get 29,490. Drug overdose data suffers from a similar problem to the one faced by "gun death" statistics, in that suicides are included in the total. This isn't the fault of the statistics, of course, but rather the fault of sloppy reporting that adds unlike things together to get an inflated total. Of the 52,623 drug overdose deaths in 2015, 5,215 were suicides and 2,979 were of "undetermined intent." Ninety-four were "murder." (Arguably some fraction of the "undetermined intent" deaths should be added in when tabulating "accidental" deaths, but this number hasn't risen over the last 15 years, so it's not really part of the trendline we're interested in.) I don't think it's fair to blame suicides on opioid over-prescription. You can argue that despondent addicts are giving up on life and killing themselves, but then you're speculating wildly about whether that person would have died if not for the causal factor we're interested in. See my thorough breakdown of the 2015 overdose deaths here. I actually warn my readers near the beginning of that post:
"I suspect you will see a lot of news stories starting with “There were 52,600 drug overdoses in 2015…” If you see such a story, scan it to see if it gives a breakdown by “accidental vs intentional.” If it doesn’t, that’s a big warning sign that the author didn’t do their homework.
This warning certainly applies to the Vox piece. Anyone who is actually curious about this important topic can download the data from the CDC's website and dissect it however they like.
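Anyone who pulls the CDC file can reproduce the intent breakdown in a few lines. Here's a sketch in Python using only the 2015 figures already quoted above:

    # 2015 drug overdose deaths by intent, using the figures quoted above.
    total = 52_623
    suicides = 5_215
    undetermined = 2_979
    homicides = 94

    accidental = total - suicides - undetermined - homicides
    print(f"Accidental: {accidental:,} ({accidental / total:.0%} of all overdose deaths)")

    # Opioid-specific totals quoted above (prescription opioids, methadone,
    # synthetic opioids, and heroin, net of double-counting):
    opioid_deaths = 33_204
    opioid_excluding_suicides = 29_490
    print(f"Opioid suicides and likely suicides: {opioid_deaths - opioid_excluding_suicides:,}")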

The Vox piece then gives the standard narrative explanation of how we got here:
Over the past couple of decades, the health care system, bolstered by pharmaceutical companies, flooded the US with painkillers. Then illicit drug traffickers followed suit, inundating the country with heroin and other illegally produced opioids that people could use once they ran out of painkillers or wanted something stronger. All of this made it very easy to obtain and misuse drugs.
The author should have told his readers that this is a wild guess. There is no convincing proof that the new heroin users are former prescription opioid addicts. Or, at any rate, there is no convincing evidence that prior prescription opioid use caused subsequent heroin use. The great fallacy in this "opioid epidemic" narrative is that the heroin users and prescription opioid users are the same population. In 2015, 85 million people used prescription opioids legally, and there were ~200 million legal prescriptions. By contrast, there were half a million heroin users. But there were comparable numbers of deaths in both categories (~13,000 from prescription opioids and ~12,000 from heroin, or 18,500 if you add the "heroin" and "synthetic opioids" categories together to capture the fact that some dealers have been mixing fentanyl in with heroin). Some relevant numbers are in my piece here, once again from the CDC's website. Deaths per legal opioid user are in the 0.015% range; deaths per heroin user are probably somewhere in the 1% to 3.5% range. In other words, the prescription opioid deaths are a very small risk applied to a very large population. The heroin-related deaths are an extremely high risk applied to a relatively small population (about half a million users according to survey data, but see the caveats in my piece about the size of the heroin-using population). These are very different issues with very different underlying social causes. It makes little sense to add them together just because the chemical mechanism is the same. The "opioid epidemic" isn't a thing. It's several things.
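Here's the back-of-the-envelope version of that comparison in Python. The only assumption of mine is the range of heroin-user counts, since the survey-based half-million figure is probably an undercount:

    # Per-user annual death rates, using the 2015 figures quoted above.
    rx_users = 85_000_000
    rx_deaths = 13_000
    print(f"Prescription opioids: {rx_deaths / rx_users:.3%} per user per year")

    heroin_deaths = 18_500  # heroin + synthetic opioids, to capture fentanyl-laced batches
    for heroin_users in (500_000, 1_000_000, 1_500_000):  # assumed range of user counts
        print(f"Heroin, {heroin_users:,} users: {heroin_deaths / heroin_users:.1%} per user per year")

Even at the most generous user count, the heroin risk is roughly two orders of magnitude higher than the prescription risk, which is the whole point of keeping the two populations separate.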

The Vox piece now weaves a narrative of irresponsible doctors prescribing way too many opioids. There are several drivers of this trend. There was a change in philosophy on how pain should be treated. Pharmaceutical companies developed new opioids and supposedly bamboozled impressionable doctors into over-prescribing them. There is no doubt that the sheer tonnage of opioids prescribed increased; the government tracks these figures and they are certainly rising. See the chart at the bottom of this page. But it's not at all clear that the expansion of opioid prescribing was inappropriate. Second-guessing doctors prescribing pain medicine is a very dangerous business. If Mr. Lopez is wrong, but the force of his argument nonetheless determines the course of US drug policy, he may be damning many people to unnecessary suffering. He says it himself in the Vox article:
On the patient side, there were serious medical issues that needed to be addressed. For one, the Institute of Medicine has estimated that about 100 million US adults suffer from chronic pain. Given that the evidence shows opioids pose more risks than benefits in the majority of these cases, patients likely should obtain other treatments for chronic pain, such as non-opioid medications, special physical exercises, alternative medicine approaches (such as acupuncture and meditation), and techniques for how to self-manage and mitigate pain.
I've seen the 100 million figure before. I don't know if it's right or not, but if it's even the right order of magnitude this is a huge problem. We should not be placing any restrictions on how doctors treat these patients, who according to the 100 million figure comprise almost a third of the US population. That's not to say 1/3 of us are constantly walking around in agonizing pain, but rather that a third of us have occasional flare-ups of intractable pain. It is downright cruel to take treatment options off the table.

It's a bit amusing that Lopez so cavalierly dismisses prescription opioids for chronic pain and then suggests acupuncture and meditation, which are basically placebo treatments. Of course opioids work for pain management. People can feel the relief almost immediately. People have used opium for thousands of years. In Montana, where it's hard for chronic pain sufferers to get the treatment they need, many pain patients flee the state to get their necessary prescriptions. (From the link: “My pain, it’s all from my waist down,” he said. “It’s like being boiled in oil 24 hours a day.”) Many pain doctors are getting fed up with idiotic, politically motivated restrictions on their practice, which condemn many of their patients to endless suffering. Some pain patients have committed suicide after being cut off from their only source of relief. Mr. Lopez makes it sound like it's so very easy, as if prescribing fewer opioids would have made these problems go away. Not so. You're always going to have this false-positives/false-negatives trade-off. There isn't a simple "make fewer mistakes" lever. There isn't a magic "accurately identify appropriate candidates for opioids" button. Greater accuracy isn't an option. The people making the call (doctors) have the highest possible level of education. They possess the most information they could plausibly obtain about the patient's medical history. They are constantly doing continuing education to keep up with new trends in medicine. (I dearly hope they aren't looking to Vox for their information.) We can't discriminate more accurately on a systematic basis; we can only change the discrimination threshold. You can prescribe opioids more freely, knowing that a few more people who don't need them will get them. Or you can prescribe opioids more restrictively, knowing that more people who actually need them won't get them. A false negative is way more costly than a false positive here. We should be willing to tolerate a lot of false positives. The "downside" of being too permissive is that some people who use opioids because they enjoy them get to indulge their vice. [Edit 9/26/2017: I should clarify, I am not knocking either acupuncture or meditation for pain sufferers who feel like these treatments work. I would expect placebo treatments to be pretty effective for pain management. It's possible that the effectiveness of opioids is partially a placebo effect, too. I believe pain is one of the more subjective symptoms in medicine, and we should default to believing people who say they suffer from it. We should also default to believing people who say they have found a solution to it.]

It reminds me of journalists and economists who complain about all the "unnecessary medicine" provided in the United States, as if we can just categorize all medicine as "necessary" or "unnecessary" and then stop doing the unnecessary stuff. The problem is that these things are never certain. Nothing is "100% necessary" or "100% unnecessary." Rather, the best we can do is have some confidence level: "I'm 10% sure this is necessary" or "I'm 95% sure this is necessary." And then establish some sort of threshold, as in "We'll do everything that's at least 50% necessary." (This threshold should vary with the relative costs of false positives and false negatives, of course.) Mr. Lopez can correctly say that there is a lot of unnecessary opioid prescription, but he hasn't really given us a better means of discriminating "necessary" from "unnecessary," nor has he made the case that the threshold should be made more restrictive.
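In decision-theory terms, the only available lever is the threshold, and where it belongs depends on the relative costs of the two kinds of error. A toy sketch in Python; the costs are invented purely for illustration:

    def should_treat(p_necessary, cost_false_positive, cost_false_negative):
        """Treat when the expected cost of withholding exceeds the expected cost of treating."""
        # Expected cost of treating:    (1 - p) * cost_false_positive
        # Expected cost of withholding:       p * cost_false_negative
        threshold = cost_false_positive / (cost_false_positive + cost_false_negative)
        return p_necessary > threshold

    # If untreated pain (a false negative) is 20x worse than an unneeded
    # prescription (a false positive), even a 10%-sure case clears the bar.
    print(should_treat(0.10, cost_false_positive=1, cost_false_negative=20))   # True
    print(should_treat(0.10, cost_false_positive=20, cost_false_negative=1))   # False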

The Vox piece says:
And in other cases, the doctors involved were outright malicious — establishing “pill mills” in which they gave away opioids with little scrutiny, often for hard cash.
How "malicious" it is to sell something to willing buyers. The downside is that people who want to get high get to.

Another section of the piece starts with the title "Heroin and Fentanyl made the crisis much worse." Again, referring to it as "the crisis" incorrectly collapses heroin, fentanyl, and prescription opioids into a single problem. Lopez tries to connect this to opioid prescriptions, but I think he fails. From the piece:
A 2014 study in JAMA Psychiatry found 75 percent of heroin users in treatment started with painkillers, and a 2015 analysis by the CDC found people who are addicted to painkillers are 40 times more likely to be addicted to heroin.
If you read the study, it says nothing about the causal link between prescription painkillers and subsequent heroin use. Recall that there were 85 million prescription opioid users in 2015. If any of them subsequently become heroin users, they will be counted in the 75%. When I had some serious oral surgery done in 2001, I was prescribed some hydrocodone. If at any point in my life I become a heroin user, I will be counted in the 75%. [Edit 9/26/2017: This is incorrect; the 75% figure refers to past opioid abuse, not legal use. The first sentence of this paragraph holds up pretty well under this correction. According to the detailed tables of the SAMHSA survey from 2014, there were ~36 million prior non-medical users of prescription opioids. "Prior" meaning lifetime, not past year or month. Still a very large pool of people for the 75% figure to arise from.] This is much like the argument, popular among drug warriors, that marijuana is a gateway drug because most users of hard drugs start with marijuana. There are simply so many current and past prescription opioid users that most heroin users will probably have had past experience with these drugs, but that says nothing at all about a causal link. Indeed, the vast majority of prescription opioid users never go on to use heroin, so the causal link is dubious.
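The arithmetic here is just the base-rate fallacy, and Bayes' rule makes it explicit. A sketch in Python using the corrected figures from the edit above; the lifetime heroin-user count is my assumption:

    # P(prior painkiller misuse | heroin user) = 0.75, per the study Lopez cites.
    # The question that matters is the inverse: P(heroin use | painkiller misuse).
    p_misuse_given_heroin = 0.75
    heroin_users = 900_000        # assumed lifetime pool (the past-year survey count is ~0.5M)
    past_misusers = 36_000_000    # lifetime nonmedical users, per the SAMHSA 2014 detailed tables

    p_heroin_given_misuse = p_misuse_given_heroin * heroin_users / past_misusers
    print(f"P(heroin | past painkiller misuse) ~ {p_heroin_given_misuse:.1%}")  # ~1.9%

Even granting the 75% figure, under these numbers fewer than one past misuser in fifty ever goes on to heroin, which is why the 75% says nothing about a causal pipeline. Back to the Vox piece: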
Although prescription opioid overdose deaths have really hit middle-aged and older Americans in their 40s and up, there’s evidence that heroin and fentanyl are much more likely to hit younger adults in their 20s and early 30s — creating a divide in the epidemic by age.
I'm not quite sure where he's going with this. For the record, the average age of someone who died of heroin in 2014 was 38. The average age of someone who died of prescription opioids in 2014 was 44. The users skew much younger than the average overdose death, implying that age is a huge risk factor in overdose deaths.

There are some good statistics in the Vox piece. Lopez points out that a huge fraction of opioid deaths (heroin and prescription painkillers) are really multi-drug interactions. I wish he'd made a much bigger deal out of this. Take a look at my dissection of the 2015 overdose deaths (also linked above). He refers to a couple of studies, but anyone can pull the data off the CDC website and do their own dissection. It's public information (although every death record is anonymous). If these deaths are multi-drug interactions, then the policy implications are much milder than "keep drugs away from people." Rather, "let them have their drugs but remind them not to mix certain drugs" would suffice. "Get high, but do it this safe way..." is an easier sell than "don't get high." Only a small percentage of drug overdose deaths are single-substance overdoses (~14% of prescription opioid poisoning deaths, ~25% of heroin deaths, ~1% of benzodiazepine deaths). If we can get these people to stop mixing substances, we'd save a lot of lives. Lopez should have focused his attention on this.

I'll step away from directly quoting and responding to the Vox piece and make a couple of observations.

Prescription Opioid Abuse Isn't Increasing!

There's all this talk about a prescription opioid epidemic. Gee, if only the government kept statistics on rates of drug use, broken down by drug category. Oh, wait, they do! Here is a figure from page 7 of the SAMHSA's Behavioral Health Trends in the United States: Results from the 2014 National Survey on Drug Use and Health (link).
[Figure from p. 7 of the 2014 NSDUH report: past month nonmedical pain reliever use by age group, 2002-2014.]

It's basically flat. It's even decreasing for younger demographics. Opioid epidemic busted? Okay, so maybe "past month use" isn't indicative of overdose risk. Maybe it's just a subset of these users, the really dangerous ones, who wind up killing themselves. If only they kept track of some other statistic indicating a more severe problem with prescription pain relievers. Oh, wait, they do that, too! Here is page 26 of the same document:
[Figure from p. 26 of the same report: pain reliever use disorder by age group, 2002-2014.]

Once again, it's basically flat, and probably decreasing for the younger demographic. And just for good measure, here are some charts from the Monitoring the Future survey, which only covers 8th, 10th, and 12th graders. See page 31 of this document:
[Charts from p. 31 of the Monitoring the Future report: youth use and perceived availability of prescription narcotics.]
Admittedly this is a restricted age demographic, very different from the population of people who are actually dying from prescription opioids. But it confirms the notion from the SAMHSA survey that youth use rates are falling. And the "availability" question is telling: availability appears to be falling, indicating that these substances are harder for this age demographic to get than they were a decade ago. (The discontinuity in the curve represents a methodology change in which the survey question was re-worded, so interpret the charts with that in mind.)

I have heard people dismiss the survey data. The argument goes something like, "If someone has a prescription, they're not going to count themselves as 'misusing.'" That is, there's some kind of reporting error in the survey: people either don't consider themselves to be "misusing" their opioids, or they know damn well they are misusing but are reluctant to say so on a government survey. I am sympathetic to this, but I find it really implausible that this would completely mask a trend of increasing painkiller abuse. Prescription opioid overdose deaths roughly tripled from 1999 to 2014, but the number of self-reported abusers is totally flat? Prescription opioid deaths basically flat-lined around 2010, presumably because of political responses to the so-called crisis. Wouldn't the "under-reporting" story imply an increase in self-reported painkiller misuse around that time? I can buy that there's some reporting bias in these statistics, but it's hard to swallow the idea that the bias adjusts itself to hide any movement in opioid use: first completely masking a rising trend, then masking the flattening of that trend. If you want to dismiss the SAMHSA statistics, propose a mechanism for the bias and be consistent about how it works. And keep in mind that Mr. Lopez actually cites and links to the SAMHSA summary. Either he somehow missed that the trendline for this statistic is flat, or he didn't see fit to share that with his readers.

Lopez points out that:
About 2.1 million people are estimated to have an opioid use disorder in America — and experts widely agree this is, if anything, an underestimate.
The link is to the 2016 SAMHSA survey summary report (my link above is to the 2014 version). Actually, that report says: "In 2016, an estimated 1.8 million people aged 12 or older had a pain reliever use disorder, which represents 0.7 percent of people aged 12 or older." That report, which I hadn't seen until just today, actually does not include the charts displaying the trend in prescription painkiller misuse and substance use disorders from 2002 to present (the charts shown above in this post). Why not? Why weren't those charts updated for the most recent report? Well, the 2014 version says, "The estimated 1.9 million people aged 12 or older in 2014 who had a pain reliever use disorder (Figure 31) represent 0.7 percent of the people aged 12 or older." The number of people with a substance abuse disorder regarding painkillers decreased by 100,000 people in the last two years. Is SAMHSA trying to disguise a decline in a widely publicized problem? Shame on them if they are.

I can find a similar duo of quotes about declining "past month use". The 2014 report says: "The estimated 4.3 million people aged 12 or older in 2014 who were current nonmedical users of pain relievers represent 1.6 percent of the population aged 12 or older." The 2016 report says: "An estimated 3.3 million people aged 12 or older in 2016 were current misusers of pain relievers, which represents 1.2 percent of the population aged 12 or older." Once again the chart is missing from the 2016 version of the report; if it were there it would show a sharp decline in past month painkiller misuse in 2016. Past month recreational use of prescription painkillers decreased by a million people, and the government is disguising this decline? My best non-cynical explanation for removing the charts is that 2015 was the first year that they started asking about "illicit painkiller use" and "any painkiller use" (previously they had just asked about illicit use). But then they should show the graph with a footnote about the methodology change, like the Monitoring the Future report does. Failing to disclose this to the report's readers is just disgraceful.

By the way, the SAMHSA survey shows declining rates of cocaine use from 2006 to present, which is in line with declining numbers of cocaine overdose deaths. It also shows a probably real increase in heroin use, which once again corroborates the increase in heroin overdose deaths. So we can't just go dismissing the drug survey figures out of hand because "people lie about their drug use on government surveys." These surveys are apparently tracking some real trends.

Lopez might have taken this opportunity to at least remark on the statistics that contradict his narrative.

But, while we're in the business of doubting government statistics...

Doubts About The Death Statistics 

There are many things that give me pause about the death statistics themselves. They could be overstated, or understated, for all I know. Determining the cause of death is fundamentally a matter of opinion. If you read Karch's Pathology of Drug Abuse, the standard medical textbook on the topic, it seems like every other sentence is a warning about the pitfalls of assigning a drug-related cause of death. I urge Lopez and other curious readers to pick up a copy and read it thoroughly. See several excerpts from the chapter on opioids here and from the chapter on cocaine here. Also, see the comments that Steven Karch made for Radley Balko's great series on this topic in the Huffington Post here. From the textbook:
Not one single control study, even in animals, has ever shown that postmortem drug concentrations accurately reflect drug concentrations at the time of death, but a goodly number have shown quite the opposite to be true, chiefly because of the problem of postmortem redistribution (Pounder et al., 1996; Hilberg et al., 1999; Moriya and Hashimoto, 1999; Drummer and Gerostamoulos, 2002; Flanagan et al., 2003; Ferner, 2008). Postmortem redistribution is defined as the movement of a drug down a concentration gradient after death.
This is just one quote of many. On what seems like every other line, he reminds the reader: "Hey, it's really hard to determine the cause of death. You can't simply do it based on postmortem drug or metabolite concentrations, which unfortunately is standard practice." Seriously, read through it. Parts of it feel like they're from a text on the philosophy of causal inference. I suspect that, with a lot of people on high-dosage opioids walking around, a few of them randomly drop dead from other causes and get labeled a "drug overdose" by an unwary medical examiner. The examiner might be ignorant, or simply busy and glad to have a handy explanation that allows him to move on. It's certainly the case that people who die of drug overdoses have a lot of other illnesses and medical problems, which end up on the death certificate. This indicates that other causes of death contributed, or perhaps were actually primary. (If an opioid user wouldn't have died but for their sleep apnea, which cause is "primary"? It's almost a philosophical question. But sheer navel-gazing aside, there are also implications for who should and shouldn't get opioids if these other illnesses are overdose risk factors.) The people who die of drug overdoses also tend to be older than the using population in general. It's likely that a lot of these people are sick or old and die for reasons other than their opioid prescription, but the handy explanation is too easy to pass up. And there is plenty of other evidence for a spurious trend in drug overdose deaths: categories that were empty in 1999 but populated in 2014, the ICD-9 to ICD-10 changeover in 1999, the promiscuous use of the generic drug overdose category (as in the examiner couldn't actually blame the death on a particular substance), etc. I'm not suggesting that the entire trend is spurious, just that some proportion of it is not real.

This is one of my pet peeves about these opioid epidemic stories: taking the death totals at face value. A body is a body; there is no doubt that these counts represent people who actually died. But the cause of death is always in question. If thousands of these deaths have the wrong cause of death assigned to them, then we will draw incorrect conclusions if we simply add them up and take them at face value. People like to have facts and numbers to support their story. That's understandable. Statistics feel a lot like facts, immutable nuggets of unimpeachable truth. But if the underlying data are bad, any summarizing, averaging, trendline-fitting, regression analysis, or other statistical magic will give you garbage. I wish people would be a lot more skeptical about these death figures.

Alternative Narrative

My simple story is this: there is some very low probability of overdosing on prescription opioids. The tonnage of opioids prescribed roughly tripled from 1999 to 2015, and so did the number of opioid overdose deaths. The risk per legal prescription did not change. Contrary to the standard narrative, we did not see an increase in illicit use of painkillers despite this massive expansion in their legal use. We have more people exposed to a particular risk, a risk whose magnitude did not change over the past decade and a half. It's as if we were doing a certain surgery, one that always carries some small risk of death, three times as often as we were in 1999. Then somebody tabulated some statistics and showed that, OMG, deaths from that surgery have also tripled! The surgery needs to be evaluated by the following criterion: is the risk worth the benefits for the individual undergoing surgery? Summing the deaths from surgery complications across 300 million people simply does not give you a statistic that is relevant for public health policy. If the surgery is deemed worthwhile, it doesn't matter whether the "death total" is large, or whether it's increasing or decreasing over time. The surgery is either worth doing at the individual level or it isn't. There were about 0.5 deaths per kilogram of opioid prescribed in 1999, and there were about 0.5 deaths per kilogram of opioid prescribed in 2014. This is an unchanging risk being applied to a larger population. Lopez tries to make the case that the expansion of prescription opioids was unnecessary, but he ultimately fails. Plainly a lot of chronic and acute pain sufferers are better off with prescription opioids, and neither Vox nor anyone else has established a sorting mechanism better than a doctor's judgment for deciding who does and does not get these drugs.
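The surgery analogy in numbers, as a Python sketch. The 0.5 deaths-per-kilogram rate is the one quoted above; the tonnage values are placeholders standing in for the real prescribing totals, assumed only to triple:

    # Constant per-kilogram risk applied to a tripled exposure base.
    deaths_per_kg = 0.5                            # roughly unchanged, 1999 vs. 2014
    kg_prescribed = {1999: 16_000, 2014: 48_000}   # placeholder figures, ~3x growth

    for year, kg in kg_prescribed.items():
        print(f"{year}: {kg:,} kg x {deaths_per_kg}/kg = {deaths_per_kg * kg:,.0f} deaths")
    # Deaths triple because exposure tripled; the underlying risk never moved.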

The recent spike in heroin deaths is related to very cheap "heroin". The price drop is due to drug dealers spiking their heroin with fentanyl and stronger synthetic opioids. These opioids are sometimes hundreds of times stronger than heroin and are poorly mixed into batches of drugs that get sold as heroin. So people take something far stronger than they intended and end up overdosing. The increase in heroin deaths showed up very late in the game. The increase in synthetic opioid deaths showed up even later. If prescription painkillers were turning people into heroin addicts, I would have expected heroin deaths to increase steadily from 1999 on, but the trend is flat from 1999 to 2007 (green line below). The causal link here is dubious. See the trends for different drugs in the chart below:
[Chart: overdose deaths by drug category, 1999-2015; heroin (green line) is flat from 1999 to 2007 before spiking.]

A much fuller exploration of alternative narratives is here. My basic explanation is that the prescription opioid overdoses aren't mostly coming from addicts with serious drug problems (though clearly some of them are). They're mostly coming from normal people with legal prescriptions who occasionally do something incautious, like taking painkillers with alcohol or benzodiazepines, or imprudently taking more than the recommended dosage. It's a boring story without a bad guy, and it denies the news-consuming public their craving for a good drug scare. But in my opinion it's the most likely explanation. Not every social problem is a moral panic with an identifiable villain.

I certainly don't disagree with everything in the Vox piece. Sure, make naloxone, the antidote for an opioid overdose, more available. Sure, spend some money on drug treatment programs. (I'm skeptical...do these drug treatment programs even work?) But the basic underlying narrative is simply mistaken.