Wednesday, May 25, 2016

ProPublica’s “Machine Bias” Article Is Incredibly Biased

[If you find yourself getting angry at any of what I’ve written below, please read to the end. I am not rendering a verdict on the racial bias/fairness of the overall criminal justice system. I am making a pretty narrowly focused critique of an article that recently appeared on ProPublica regarding the use of predictive modeling in criminal justice. My expertise, and the topic of this commentary, is predictive modeling, not criminal justice, although I certainly have opinions about both.]

I read this ProPublica article on the use of statistical modeling to predict the recidivism of criminals, and I found it to be incredibly misleading.

To their credit, the authors make their methods and even their data incredibly transparent and link to a sort of technical appendix.

The thrust of the article is that the statistical methods used to predict recidivism by criminals are biased against black people, and that attempts to validate such statistical methods either haven’t bothered to check for such a bias or have overlooked it. I smelled a rat when the article started making one-by-one comparisons. “Here’s a white guy with several prior convictions and a very low risk-score. Here’s a black guy with only one prior and a high risk-score. It turns out the white guy did indeed commit additional crimes, and the black guy didn’t.” (Paraphrasing here, not a real quote.) This absolutely screams cherry-picking to me. I don’t know exactly what else went into the risk scoring (a system called COMPAS by a company named Northpointe), but surely it contains information other than the individual’s prior criminal record. Surely other things (gender, age, personality, questionnaire responses, etc.) are in their predictive model, so it’s conceivable that someone with a criminal history but otherwise a confluence of good factors will be given a low risk score. Likewise, someone with only one prior conviction but a confluence of otherwise bad factors will get a high risk score. After the fact, you can find instances of people who were rated high-risk but didn’t recidivate, and people who were rated low-risk but did recidivate. These one-off "failures" of the model are irrelevant. What matters is how well the predictive model does *in aggregate.*

I build predictive models for a living. I don’t know exactly what kind of model COMPAS is. My best guess is that it’s some kind of logistic regression or a decision tree model. (Some of these details are proprietary.) The output of these models isn’t a “yes/no” answer. It’s a probability. As in, “This person has a 30% chance of committing a crime in the next 2 years.” You can’t measure the performance of a predictive model by looking at individual cases and saying, “This guess was correct, this guess was incorrect….” You have to do this kind of assessment in aggregate. To do this, one would typically calculate everyone’s probability of recidivism, pick some kind of reasonable grouping (like “0-5% probability is group 1, 6-10% probability is group 2, …, 95-100% probability is group 20”), then compare the model predictions to the after-the-fact recidivism rates. If, for example, you identify a grouping of individuals with a 35% probability of recidivism, and 35% of those individuals recidivate, your model is pretty good. You aren’t going to build a model with a sharp distinction like “This guy *will* recidivate and this guy *won’t*.” Most likely you will get probabilities spread out fairly evenly over some range. You could, for example, get an equal number of individuals at every percentile (1%, 2%, 3%, and each integer % up to 100%). More often with models like these you get something that is distributed across a narrower range. Perhaps everyone’s statistically predicted recidivism rate is between, say, 30% and 60%, and the distribution is a bell-curve within that range with a midpoint near 45%. You don’t typically get a bimodal distribution, with people clustering around 1% and 99%. In other words, a predictive model doesn’t make clear “yes/no” predictions.
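
To make that concrete, here’s a minimal sketch in Python of the kind of aggregate check I’m describing, using entirely made-up data (the 30-60% bell curve is the assumption from the previous paragraph, not COMPAS’s actual distribution):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up population: true recidivism probabilities forming a rough
# bell curve between 30% and 60%, as described above.
p_true = np.clip(rng.normal(0.45, 0.06, 10_000), 0.30, 0.60)

# Each person recidivates with their true probability.
outcomes = rng.random(10_000) < p_true

# Pretend the model's predictions equal the true probabilities (a
# perfectly calibrated model). Bin predictions into 5%-wide groups and
# compare predicted vs. observed recidivism rates within each bin.
bin_edges = np.arange(0.0, 1.05, 0.05)
bin_index = np.digitize(p_true, bin_edges)

for b in np.unique(bin_index):
    mask = bin_index == b
    print(f"predicted ~{p_true[mask].mean():.0%}, "
          f"observed {outcomes[mask].mean():.0%}, n={mask.sum()}")
```

A good model shows the observed rate tracking the predicted rate bin by bin. Notice that no individual “right guess / wrong guess” ever enters the assessment.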

I don’t see any understanding of these predictive modeling concepts in the piece above. Nothing in it indicates that the author is competent to validate or judge a predictive model. In fact, when it says things like “Two years later, we know the computer algorithm got it exactly backward,” “Only 20 percent of the people predicted to commit violent crimes actually went on to do so,” and “the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways,” it betrays a real ignorance about how predictive models work. To be fair, the page describing the analysis is far more nuanced than the article itself. Something got lost in translating a reasonable and competent statistical analysis into an article for the general public.

The bottom line for this piece is the “false positive/false negative differences for blacks and whites” result. See the table at the bottom of the main piece. If we look at black defendants who recidivated, 27.99% were rated low risk (the “false negative” rate); this number is 47.72% for whites. The authors interpret this as (my own paraphrase): the algorithm is more lenient for whites, because it’s mislabeling too many of them as low-risk. If we look at black defendants who did not recidivate, 44.85% were rated high-risk (the “false positive” rate); this number is 23.45% for whites. The authors once again interpret this as the algorithm being lenient toward whites, since it is more likely to mislabel non-recidivating blacks as high-risk. At first this really did seem troubling, but then I looked at the underlying numbers.

[Table omitted: high/low score vs. did/didn’t recidivate, by race]

Black defendants were more likely than white defendants to have a high score (58.8% vs 34.8%), but that alone does not imply an unfair bias. Despite their higher scores, blacks had a higher recidivism rate than whites for *both* high and low scoring populations. Blacks with a high score had a 63% recidivism rate, while high-scoring whites had a 59.1% recidivism rate. The difference is even bigger for the low scorers. Anyway, I’m willing to interpret these differences as small and possibly statistically insignificant. But it’s pretty misleading to say that the scoring is unfair to blacks. The scoring is clearly discriminating between the high-recidivism and low-recidivism populations, and its predictive performance is similar for whites and blacks. I think the “false positive/false negative” result described in the above paragraph is just a statistical artifact of the fact that black defendants, for whatever reason, are more likely to recidivate (51.4% vs 39.4%, according to ProPublica’s data). It’s almost as if the authors looked at every conceivable ratio of numbers from this “High/Low score vs Did/didn’t recidivate” table and focused on the only thing that looked unfavorable to blacks.
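
In fact, you can reproduce ProPublica’s false positive and false negative rates using nothing but the aggregate figures quoted above and Bayes’ rule, with no differential treatment by the score assumed anywhere. A quick sketch (the inputs are the rounded percentages from this post, so the outputs match the table to within rounding):

```python
# Inputs per group: share rated high-risk, overall recidivism rate,
# and recidivism rate among high scorers (all quoted above).
groups = {
    "black": {"p_high": 0.588, "p_recid": 0.514, "p_recid_hi": 0.630},
    "white": {"p_high": 0.348, "p_recid": 0.394, "p_recid_hi": 0.591},
}

for name, g in groups.items():
    p_high, p_recid, p_r_hi = g["p_high"], g["p_recid"], g["p_recid_hi"]
    # Recidivism rate among low scorers, implied by the totals:
    p_r_lo = (p_recid - p_high * p_r_hi) / (1 - p_high)
    # "False positive rate": P(high score | did not recidivate).
    fpr = p_high * (1 - p_r_hi) / (1 - p_recid)
    # "False negative rate": P(low score | recidivated).
    fnr = (1 - p_high) * p_r_lo / p_recid
    print(f"{name}: FPR = {fpr:.1%}, FNR = {fnr:.1%}")

# Prints roughly: black FPR 44.8%, FNR 27.9%; white FPR 23.5%, FNR 47.8%.
```

Similar calibration plus different base rates forces the error rates apart. That is the whole “bias” result.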

(It should be noted that this analysis relies on data from Broward County in Florida. We can't necessarily assume any of these results generalize to the rest of the country. Any use of "black" or "white" in this post refers specifically to populations in this not-necessarily-representative sample.)  

It’s kind of baffling when you read the main piece and then see how it’s contradicted by the technical appendix. In the technical details, you can clearly see the survival curves are different for different risk classes. See the survival curves here, about 3/4 of the way down the page. In a survival curve people drop out of the population, in this case because they recidivate (in a more traditional application, because they literally die). The high-risk category is clearly recidivating more than the medium risk category, and the medium more than the low. If you compare the survival curves for the same risk category across races (e.g. compare high-risk blacks to high-risk whites), you can even see how blacks in the same risk category have a slightly higher recidivism rate. Contra the main article, this score is doing exactly what it’s supposed to be doing.
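
For readers who haven’t seen one before, a survival curve is simple to compute. Here’s a minimal sketch of the Kaplan-Meier estimator, the standard tool for this (in practice you’d reach for a library like lifelines rather than rolling your own):

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival estimate.

    time:  follow-up time for each person (e.g., days until recidivism,
           or until observation ended).
    event: True if the person recidivated at `time`, False if they were
           merely lost to follow-up (censored).
    Returns (distinct event times, survival probability after each).
    """
    time, event = np.asarray(time), np.asarray(event, dtype=bool)
    event_times = np.sort(np.unique(time[event]))
    surv, s = [], 1.0
    for t in event_times:
        at_risk = np.sum(time >= t)          # still being followed at t
        d = np.sum((time == t) & event)      # recidivated exactly at t
        s *= 1.0 - d / at_risk               # product-limit update
        surv.append(s)
    return event_times, np.array(surv)
```

Run this separately for the high-, medium-, and low-risk groups and overlay the curves. If the score works, the high-risk curve falls fastest, which is exactly what ProPublica’s appendix shows.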


Sorry if this all seems terribly pedantic and beside the point. I even find myself saying, “Yes, yes, but there really *is* a serious problem here, accurate scores or not.” I’m definitely sympathetic to the idea that our criminal justice system is unfair. We police a lot of things that shouldn’t be policed, and we fail to adequately police things that *should* be policed. Clearance rates on real crimes, like theft, murder, and rape, are pathetically low, while non-crimes like drug use and drug commerce are actively pursued by law enforcement. If these scores are predicting someone’s tendency to commit a drug offense, and such a prediction is used *against* that person, then I will join the chorus condemning these scores as evil. However, it’s not fair to condemn them as “racist” when they are accurately performing their function of predicting recidivism rates and sorting high- and low-risk individuals. I also don’t think it’s fair to condemn the entire enterprise of predictive modeling because “everyone’s an individual.” The act of predicting recidivism will be done *somehow* by *someone*. That somehow/someone could be a computer performing statistical checks on its predictions, basing its predictive model on enormous sample sizes, and updating itself based on new information. Or it could be a weary human judge, perhaps hungering for his lunch break, perhaps irritated by some matter in his personal life, perhaps carrying with him a lifetime’s worth of prejudices, who in almost no one’s conception performs any kind of prediction-checking or belief updating. Personally, I’ll take the computer.

Tuesday, May 24, 2016

Is There a Spurious Trend in CDC Drug Poisoning Data?

There is a great deal of alarming coverage of the recent rapid increase in drug poisoning deaths. (These are sometimes mistakenly called “overdoses”, even though most are drug interactions, and many are normal doses of a drug interacting with infirmities of unhealthy individuals.) Is it possible that this is just another spurious trend, like we see in many time series? Few serious people think that the number of total all-cause deaths (about 2.6 million per year in the US) is greatly miscounted. A body is a body, and most dead people leave one behind. But it’s certainly possible that the cause of death is misattributed for a large number of those bodies.

ICD-9 to ICD-10 Changeover

The coding system for causes of death suddenly changed from ICD-9 to ICD-10 in 1999. Very suspiciously, this is exactly when drug poisoning deaths begin to rise. Of course this could be a coincidence. Maybe the deaths are coded accurately under the old and the new coding structure, and the increase in drug poisonings happened to coincide with the changeover. After all, the biggest driver of the trend, prescription painkillers, saw a surge in prescriptions around this time. Still, it’s worth considering how a changeover to a new coding system might distort the numbers.

If someone told me that a coding change introduced a bias into death totals, I would probably expect a one-time jump (up or down, in this case up) to happen the year of the changeover. Then again, I could imagine that the transition to the new codes takes a few years to get used to. As people realize, “Oh, there’s a code for that!” perhaps the code gets used more often. And if people catch wind that there’s a rise in prescription painkiller deaths, maybe they become more likely to code a given death that way. It’s not what I would have expected and I’m not saying that’s what’s happening, but it’s a plausible story.

Trends in Other Substances

I’ve shared this before, but here’s a refresher. If you look at the most lethal categories of drugs, you see the following: 

[Chart omitted: drug poisoning deaths by year for the most lethal drug categories]

Many of the trends here are consistent with information from elsewhere. The increase in “Other Opioids” deaths follows roughly the same trend as the increase in the sheer tonnage of opioids prescribed (see the graph at the bottom of this page). The decrease in cocaine deaths after 2006 coincides with a drop in self-reported use on drug use surveys (see page 10 of the SAMHSA drug survey here). The drop in methadone deaths after 2006 coincides with increased awareness of the hazards of methadone and a re-labeling of the warning on the package (see here). The increase in heroin-related deaths coincides with an increase in heroin use on drug surveys (page 11 of this document). And the 2014 spike in “Other synthetic narcotics” coincides with media reports of fentanyl being sold as heroin (I suspect many of the “heroin” overdoses are actually fentanyl overdoses.) So at first blush there is a real signal here. But then again, there are trends that aren’t so easily explicable. The increase in “Psychostimulants with abuse potential” is of the same magnitude as other drug categories, but there’s no readily available explanation for this because the illicit use of these substances is down over the past 15 years (see SAMHSA survey I linked to for the heroin and cocaine figures, page 13, for the exact trend; this category of drugs includes methamphetamine and prescription ADHD drugs like Adderall).

If we take a look at some of the less lethal categories of drugs, we see the same rise, even though (once again) there’s no ready explanation for the rise.

[Chart omitted: drug poisoning deaths by year for less lethal drug categories]

See the data table below. There are very large factor increases for some of these drugs, even though there is no ready explanation for the increase.

[Data table omitted: factor increases in deaths by drug category]

Maybe the trend of increasing painkiller poisoning deaths is real, but these other drugs aren’t actually contributing to the cause of death. These figures, from the CDC’s Wonder database on multiple causes of death, overcount death totals if you start adding together drug categories. A single death will be counted in, say, the “Other and Unspecified antidepressants” category *and* the “benzodiazepines” category if both types of drugs were listed on the death certificate. So maybe the trend in prescription painkiller deaths is real and it’s introducing a spurious trend in the others? This story doesn’t quite work, because even if I filter out all the deaths that involve the top 7 most lethal drug categories (Other Opioids, Heroin, Benzodiazepines, Cocaine, Other synthetic narcotics, psychostimulants with abuse potential, and methadone), I get the following count of deaths: 

[Table omitted: drug poisoning deaths excluding the top 7 drug categories, even years]

(Apologies for the incompleteness of this table, but producing it requires opening the full database of all 2.6 million deaths. It then requires filtering out every single death that lacks one of the cause of death codes for one of the seven substances mentioned above, which in turn requires checking each of the up to 20 cause of death codes on each death record. Taxing, even with the help of Microsoft Access and Excel. This is somewhat labor intensive, so I only pulled the even years. It's not something you can pull from the CDC's Wonder database. However, I'd be very surprised if anything substantially different happens in the odd years.)

All these “miscellaneous” drug categories contribute to a death total that increases by a factor of 2.3. Suppose I look at the catch-all category “Other and Unspecified drugs, medicaments, and biological substances,” specifically deaths that *only* include this drug-related cause of death and no others. It’s safe to say there was a great deal of uncertainty in what actually caused these deaths, given that nobody assigned a specific drug category as the cause of death. Anyway, these increased from 2,449 in 2000 to 7,541 in 2012 (once again, I only pulled the even years for this data). This is a factor of three increase in a miscellaneous “We don’t know what the hell happened to this guy” category. This screams “reporting bias” to me. Maybe we’re doing more toxicology screenings than we used to, or maybe medical examiners are more willing to assign “accidental drug poisoning” (ICD10 codes X40-X44) as a cause of death than they were 15 years ago. For whatever reason this is creeping up over time. It’s possible that every single category of drugs is being prescribed at ~3 times the level it was being prescribed in 1999, but I’ve seen no indication of any such increase across the board. Or maybe these are all being increasingly misused by people with drug abuse problems, but that story doesn’t make sense because (again refer to the SAMHSA survey) non-medical use of prescription drugs is flat (even down slightly, dramatically down for youths).
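
For anyone who wants to replicate that filtering step, the logic looks roughly like this in Python. The file name, column names, and layout here are hypothetical (the real CDC files use their own field names and write ICD-10 codes without the dots), so treat this as the shape of the check rather than a working recipe:

```python
import pandas as pd

# Hypothetical layout: one row per death, an underlying-cause column, and
# up to 20 multiple-cause columns (mcod1..mcod20) holding ICD-10 codes.
TOP7 = ("T40.2",  # other opioids
        "T40.1",  # heroin
        "T42.4",  # benzodiazepines
        "T40.5",  # cocaine
        "T40.4",  # other synthetic narcotics
        "T43.6",  # psychostimulants with abuse potential
        "T40.3")  # methadone

mcod_cols = [f"mcod{i}" for i in range(1, 21)]

def involves_top7(row):
    # True if any of the record's cause codes starts with a top-7 code.
    return any(str(code).startswith(TOP7) for code in row[mcod_cols].dropna())

deaths = pd.read_csv("mcod_2012.csv", dtype=str)  # hypothetical file

# Accidental drug poisonings: underlying cause X40-X44.
poisonings = deaths[deaths["underlying"].str[:3].isin(
    ["X40", "X41", "X42", "X43", "X44"])]

# Keep only the deaths involving *none* of the top 7 drug categories.
residual = poisonings[~poisonings.apply(involves_top7, axis=1)]
print(len(residual), "drug poisoning deaths with none of the top 7 categories")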

Trends by State

Another serious problem with the drug poisoning trends arises when one looks at a table of drug poisonings by state and by drug type, for 1999 and then for 2014. There are many categories that have zero deaths in them in 1999 and then suddenly a very large number (dozens or hundreds) of deaths in them in 2014. For example, there are zero “Other and Unspecified antidepressants” deaths in Michigan in 1999 and suddenly there are 149 in 2014. Is this increase real, or were they just not looking for this 15 years ago? If I count up all instances like this (where there were zero deaths in 1999 and there were some in 2014), I get a total of 13,368. This number is subject to the double-counting described above, because a death can overlap several drug categories and the WONDER Multiple Cause of Death database counts these multiple times. There are very few going the other way (zero in 2014 but something positive in 1999); I’m only counting 353 deaths that match this description. If the number were small but nonzero in 1999 and then rose by some large factor, I’d believe it a lot more than if it rose from zero to something in the tens or hundreds. A zero looks extremely suspicious. In fact, there are entire states that have zero overdose deaths (supposedly, according to the CDC’s data) in 1999, but then have some in 2014. I’m counting zero in Iowa, Wyoming, and Montana in 1999 and all these states have dozens or hundreds of deaths in 2014. (I’m counting multiple cause of death codes that start with “T4”, so my counting will probably not match a tabulation that relies on the underlying cause of death codes, the ones that start with “X4”.) And some drug categories had zero deaths from *all* states in 1999 and then had some in 2014; there were zero deaths from “antitussives” in 1999 and then 181 deaths in 2014. I see no drug categories with the opposite pattern, and in fact very few that are decreasing at all. This bias is clearly in one direction: overstating poisoning deaths in more recent years while understating them in past years. 
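
The zero-cell tally is a much simpler query. Something like the following, again with a hypothetical file and column names:

```python
import pandas as pd

# Hypothetical table: one row per (state, drug category) cell, with death
# counts for 1999 and 2014 as pulled from the WONDER multiple-cause data.
cells = pd.read_csv("state_by_drug.csv")  # columns: state, category, d1999, d2014

zero_then_some = cells[(cells["d1999"] == 0) & (cells["d2014"] > 0)]
some_then_zero = cells[(cells["d2014"] == 0) & (cells["d1999"] > 0)]

print("2014 deaths in zero-to-something cells:", zero_then_some["d2014"].sum())
print("1999 deaths in something-to-zero cells:", some_then_zero["d1999"].sum())
```

If the 13,368 vs. 353 asymmetry I describe above is right, the first number dwarfs the second, which is what a one-directional reporting bias looks like.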

I wish it were possible to say with certainty, “Yes, there is a spurious trend in the CDC’s drug poisoning statistics, it has an upward bias, and X% of the trend is spurious with the remainder being real.” Unfortunately the problem is much too fuzzy for anyone to do this. Even though we can’t accurately correct the data for these kinds of biases, we can caution the users of the CDC’s death data. Any policy conclusions drawn from these data ought to be treated with skepticism.

I don’t mean to imply with any of this that the recent increases in prescription opioid, benzodiazepine, and heroin deaths aren’t real. I believe that these are real, if perhaps overstated, trends. The underlying trend of rising opioid prescriptions is real enough, and so there should be a larger population exposed to this (mild) risk of mortality. If there are more opioids in the population, they will naturally be more likely to bump into other medications like benzodiazepines, or into alcohol, which can cause a fatal drug interaction. But we should probably stop pretending that we know the size of the increase with any level of precision, because we simply don’t. When the CDC publishes those charts that show a dramatically upward-sloping line of prescription opioid deaths, it should put a giant asterisk next to it with a footnote saying, “Nobody actually knows the true slope of this line.”

Sunday, May 22, 2016

Bathroom Controversy

The recent fight over who gets to use which bathroom is the latest battle in an obnoxious culture war. It’s a war that doesn’t need to happen. Stop trying to get the government to adjudicate every little conflict. Half the time they’re going to render a verdict you don’t like. I think it’s unfortunate when the government sticks its nose where it has no rightful business, but I can’t help but feel a little schadenfreude for the loser. If you wanted the government to resolve a conflict for you and the resolution isn’t the one you wanted (perhaps it’s the opposite of what you wanted), you were kind of asking for it. We shouldn’t have to explicitly articulate a principle of Separation of Bathroom and State; in a world of mature adults these issues would just never come up in the first place.

It seems to me a building’s owner should decide the bathroom rules. If you don’t like someone’s bathroom rules, don’t enter the premises. Or do so anyway, and if you pass for the opposite gender you can use the bathroom that calls the least attention to yourself. I think that this Iron Law of the Bathroom, “Don’t call attention to yourself or to anyone else,” is actually the one that always rules regardless of what the written law is.

Conservatives need to be more tolerant towards “weird people.” Remember, they think *you’re* the weird one, and they’re just as right as you are.

And leftists are doing tolerance wrong. They are trying to force everyone in society to mix together, even though some groups of people don’t like each other and never will. We aren’t all going to join hands and sing just because you force us into the same space. You need to give people the right to cordon themselves off. If someone wants to carve out an enclave of society, perhaps one where you’re not welcome, perhaps one that you believe is culturally backwards, you should let them.  Stop forcing people to bake cakes they don’t want to bake. (What’s that word for “forced servitude”? I forgot.) And stop dictating how people use their private property. (The word for “taking something that doesn’t belong to you” also escapes me at the moment. Sorry guys.)

“But,” you say, “some of these buildings are owned by the government. Schools, for example. So the government can’t simply extricate itself from this matter.” Fair point. This is indeed a weakness of having too many of our institutions in the public sphere. Irreconcilable differences can’t be reconciled, because everyone collectively owns a share of these public institutions. This is a bug, not a feature, of the institutions you (probably) support, and you’re a bigger person for admitting it. Apology accepted.


Sorry, that last paragraph was unnecessarily snarky. But please take my point about the inherent weakness of public institutions. If we all collectively own them, it becomes impossible to resolve some conflicts. This isn’t true in the private sphere, where you can simply leave any employer, retailer, church, club, friendship, or any other institutions you don’t approve of. 

Minimum Wage – Which World Do We Live In?

Proponents of the minimum wage need to carefully consider what the current state of the world is. Many of the arguments for raising (or even for keeping) the minimum wage are really pretty sloppy. Consider two possible states of the world:

1)      Employers easily have enough money (in the form of profits or something) to pay the higher wages to low-wage workers.

2)      Employers don’t just have the money lying around; they’d have to raise prices. A minimum wage is necessary because all employers have to raise prices together, or else no one will.

Or consider a related pair of mutually exclusive possibilities:

1)      Low-wage employees actually create value much higher than their wage for society, and employers unfairly capture much of that value for themselves.

2)      Low-wage employees earn a wage that is commensurate with the value they create. Legally increasing this wage means forced charity, paid by customers and employers to employees.

Think about the first pair for a moment. If 1) is true, then you should see a large excess profit for businesses that hire a lot of low wage workers. Entire industries that rely on low-wage labor should also have very high profit margins. They don’t. Wal-Mart, that famous bully of low-skilled workers, has a profit margin in the 2-4% range over the past ten years. Look them up. You’ll find similar results for fast food, an industry with a ~2-3% profit margin (see here). These companies are mostly earning pretty slim profit margins. If they were underpaying for a major expense (probably the largest or second largest single expense), you’d expect them to be making a killing. If they are underpaying their workers, it’s by a tiny amount. To pay the higher wages, you’d have to do some combination of raising prices, laying off workers, making the remaining workers more productive (“cracking the whip” more, so to speak), and removing workplace amenities that make life on the job tolerable. It is sometimes suggested that raising prices is a no-brainer, but I seriously doubt this argument. If prices rise to pay for a higher minimum wage, then low-wage workers will see some of their higher wages eaten up by higher prices. We have to be a little more precise than that; perhaps you can work it out so that low-wage workers benefit on net and higher-wage workers subsidize their increased pay. Perhaps the proportion of living expenses going towards goods produced by low-skilled labor is so small that the wage increase will be much larger than the living expense increase. But you have to do this with an explicit model and actual data, not the simple hand-waving I usually see in this vein.
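
Here’s the skeleton of the explicit model I’m asking for, in Python. Every number in it is an assumption pulled out of the air, which is exactly the point: you have to plug in real data before the “just raise prices” argument means anything.

```python
# Toy pass-through model; all inputs are assumptions, not measurements.
wage_share_of_costs = 0.30  # assumed: minimum-wage labor is 30% of costs
wage_increase = 0.50        # assumed: e.g., $7.25 -> ~$10.90

# With ~2-4% margins there is no profit cushion, so assume the full cost
# increase is passed into prices.
price_increase = wage_share_of_costs * wage_increase  # 15%

# Assumed: share of a low-wage worker's own spending that goes to goods
# produced by low-wage labor (and thus sees the price increase).
exposed_spending_share = 0.25

cost_of_living_increase = price_increase * exposed_spending_share
real_gain = (1 + wage_increase) / (1 + cost_of_living_increase) - 1
print(f"prices up {price_increase:.0%}, "
      f"worker's real income up {real_gain:.1%}")
```

Under these particular made-up inputs the worker gains on net, but the conclusion is hostage to the assumptions, and the model ignores the layoffs, whip-cracking, and lost amenities listed above. That’s the modeling work the hand-wavers owe us.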

Now think about the second pair. Suppose 1) is correct. If low-wage employees are capable of creating value that is tremendously higher than their wage, they should be able to do without their employers and sell their labor on the open market. You should ask why you don’t see much of this, or why you don’t see massive improvements in earnings for the rare individuals who try it. Alternatively, these workers should be able to form a competing company that takes all the good workers by offering a slightly higher wage. The stock owners and highly-paid management, who are supposedly capturing this excess of value added minus wages, should be willing to earn slightly less than a competitor earning the full excess value, and another competitor’s leadership slightly less still, and so on until the excess profits are competed away. Any opportunity to earn excess profits should be competed away as competitors try to grab a slice of it. If this doesn’t happen, it’s at least a mystery that needs explaining.

Perhaps it’s not the stockholders and executives who are capturing the excess value, but rather the customers. This view would be more consistent with the claim that we can “simply raise prices” to cover the higher minimum wage, and it’s consistent with the low profit margins *actually* experienced by employers of low-wage workers. However, there is still a mystery here, because something is being sold for well below what it’s worth in a deep market with lots of participants. You could make some weird “we’re stuck in a bad equilibrium” argument, but that makes little sense because such an equilibrium would not be stable. If workers are capable of generating $15 worth of value an hour and are only getting paid $7.25 in some industry, they should leave that industry for better paying ones, and the low-balling industry should raise its wage until it attracts back the workers it needs. Markets abhor a mispriced commodity just as nature abhors a vacuum. You have to somehow explain why a vacuum isn’t getting filled in, and your explanation has to be consistent with real-world observations.

So in the second pair of possible states of the world, if 1) is implausible then 2) must be the case. The case for a minimum wage is still salvageable, but it really undercuts the moral case for the minimum wage when you admit that you’re forcing someone to pay charity to someone else. It at least implies that we need to stop haranguing the employers of low-skilled workers; it’s not them but their customers who are capturing that (supposed) excess value. It’s harder to bluster that “The worker is getting screwed by the big players!” when you admit that they are basically earning the equivalent of the value they create for their employers. In this world, where employers *aren’t* earning a big surplus on their low-wage workers, the higher wages must be paid by customers. Keep in mind that higher prices mean customers will buy less (of whatever commodity we’re talking about), and the result will be less demand for the workers in that industry. If minimum wages are forced charity, let’s admit that they are forced charity, and let’s consider fairer and more efficient ways to redistribute money to low-wage workers. Maybe an enhancement of the earned income tax credit, or a direct subsidy of low-wage labor, or a minimum income guarantee. Whatever we’re doing to help the poor, it should be on-budget and be paid for by everyone, not just by customers (or employers) in a few industries. We ought to stop dicking with the labor market and address distributional issues directly.

My own view is that wages are mostly fair, for low wage workers and for those much-reviled highly paid executives. Employers of low-wage workers actually enhance their (the workers’) earning power by supplementing their labor with enormous amounts of capital. This capital includes machinery, buildings, shelves for stocking consumer goods, vehicles, and less tangible things like brand quality and market expertise. Highly paid executives actually *increase* the wages of low-wage workers by making them more productive; a company that failed to shell out for talented management would go out of business or slog along inefficiently. Those claims that high executive pay comes at the expense of the workers have it backwards; those workers wouldn’t earn as much without those highly-paid executives. We aren’t stuck in some weird equilibrium where workers are underpaid and employers or customers are capturing the excess value, because such an equilibrium would be extremely unstable. Profit margins are pretty slim, so it’s hard to make the case that employers are profiting from exploiting these laborers. You can take or leave some of these observations, but whatever policy you favor for helping poor laborers needs to keep these things in mind. We could have a much more vibrant and active market in low-skilled labor. I think it would be possible for someone to make a living doing a lot of one-off tasks for less than the current minimum wage. Many people who are on the margins of society could be better off than they are now; picking up $20 today doing some menial tasks is better than being completely unemployed and earning nothing. That option has been legally forbidden for many people.


There are tasks worth doing that don’t return $7.25 in value for each hour of labor spent doing them. Minimum wage laws serve to ensure that no coordinated effort to do them will ever happen. It's a shame, because there are a lot of these low-value tasks and a lot of unemployed low-skilled labor in the world. If only we could bring them together somehow.

Monday, May 16, 2016

"Opioid Epidemic" - The NYT Editorial Board Joins the Herd

The New York Times gets it wrong on the opioid “epidemic:”

http://www.nytimes.com/2016/05/16/opinion/congress-wakes-up-to-the-opioid-epidemic.html

Every major media coverage of this story gets a lot of important details wrong. Every single one omits vital facts that undermine the story. Frankly I’m getting frustrated with this nonsense. Here’s how it opens:

The opioid epidemic is now a leading cause of death in the United States, ravaging communities across the country.

Huh. “A” leading cause? Note the indefinite article “a” rather than “the.” It is most certainly not “the” leading cause, unless you fixate on some tiny sub-demographic of age/race/geography where it does happen to be the leading cause. There were 2.6 million deaths from all causes in the United States last year. The leading causes of death are heart disease and cancer, and these beat drug poisonings by a very wide margin. You have to do some serious cherry picking in the “young adult” demographic to find a subcategory where drug poisonings top the list.

Opioids, a category of drugs that includes heroin and prescription painkillers like oxycodone, killed more than 28,000 people in 2014, and the rate of overdoses has tripled since 2000, according to the Centers for Disease Control and Prevention.

The 28,000 number is combining heroin and prescription opioids (which to the NYT’s credit is clearly stated). This combination of different substances reeks of bad faith. It's adding together different things to get an inflated (thus more shocking) number. These are very different problems with very different causes. Heroin use is a fairly clear indicator of a drug problem or some kind of risky behavior, but many (I would claim *most*) of the prescription opioid poisonings are due to drug mixings by patients with legal prescriptions who *don’t* have an addiction or drug-abuse problem. A very large proportion of these drug poisonings are mixtures of opioids and benzodiazepines or alcohol. A simple stern warning not to mix these drugs would likely prevent a large number of these deaths, but the NYT has failed to meaningfully inform its readers on this point. That’s a real shame, because unlike policy recommendations, “never mix X and Y” is advice that a reader can actually use. The very next sentence is:

Almost two million Americans abused or were dependent on these drugs in 2014.

A number which hasn’t risen significantly since 1999, when opioid prescriptions and opioid deaths began to rise. The NYT editorial board should have shared this information with its readers, because it completely undermines the rest of the piece. If you can quadruple the number of prescriptions (and the sheer tonnage of opioids consumed) and *not* get any more addicts, then apparently these drugs are a lot safer than these scare stories imply. By the way, the SAMHSA survey that the 2 million number comes from also shows a breakdown of the source of prescription painkillers. 53% of respondents said they got them “free from a friend,” which is not a reliable supply for a hard-core addict. Two million sounds like a lot, but it’s 0.7% of the population. And (also from the SAMHSA survey) there are about three times as many people who use these drugs every year without ever experiencing an abuse disorder. Maybe two million painkiller addicts is a big deal, and perhaps the treatment programs this piece is advocating for would be worthwhile. But it’s not a growing problem and it can’t be attributed to the increasing volume of prescriptions.

The country is facing a health emergency, and it would be tragic if a self-imposed budget rule got in the way of a robust federal response.

More pointless hyperbole.

The federal government can make the biggest difference by expanding high-quality treatment programs.

Fair enough. I don’t particularly object to this. However:

States, which have more sway over doctors and hospitals, need to do more on the prevention side by placing limits on opioid prescriptions.

This will likely lead to *more* overdoses. Limiting the legal supply will drive many addicts to the black market. Limiting the legal supply is what makes heroin and cocaine so deadly. Let doctors decide which cases are appropriately treated with painkillers and which ones are appropriately addressed by some other treatment. Let’s not allow police, bureaucrats, or the NYT editorial board to interfere in decisions where they lack the relevant expertise. A bland executive directive to "decrease the volume of opioids prescribed" is likely to backfire and cause more deaths.

“Many people become addicted because of being prescribed an inappropriate amount of opioids or for too long,” said Gov. Maggie Hassan of New Hampshire.

A politician joins the chorus of scare-mongers. Not surprising. Once again, the increase in opioid prescriptions hasn’t appreciably increased the number of illicit users *or* the number of addicts.

This editorial is like so many others I see on the opioid “epidemic.” It’s filled with some true facts and accurate numbers, but also some glaring, inexcusable omissions. The facts are held together with a mistaken narrative. It’s incredibly lazy reporting.

My take is very different from the dominant narrative: We have a useful treatment for pain management in the form of prescription opioids. As with almost any medical treatment, there is some probability of death given the treatment. That risk has not changed at all: a quadrupling of the treatment’s use has roughly quadrupled the number of deaths. It is bad public health policy to simply count up total deaths and worry when the number gets big. A total number of deaths (which, trivially, would quadruple if the population quadrupled and nothing else changed) is an irrelevant public health consideration. The relevant question is: What is the acceptable risk for a given treatment? If opioid use for pain management was an acceptable treatment in 1999, it’s an acceptable treatment today, because the risk per prescription hasn’t changed appreciably.
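
The arithmetic is trivial, but it’s worth spelling out with purely illustrative numbers (these are not actual CDC figures):

```python
# Illustrative only: a constant per-prescription risk quadruples the death
# count when prescriptions quadruple, so the raw count says nothing about
# whether the treatment got more dangerous.
rx_1999, deaths_1999 = 50_000_000, 4_000
rx_2014, deaths_2014 = 200_000_000, 16_000

print(f"risk per prescription, 1999: {deaths_1999 / rx_1999:.4%}")
print(f"risk per prescription, 2014: {deaths_2014 / rx_2014:.4%}")
# Both print 0.0080% -- same risk, four times the headline death count.
```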


What bothers me most is that media outlets fail to inform their readers in a way that might *actually* reduce the number of drug poisonings. Only ~20% of prescription opioid deaths involve just one substance; most of these are multi-drug interactions, with benzodiazepines and alcohol being the biggest offenders. Warning the reader against potentially dangerous drug interactions would go a long way. Heroin poisonings, by the way, also usually involve multiple substances. Of all heroin poisonings, heroin *by itself* is the culprit only about 30% of the time (a true “overdose,” as opposed to a multi-drug poisoning). It might help to stop calling these "overdoses" and clarify that there are dangerous combinations of drugs, even if each is individually ingested at a safe dose.

Sunday, May 15, 2016

An Appreciation of Negative Role-Models

I want to give a big, heartfelt “Thank you” to all the negative role-models who have influenced my life. The thought of doing something anti-social does occasionally occur to me, sometimes even reaching the level of “temptation.” If it weren’t for you, my accidental moral tutors, I wouldn’t have a shining example indicating “This goes on the ‘Not Do’ list.”

For example, the thought of losing my temper, yelling at my wife, and calling her stupid (all in public) would *never* occur to me. But if for some reason it did, I’d simply recall your example. I might say, “Ah, *that’s* how it would look. I’m not going for ‘boorish oaf,’ so I’ll pass.”

Suppose I were tempted to air dirty laundry about a family member or loved-one on social media. Well, I’d once again be checked by my voluminous library of negative role-models. I’d immediately think better of it. As in, “Ah, that would just make *me* look stupid. And it would backfire, because everyone would feel sympathy for them and contempt for me.” Thanks for showing us all how bad this looks so that none of us who have seen your example will ever do it!

Let’s say I wanted to post a thinly-veiled sexual innuendo on an attractive female friend’s profile-pic. Or openly discuss my vices of questionable legality on social media. Or procrastinate on an important project or decision at work while browsing the web all day (or even all week). Suppose I were to greet every stray thought that crossed my mind with, “That’s a good one! I’ll broadcast it to the world!” Suppose one day I decided, “I’ve had it backwards. I’ll drink first, *then* go to work.” Suppose I were to hit my kids, not as a carefully meted-out method of discipline, but because I can’t control my temper. If I were ever to think, “I’ll just burn down this old bridge to my former employer in the most offensive way imaginable,” I’d have an easily referenced case study on exactly why that’s a terrible idea. I think my list of "things not to do" would be pretty well informed by common sense and logical reasoning, but a steady stream of ugly real-world examples has made certain behaviors unthinkable.


I have been blessed that the negative role-models in my life are distant enough from me that they do not harm me directly. The anti-social behaviors and bad decisions described above do real harm, particularly to those poor souls close to the person exhibiting these behaviors. I feel pity for their victims and for the responsible adults who have to take their shit and mop up after them, but I hope the rest of the world is with me in observing an example of what *not* to do.

Legalize Production and Sale of Recreational Drugs

I’m quite happy to see that criminal justice reform is a major political issue and drug legalization is becoming more respectable. There is a serious push to fully legalize marijuana. There is also serious talk of decriminalizing other drugs and treating their abuse as a public health issue rather than a criminal justice issue. My concern is that reforms will stop short of the reforms truly needed to clean up the drug problem: legalizing the production and sale of all currently illegal drugs.

So-called “drug-related harms” are most often effects of prohibition, not effects of the drugs per se. Heroin is deadly because the users do not know what dosage they are taking. They are also unaware of any adulterants, and that has very recently become a serious problem. The heroin overdose death rate has spiked in very recent years because suppliers have been spiking their product with fentanyl, which is somewhere between 30 and 50 times as potent as heroin. The margin for error with such a product is small, and it’s being mixed by amateurs. That’s a deadly combination. The spread of HIV/AIDS in IV drug user communities is the effect of a prohibition on the sale of clean needles. The problem has been mitigated in recent years because the government has allowed grey-market (technically illegal but tolerated) needle exchanges to operate, but an IV drug user *still* can’t walk into a nearby Walgreens and buy a clean needle. Intravenous drug use is itself an outcome of drug prohibition. Prohibition drives the market price of heroin very high (an *intentional* outcome of the drug war), and the result is that IV drug use is the only economical way of actually getting high. American soldiers in Vietnam had nearly unlimited access to extremely pure heroin, and they mostly snorted rather than injected it. If that were the world we lived in today, where heroin was cheap enough that high-seekers could easily purchase a snortable dose at Walgreens, IV drug use would be a fringe phenomenon or would cease to exist at all.

Other drugs that kill significant numbers of people are cocaine and meth (amphetamines are listed as “Psychostimulants with potential for abuse” in CDC death data, lumped together with similar prescription drugs like Adderall). These aren’t typically taken intravenously, but many of the harms related to these drugs could still be mitigated under a legalization regime. Crack cocaine owes its existence to the drug war’s efforts to increase the market price of the drug; this is exactly analogous to the IV heroin use phenomenon. The price of the drug is driven so high that the only economical way for users (particularly poor users) to get high is to smoke it. The harms related to these drugs are grossly exaggerated, but it’s certainly the case that they could be made safer. The production side in particular could be made much safer. Instead of having thousands of meth labs in residential buildings, you could have one or two large meth factories supplying the entire US market. You wouldn’t have residential properties being poisoned (along with their inhabitants) by the chemicals used in meth production, and you could (once again) ensure that the final product is free of adulterants.

There are probably people who are so impulsive and so prone to self-harming behaviors that they will still use drugs to hurt themselves, even under a legalization regime. Our goal should be to make these people *safer*. We should make it *harder* to accidentally overdose. The current regime makes it far easier. The drug warriors have effectively tried to deter drug use by making the drugs far more lethal than they would otherwise be. I don’t know how many of them explicitly state this as their goal, but given that this is a predictable consequence of the policy they’ve visited upon us, I think it’s appropriate to blame them for a large share of the overdose deaths that happen every year.

We will never achieve the goal of reducing drug-related harms without legalizing them. Imagine if you could purchase heroin from a pharmacy. The box could be labeled with the exact dosage, which was mixed by professionals in a regularly audited factory. A clean vial could be sold in the same box as a clean needle, and a disposal center for the previous needle could be right there in the same pharmacy. “This product does not contain fentanyl!” could be printed on the box, and that promise could actually be made credibly. The box could remind users “Do not take with alcohol, benzodiazepines, or the following other drugs… Tell the pharmacist if you are taking any medications.” Naloxone, the antidote to an opioid overdose, could be available for sale along with clean heroin. Of course, the pharmacist could ask for ID, possibly a prescription or a licensing card certifying that you’ve taken the mandatory safety course (a harm-mitigation policy that’s once again *only* feasible in a regime of legalization).

Perhaps you could pick up a few other items while at the store. You could pick up some coca tea, mixed at a concentration low enough to get you just shy of the buzz you’d get from a Starbucks coffee. Or perhaps you get the same active ingredients in a lozenge. You may as well pick up a low-dose amphetamine while you’re at it. There is nothing inherently dangerous about any of these chemicals, and there is nothing inherently harmful or immoral about using them recreationally. But the full slate of harm-reducing measures can only be implemented in a regime of legalization on the supply side.

“Decriminalization”, the impotent little sibling of legalization, won’t get us there. Not even close. Decriminalization would certainly be an improvement over what we have now, but it will leave most of the major drug-related harms in place. Half-hearted reformers who wish to decriminalize only possession (but not production and sale) are not very compassionate. It’s as if they recognize that recreational drug users are walking through a mine field and they insist on digging up only a fraction of the mines. It’s as if they think the remaining mines only detonate when suppliers and dealers step on them. No, those mines meant for suppliers hit the consumers, too, through the mechanisms described in the above paragraphs. This is what false compassion looks like.


The voting public needs to get over its puritanical belief that it’s wrong to make a profit from someone else’s vice. I think many people, drug warriors and reformers alike, have a sense that drug users are victims and the suppliers are evil people who harm the users against their will. This is mistaken. It is the user who chooses to consume a recreational drug. Even if you insist that the user is an addict who can’t control their behavior, they at least made the choice to risk becoming an addict the first time they indulged. The real vice here is moral posturing, and the public is indulging this one to the hilt. And people are dying because of it.

Tuesday, May 3, 2016

Emotional Argument as a Commitment Strategy

There are two basic strategies for arguing with someone you disagree with.

The first is to presume good faith on the part of the person you disagree with, treat them as a mostly honest truth-seeker, and employ logical reasoning, facts, figures, mathematics, and statistics to convince them. This strategy requires that you check your emotions and refrain from morally judging the person with whom you argue.

The second strategy is to claim the moral high-ground as soon as the argument begins, and get very upset at the person who holds an unapproved opinion. You presume (or at least *act* as though you presume) bad faith on the part of the person you disagree with. And you cast them as culturally backwards, or otherwise stupid or wicked.

I think the second strategy is sometimes called for. I see it as a commitment strategy. Sometimes you have to get something done and there is no time for the niceties of an open inquiry. Suppose you are standing up to the king of the most powerful nation on earth. You could write “A Philosophical Treatise Defending Novel Moral Claims of the Signatories.” Or you could write, “We hold these truths to be self-evident…” and berate the king for his misbehavior. You commit strongly to a position, and you signal that you are unwilling to reconsider. It’s perfectly rational to do this in some contexts. If someone proposes a return to state-sponsored racial segregation or the death penalty for marijuana use or mandatory licensure for the act of becoming a parent, there is no reason to take them seriously. If a proposal is self-evidently morally despicable, it’s sometimes appropriate to simply shout it down.

The problem arises when you jump the gun too early. You may commit to the *wrong* proposals, and it might turn out that the people you are insulting actually have something meaningful to say. If you always use the emotional commitment strategy, you end up looking silly, because half the time you’re probably wrong, which poisons your credibility for the other half of the time. There are people who are willing to change their minds and who respond to information and logical arguments; I’ve seen people change their minds very quickly when they realized they were mistaken about something. On the other hand, few people respond well to being scolded about their misguided values. You may manage to shut someone up by making certain ideas unwelcome, but you are very unlikely to change hearts and minds with this approach. In fact, this strategy is likely to leave a silent majority seething with indignation, but still lurking in the background. They don’t publicly air their unwelcome ideas anymore, but they still hold onto them. They may even vote a walking avatar of their bigotry into office.


Personally, I’d like to see quite a bit more of the first approach. Be a little more willing to consider ideas you don’t like. People who hold them aren’t all stupid or evil. At least understand the arguments of their more articulate proponents. Nothing important is going to change unless we all do this a little more often. Moral progress happens because some of those “morally despicable” ideas turned out to be right. 

(I wrote most of this post over a year ago. With Donald Trump now a stone's-throw from the Oval Office, I couldn't resist sharing.)

The Wrong Way to Trim Government

Our large, interventionist regulatory-welfare-education-police state scares me, but I think a big regulatory-welfare-education-police state that waxes and wanes at random scares me even more.

I favor smaller government, but I don’t necessarily favor the way it’s getting done. Brinkmanship over budget deficits and funding is leading to very sudden shut-downs of government agencies. It’s happening in a big way in Illinois, and it’s looming at the federal level, too. I’d prefer to see a slow, deliberate phasing out of bad programs that don’t pass a cost-benefit analysis. Some government programs are so destructive and wicked that they should be ended immediately, but others are benign (even if expensive in $$$ terms). The benign ones could be slowly phased out so that people have more time to adjust to changing circumstances.

While a government shut-down that happens abruptly really is disruptive, people are wrong to bemoan the disappearance of some of these programs. The assumption is that if the government doesn’t do it, it won’t be done at all. That assumption is wrong for many of the things government does. Government hires people to perform a service, and collects money from the (supposed) beneficiaries of that service. Person A performs a service for Person B, with the government serving as an intermediary. There is no reason to assume that A and B can’t find each other without the government’s help. Schools can still find students, and vice versa, without government acting as an intermediary. Someone can build a road between points X and Y, for any such cases where a road between X and Y is useful, and the users of the road can pay the cost through tolls. Alternatively, businesses can build roads in anticipation of increased traffic to their storefronts. Government provides few true “public goods,” those non-excludable non-rivalrous goods that free markets are (again supposedly) unable to supply. If a government program shuts down and Person A and Person B stop exchanging services and payments, it probably means that those services aren’t actually worth what’s being paid for them. It probably means that Person B values the services of Person A at *less* than what A is currently getting paid. That’s a program that really should come to an end. If we move more of society’s resources into the private sphere where exchange is voluntary, then we could expect only those exchanges where the service is worth the price to continue.

Of course there are exceptions, and I’m willing to entertain the idea that *some* government programs provide true “public goods.” It’s possible that some government programs are cost-justified but wouldn’t happen in a free market because of externalities and public goods considerations. But that’s all the more reason to start trimming, and start trimming *now*. Cut out those programs that aren’t obviously solving public goods problems. That way the ones that *are* don’t get cut every time there’s a budget stand-off. I happen to think we can get away with having a minuscule government, perhaps 5% of its current size, or even 0%. (I’m far more confident about the 5% than the 0%, but I’m comfortably agnostic about the exact location of the optimum level, at which point further cutting would suck more than it helped.) But suppose I *did* want a larger government. If I truly believed we needed a large interventionist state to solve externality problems, I would be very pissed off at all the wasteful government spending that made those necessary government programs unaffordable. I’m disturbed by the near complete absence of fiscal hawks on the left, who should be saying, “Let’s be good stewards of taxpayer dollars. Let’s preserve the moral legitimacy of the state. Let’s push back against public sector interest groups that take more than their fair share. Let’s even trim some ‘nice to have’ government programs in favor of the ‘need to have’ government programs.” Whatever your goals are, and whatever you think is the optimum size of government, there are finite resources to be managed and allocated. We can’t do away with trade-offs.


These next few years will be interesting. Budget standoffs are looming in a big way, in my state (Illinois) and also at the national level. I think some people will realize that they actually can get on with their lives without government when the need arises. Some people will go through a painful adjustment process, but they’ll get back on their feet once they realize those government dollars aren’t coming back. Some people will be thrown out of their state jobs (or state-subsidized jobs), but assuming they have useful skills they will find employment elsewhere. Some folks will figure out that they can actually get by just fine without their subsidies, and in fact those subsidies may have been holding them back. Even if the budget cuts never in fact materialize, the prospect of big budget cuts will inspire some people to switch to more secure work (I know for a fact that some people are doing this already). This isn’t my preferred approach to cutting government, but I think it’s the logical outcome of an irrational political system. A rational electorate that asked only for cost-justified programs and disciplined budgets would not see this kind of instability. What we actually have are irrational mobs demanding always more for their coalition, making unreasonable demands and adopting a stance of unwillingness to compromise. It’s a good way of getting what you want, if you don’t really care about the rest of society or even your own long-term interests.  

Monday, May 2, 2016

Seeking a Fair Framing of the Pension Crisis

I see a lot of these “a pension is a promise” posts. Some observations and some questions.

The Illinois state government didn’t save sufficient funds to pay these liabilities, so the entity that made the promise can’t pay for it. The money just doesn’t exist. That means that Illinois taxpayers must foot the bill for these liabilities. So far that’s just an observation. I’m not saying they “should” or “shouldn’t”, but let’s frame the discussion in terms of who owes whom in a way that acknowledges the interests of *both parties*. In a conflict of rights, it’s fine to argue that one party’s rights supersede another’s, but it’s rude and dismissive to pretend that one party’s rights don’t even exist. It seems to me that there’s a missing argument here. What needs to be argued for is being implicitly assumed, perhaps so the speaker can rhetorically avoid the uncomfortable confrontation with the counter-party.

My questions:

1)      Do Illinois citizens have a duty/responsibility/obligation to pay for unfunded liabilities created by the state of Illinois?

2)      Does this obligation exist even if the conditions creating the unfunded liabilities were put in place before you were born?

3)      Does the obligation disappear if you move to another state? Or should the obligation follow you to whatever state you move to? Do you have a moral but legally impractical/unenforceable obligation to pay the unfunded liabilities whether you move or not?

4)      When your labor leaders were negotiating these huge pension payouts, didn’t they have an obligation to either 1) increase *current* taxes or 2) increase payroll deductions to fund them?

5)      Doesn’t the failure mentioned in 4) shift *some* of the blame back to the pensioners and their labor leaders?

6)      A presently unfunded liability is, quite predictably, a future catastrophe. If you sign up for that deal, don’t you tacitly agree to assume *some* of the pain from that future catastrophe (given that it *was* predictable in the first place)?

It might make some sense to “spread the pain around” a little. We don’t have to gore just one person’s ox. There is plenty of ox to gore here. It probably makes sense to make Illinois taxpayers bear some of the costs, and make pensioners bear some of the costs in diminished benefits. (The diminishment should be proportional to the distance from retirement for future retirees and proportional to ability to find gainful employment for recent retirees, so that people who are in a better position to bear the cost do so. A retired octogenarian pensioner shouldn't be cut off from their only income source, but a 30-year-old future pensioner should be forced to plan as though the promised pension isn't there, which it isn't.) But please, let’s not anybody say, “Hey, don’t gore my ox *at all*.” If you are hostile to any reframing of this issue that acknowledges the counter-party (whose ox you insist on fully goring), you aren’t thinking clearly.

All I’m seeking in this post is a fair framing of the discussion. I’m not trying to adjudicate who is right, or who should bear what fraction of the burden. So read all of the above with that in mind. But I’d be remiss if I didn’t share my own opinions. I think the parts above stand, regardless of how, in particular, I answer all the questions I raised. That being said, what follows is my short summary of the political economy that led to this crisis. It’s not flattering to labor unions or politicians.


My own view is that past politicians and public sector labor leaders made a very cynical deal. Politicians wanted to buy the votes of public employees by promising higher pay (a pension is merely deferred salary). Illinois taxpayers weren’t willing to absorb a tax increase necessary to pay for this bribe. Politicians, knowing the crisis would come only decades later, promised big pension payouts instead of current salary increases. Public employee unions, recognizing that taxpayers wouldn’t shoulder a big tax increase, accepted the “promised” pensions, assuming that future politicians would increase taxes when the time came. This isn’t to say that all state employees were explicitly thinking in this cold, calculating manner every day, but this sinister deal was always there in the background. Now it’s blowing up. Everyone was behaving cynically. (One could argue even the taxpayers have behaved cynically, insofar as they’ve wanted to kick this can down the road rather than addressing it when the problem became obvious.) I wish the problem had been brought to a head decades ago when it was manageable, rather than today when it’s a full-blown catastrophe. But here we are.