Sunday, July 28, 2019

Prescription Opioid Abuse Trends in the 1980s and 1990s

Most discussions of the so-called "opioid epidemic" confine themselves to the period from 1999 to the present. This is mainly due to data limitations. 1999 is the first year the CDC coded deaths according to the ICD-10 system; the years 1978 to 1998 were coded under ICD-9 and are not directly comparable, so most tabulations start in 1999. And the SAMHSA reports, like this one, present time series of opioid abuse rates starting in 2002. I've shared this chart from the SAMHSA report in a few previous posts:

Plainly, prescription opioid abuse is flat, then declining, over this period. (The SAMHSA survey refers to opioids with the dull term "pain relievers".) And, looking at the CDC data, it's clear that drug poisoning deaths are increasing over this period. But it makes little sense to restrict ourselves to this time frame. Attitudes about opioids and prescription practices supposedly started changing in the early 1980s, with doctors becoming more relaxed and more willing to prescribe. A very short letter published in the New England Journal of Medicine in 1980 supposedly kicked off the change in attitudes toward opioids. By starting in 2002, we would miss two decades of the relevant trend-line. There is another survey that tracks opioid use, Monitoring the Future, which goes back to the mid-1970s. But it only tracks 12th graders, who won't necessarily be representative of the older patients (people with chronic pain, limited mobility, and various infirmities) who are most likely to be affected by loose prescribing practices. Here it is, anyway:


Plausibly, this tells a story of opioid abuse rising in the 90s, from ~3% to ~9%. But then, in the 2000 to 2010 period, when opioid prescriptions tripled, the abuse rates are roughly flat, even declining. And abuse rates are trending down in the 1980s. One has to cherry-pick carefully to make it look like prescription practices are correlated with opioid abuse rates. Also problematic: "past 12 month use" patterns might be very different from, say, addiction rates or rates of more frequent drug use. It could be that infrequent, casual drug use is increasing but not really causing a problem.

The easily accessible data, for the early 2000s to present, seems to debunk the standard narrative of the "opioid epidemic." My most popular post ever is a long take-down of this narrative. Also see this Cato paper, which I had the honor of helping to write. Opioid abuse rates are simply not correlated with opioid prescription rates. In fact, when restrictions on prescription opioids started to bite around 2010, heroin overdoses started to skyrocket, followed promptly by fentanyl and other super-opioids. Some proponents of the standard narrative respond to this by speaking of "stocks" and "flows" of addicts. In this story, the 1980s and 1990s left us with a stock of opioid addicts. The increase in prescriptions in the early 2000s didn't much change the rate of opioid abuse, because we were already saturated. (I'm guessing here; I can't find an instance of someone coherently articulating this story.) Then, opioid restrictions starting around 2010 drove the existing stock of addicts to heroin and illicit fentanyl. Proponents of the standard narrative can still claim that, if it hadn't been for the relaxation in prescription standards starting sometime in the 80s and 90s, we wouldn't have the current crisis. But this depends on speculating about what was happening during that time frame. Data is almost never forthcoming, so I've had to check for myself.

My previous attempt to reach further into the past is written up here. I excerpted a chart from Lies, Damned Lies, and Drug War Statistics that shows lifetime use for "nontherapeutic use of prescription drugs." Even there, the timeline only goes back to 1990, and "lifetime" use is problematic for the reasons described above. Also, it includes drugs other than prescription opioids (mostly benzodiazepines, barbiturates, and ADHD medications).

Then I found these files: the National Household Survey on Drug Abuse, going back to 1979. One can open each year's study (on the right side of the page), click on the link under "Datasets in this Study", then click on the Codebook.pdf link under Dataset Documentation, and there is a useful summary of that year's findings. Here's what 1979 to 1998 looks like:

It's hard to say there's an obvious trend-line. I mean, clearly the 1998 point is higher than the point for 1979. Crudely running a regression yields a very slightly upward-sloping trend-line (though the time trend is not statistically significant by the traditional p < 0.05 standard; I'm getting p = 0.60). But it just looks fishy. Note that the early surveys were only done once every three years, then became annual in 1990. Did opioid use really triple from 1979 to 1985, then plummet in 1988? Something about this screams "bad methodology", or at least "changing/non-comparable methodology." It seems like SAMHSA was just getting its shit together in the early days, and these data represent some kind of reporting bias rather than real trends.
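(For the curious, here's the kind of crude regression I mean: past-month use regressed on survey year. This is a minimal sketch in Python, and the rates are illustrative placeholders, not the actual survey figures.)

```python
# A minimal sketch of the crude trend regression. The past-month use
# rates below are illustrative placeholders, NOT the actual survey figures.
import numpy as np
import statsmodels.api as sm

years = np.array([1979, 1982, 1985, 1988, 1990, 1991, 1992, 1993,
                  1994, 1995, 1996, 1997, 1998])
use_pct = np.array([0.5, 1.0, 1.5, 0.6, 0.8, 0.9, 0.7, 0.8,
                    0.9, 0.8, 0.9, 0.9, 1.0])   # hypothetical "% of respondents"

X = sm.add_constant(years)       # intercept plus a linear time trend
fit = sm.OLS(use_pct, X).fit()
print(fit.params[1])             # slope: change in the rate per year
print(fit.pvalues[1])            # p-value on the time trend
```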

Here is what happens in the 2002 to present era:

The trend in this chart matches the chart pulled from the SAMHSA report at the top of this post. "But, this chart says 3% of people used prescription opioids in the past month, at least for the flat part from 2002 to 2010. The chart at top says it's hovering around 2% for 12+ year olds. What's the difference?" This point initially confused me. The chart immediately above is "% of respondents." I think the SAMHSA report is taking this raw survey data and restating it to reflect the distribution of the American population. So if the survey over-samples young people (who use drugs at higher rates), the "% of respondents" will be high compared to actual rates of drug use in the population. I assume some smart people at SAMHSA thought of this and restated "% of respondents" to reflect "% of U.S. population." There must be significant differences in year-to-year sampling, because the downward trend from 2010 to 2014 is more intense than in the top chart. Here's a telling excerpt from the 1988 codebook:
In 1979, respondents from rural areas were oversampled, and in 1985 and 1988, blacks and Hispanics were oversampled to increase the reliability of estimates of drug use of these important groups.
In this light, the three points from 1979, 1982, and 1985 make a lot more sense. Clearly the SAMHSA people think these populations differ in their rates of drug use and are changing their sample to collect the right data. But this makes "% of respondents" non-comparable from one year to the next; a toy example of the problem is sketched below. If someone has taken the 1979 to 2001 surveys and converted "% of respondents" to "% of U.S. population", I haven't found it. (Maybe this is a useful project for some grad student. See the bottom of this post for ideas on exploring this further.)
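(Here is that toy example, with invented numbers. If rural respondents use at a lower rate than urban respondents, then oversampling rural areas drags "% of respondents" below the true population rate.)

```python
# Toy example (invented numbers) of how oversampling skews "% of respondents".
# Suppose rural respondents report past-month use at 0.5% and urban at 1.5%.
use_rate = {"rural": 0.005, "urban": 0.015}

sample_share = {"rural": 0.50, "urban": 0.50}   # a 1979-style rural oversample
pop_share = {"rural": 0.25, "urban": 0.75}      # hypothetical true population mix

pct_of_respondents = sum(use_rate[g] * sample_share[g] for g in use_rate)
pct_of_population = sum(use_rate[g] * pop_share[g] for g in use_rate)

print(f"% of respondents: {pct_of_respondents:.2%}")   # 1.00%
print(f"% of population:  {pct_of_population:.2%}")    # 1.25%
```

The survey's headline number moves whenever the sample composition moves, even if nobody's actual drug use changed.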

Notice another feature of this data, one which I've discussed previously: the survey changed from asking about "non-medical use of prescription opioids" to asking about "misuse" in 2015. (I have the change marked on the graph, with the "misuse" years shown in blue.) I don't know why they did this. "Non-medical use" means basically recreational use. "Misuse" includes recreational use and medical use not intended by the physician. For example, someone takes more pills than recommended to treat their acute pain, because the recommended dose isn't cutting it. Or someone has left-over pills from a previous surgery and uses them for a sprained ankle. "Misuse" is a more inclusive measure than "non-medical use". It's interesting to note that the trend continues to fall after 2015 even though it's using a more inclusive definition.

I want to be fully transparent here and show you the full time series. I had initially just pulled the data up to 1998 and thought I had a good post worth sharing. But something interesting happens in 1999. The 1979 to 1998 surveys asked about prescription opioid abuse using the somewhat obscure term "analgesics," while making clear that they were not asking about Tylenol or ibuprofen. This doesn't completely leave survey respondents in the dark if they don't know that word; the survey also asks specifically about a list of opioids (Demerol, Dilaudid, hydrocodone...). In contrast, the 1999 to present surveys ask about "pain relievers". If I took the numbers literally, prescription opioid abuse was ~1% in 1998, doubled to ~2% in 1999, hit 3% by 2003, then flattened out for a decade. The sudden jump to 2%, after hovering right around 1% for the prior decade or two, is almost surely an effect of the change in survey wording. I don't know exactly why it would have gone to 2% for a couple of years before jumping up to 3%, rather than just jumping straight to 3% in one shot; I would think a change in survey language would cause a one-time jump. It's possible that use rates really were increasing during this period. Also, once again, the sample population may be changing, such that "% of respondents" doesn't mean the same thing as "% of the U.S. population." So it's hard to say what's really happening. (Note that the figure from Lies, Damned Lies, and Drug War Statistics, which I shared here, also brackets off the 1998 to 2002 period, as if to point out to the reader that there is something peculiar about those years.)



I think it's implausible that opioid abuse actually tripled in those few short years, then flattened out. This doesn't match any version of the opioid abuse narrative that I'm aware of. Attitudes about opioids had already been changing for perhaps two decades. There were already moral panics against prescription opioids in 1997, to which Reason Magazine responded with this piece by Jacob Sullum. Pain patients in the 1990s were having trouble finding doctors who would treat them, and doctors who mercifully served chronic pain patients were facing criminal charges for prescribing "too many" opioids.

This is the great frustration I have with peddlers of the "opioid epidemic" narrative. They don't seem to have any kind of coherent timeline in mind. In fact, I once discussed this with someone who researches this stuff for a living, and we were trying to figure out which of several possible competing narratives they subscribe to. Are 1) normal pain patients just dropping dead from the normal use of their legitimate prescriptions? Are 2) normal patients turning into addicts, who then intentionally misuse their prescriptions? Are 3) normal patients not, in fact, dying at excessive rates from overdoses, but diversion of their pills to the black market is driving an epidemic of addiction and overdoses? Or 4) do proponents of the "opioid epidemic" narrative not even have a coherent enough story to distinguish between the various competing causal chains? (The list above is by no means exhaustive, and does not contain my preferred version of the story.)

These different stories make different predictions about the trendlines in the CDC overdose data and the SAMHSA drug use data, and they make specific predictions about how these data should overlap. If 1) is correct, you could see drug poisoning deaths from opioids increase without any evidence of increasing abuse or addiction rates, which is in fact what we see. (Unless we count the 1998 to 2002 tripling in "past month use" as a real trend, and I argued above that we shouldn't.) Story 2) requires seeing an increase in abuse rates somewhere along the timeline. Story 3) probably does, too, unless the story is that the population of users doesn't increase but they are all using more intensely.

The problem is that journalists and politicians who tell this story never bother to nail themselves down. It's not really clear what they are claiming, so it's hard to dispute their claims. They just vaguely know that opioid prescriptions increased and that opioid-related drug poisonings increased subsequently. For a book-length version of this story that's high on anecdote and low on data, read Sam Quinones' Dreamland. It wraps together a nice story, but without actually overlaying the trends in opioid prescriptions, addiction or abuse rates, and death rates, it fails to actually say anything. It's a nice case study in how not to do policy analysis, which would require scrupulously specifying the causal mechanisms and showing that they comport with the data. (Most of the action in Dreamland is from the 1990s, not from the early 2000s when opioid prescriptions tripled, and not from the 2010s when heroin overdoses started skyrocketing. Is the 1990s when the "stock" of addicts was rising? I wish someone would clarify.)

Here is the full time series, 1979 to present.


I have it color-coded to show the various changes to survey wording, explained above. It's only with hesitation that I share this, because I don't think the points from different years are measuring the same thing. But in the spirit of full transparency, here it is. If I were less honest, I might have only shared the piece that was missing from my previous posts, the 1979 to 1998 period. If I were a demagoguing drug warrior, I might emphasize the 1998 to 2003 transition as a "spike in opioid abuse" without disclosing the differences in surveys. What looks like a "tripling of opioid abuse rates" is really three intermediate data points (1999 to 2001) in between a low plateau and a high plateau.

Data is never as clean as we'd like. Even in a modern, developed nation with reasonably good institutions, national vital statistics are garbage. I'm left with the dueling feelings that we should either 1) double the meager resources spent collecting and analyzing national vital statistics or 2) get completely out of the business so that we stop sparking these unnecessary moral panics. My preference is for 2), given the ease with which a spurious trend turns into a set of very bad policy prescriptions. Option 1) could in principle be done right, with an appropriately alert political class, sufficiently diligent and self-critical journalists, and sufficiently aware voters. Unfortunately for option 1), those important qualifiers are missing in the real world.

I am shocked at how hard it was to find any source for this data all compiled in one place, and yet how easy it was to actually get to it and cobble it together. Anyone could have spent about 15 minutes text-searching the SAMHSA codebooks for "analgesics - past month" (and later "pain relievers - past month") to pick out the relevant figure for each year the survey was done. Those data are problematic for the reasons explained above, but it's baffling that nobody even tried. The closest I ever found was the figure of lifetime "nontherapeutic use of prescription drugs" from Lies, Damned Lies, and Drug War Statistics. What I've done in this post is hardly satisfactory. The raw underlying survey data is available online. (See this link, right side. Click the year you want, then click the link under "Datasets in this Study", and you'll see the survey data available in several formats.) There are a lot of columns (~2,000) to parse, the columns are poorly named, and the contents are written in code (like "1 = male, 2 = female" rather than stating the contents in plain English). But it's the kind of thing a grad student with a free summer could easily hack through. I'm surprised that nobody has thrown the resources into such a project. If they have, it's been very difficult to find. Feel free to correct me in the comments if you find a source where someone has done this.
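(For anyone who wants to replicate the 15-minute version, here's a rough sketch of that text search in Python. The folder and file names are hypothetical; you'd first download each year's codebook PDF from the SAMHSA archive.)

```python
# Rough sketch of the codebook text search. The "codebooks" folder is
# hypothetical; fill it with each year's Codebook.pdf from the SAMHSA archive.
from pathlib import Path
from pypdf import PdfReader

SEARCH_TERMS = ("ANALGESICS - PAST MONTH", "PAIN RELIEVERS - PAST MONTH")

for pdf_path in sorted(Path("codebooks").glob("*.pdf")):
    reader = PdfReader(str(pdf_path))
    for page_num, page in enumerate(reader.pages, start=1):
        text = (page.extract_text() or "").upper()
        for term in SEARCH_TERMS:
            if term in text:
                print(f"{pdf_path.name}: '{term}' on page {page_num}")
```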

__________________________________

Please allow me to totally geek out for a moment here. If someone wanted to take this data and convert "% of respondents" to "% of the population", it wouldn't be that hard. All you'd have to do is run a few regressions. The surveys contain various demographic variables, like age, gender, race, and marital status. The regression models would use these variables as predictors and "past month use" as the dependent variable. Each year's survey could have its own regression model, which characterizes the "past month use" rates for that year. Then one can simply create a synthetic data set that represents the demographic distribution for each year (say, "0.2% of the population is white, male, unmarried, and 16; 0.3% of the population is white, male, unmarried, and 17; ...") and get the regression's predicted drug use rates for each demographic, then weight them together for a total population use rate. Alternatively, if the goal is to remove the effect of changing demographics, you could use one year's distribution of demographics and apply each year's regression model to this data set. I keep saying "regression", but I'd be tempted to use a GBM or some other kind of tree-based model for this project. A process like this would make the survey data comparable across years. It should flatten out the 1979 to 1988 data points, or otherwise reveal a real trend if there is one. Either way, it would correct for sampling differences between years, some of which seem to be deliberate attempts to capture populations under-sampled in past surveys.
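(Here's a sketch of what that pipeline might look like in Python. The column names and data loading are made up, and the real NHSDA files would need heavy cleaning first, but the structure is the whole idea: one model per survey year, applied to a census-based synthetic population.)

```python
# Sketch of the reweighting scheme. Column names are hypothetical, and I'm
# assuming demographics are encoded consistently across the survey files and
# a census-based population frame (with a "pop_share" column summing to 1).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

DEMOGRAPHICS = ["age", "male", "race_code", "married"]   # hypothetical names

def fit_year_model(survey_df: pd.DataFrame) -> GradientBoostingClassifier:
    """One model per survey year: demographics -> past-month use (0/1)."""
    return GradientBoostingClassifier().fit(
        survey_df[DEMOGRAPHICS], survey_df["past_month_use"])

def population_rate(model, census_df: pd.DataFrame) -> float:
    """Predicted use for each demographic cell, weighted by population share."""
    p = model.predict_proba(census_df[DEMOGRAPHICS])[:, 1]
    return float((p * census_df["pop_share"]).sum())

# Usage, per survey year:
#   rate_1979 = population_rate(fit_year_model(survey_1979), census_1979)
# To strip out demographic change instead, hold the census frame fixed at one
# base year and apply each year's model to that same frame.
```

Either way, the point is to ask "what would this year's respondents imply about the whole U.S. population?" rather than taking the raw sample composition at face value.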

Friday, July 26, 2019

Recent Goodwill Story

Recently the local branch of Goodwill got some bad press. The story is here. I heard it on NPR, which is what my alarm clock plays when it wakes me up in the morning.

Whenever I dig into the details of a popular news story or the Outrage of the Week, I find that the dominant narrative is wrong in important ways. This one was no different. The story is being reported as: Goodwill decided to lay off all its disabled employees. See the very first sentence of the State Journal-Register story above.
A day after Land of Lincoln Goodwill Industries reversed a decision to lay off workers with disabilities because the state’s minimum wage is increasing, the organization’s president and CEO submitted her resignation.
The NPR story used similar language. My immediate reaction was, "WTF? There's no way that's correct." At work, I had just been through a corporate management training event, a day-long session on interviewing skills, which included a long discussion of which things are not legal to ask in a job interview or use as qualifying criteria for a job candidate. It is completely illegal to make hiring or firing decisions on the basis of someone's disability status. (There was even a video with an actor playing the clueless hiring manager asking an older lady "Are you disabled?", and the lady making an annoyed face.) You can state the physical requirements of the job and ask if the candidate can handle them, and presumably you can fire someone after their job performance makes clear that they can't handle a job. But the news story was making it sound like Goodwill identified all of its disabled employees, marked their personnel file with a big red "D", and announced it was going to fire them. That's not what happened.

Goodwill actually runs a special training program for the disabled, ex-convicts, and other people who have trouble finding meaningful employment. See their own description of their program here. Or read about Johnny at the bottom of this document. (These are Goodwill sources, so they may be biased, but if you find a well-argued piece that's critical of Goodwill, feel free to share it.) These are not traditional employment relationships, so they are exempt from the minimum wage.
Section 14(c) of the FLSA allows employers to pay wages below the federal minimum to employees who have disabilities that directly affect their job performance. Employers are able to do this through a special minimum wage certificate obtained from the U.S. Department of Labor’s Wage and Hour Division.
Some commentators try to argue that Section 14(c) is just a "loophole" that is cynically used by employers to exploit disabled workers. But this is wrong. The sad truth is, having a disability (depending on the disability) makes you generally less productive, and thus less valuable to an employer. Many of these people would not find employment at all if all employers had to pay them the minimum wage. Section 14(c) was explicitly built into the Fair Labor Standards Act because even advocates of the (then new) minimum wage realized it would throw the least productive members of society out of work. See this (generally critical) paper:
Section 14(c) of the FLSA included an important exception to the innovative minimum wage for people with disabilities that, at the time, did not alarm the legislature. It was based on definitions and classifications set forth in the National Industrial Recovery Act (NIRA) of 1933. Under NIRA, President Roosevelt defined a person with a disability as one "whose earning capacity is limited because of age, physical or mental handicap, or other infirmity." Section 14(c) stated:
The Administrator, to the extent necessary in order to prevent curtailment of opportunities for employment, shall by regulations or by orders provide for ... (2) the employment of individuals whose earning capacity is impaired by age or physical or mental deficiency or injury, under special certificates issued by the Administrator, at such wages lower than the minimum wage.
Citations omitted.

So even Roosevelt was conceding (quite explicitly) that certain conditions make workers less valuable, and he built in an escape hatch to spare them the disemployment effects of the minimum wage. Some critics of 14(c) seem to think that, if we did away with it, these employees would all keep their jobs and simply make more money. That's a pretty implausible assumption.

(By the way, I hate this usage of the term "loophole." Section 14(c) is a feature of a law that's doing exactly what it's supposed to be doing, not some clever hack that wasn't intended by the authors.)

Back to Goodwill. They are running a program where disabled people self-identify in order to get job training and some experience (and, plausibly, a sense of purpose in a life that would otherwise be spent in unemployment). If they have disabled employees who got their jobs the normal way, going through the usual application process, those people would not have been targeted for layoffs. It's not like Goodwill grabbed everyone in a wheelchair or on crutches and ushered them out the door; they are running a charity and decided to be slightly less charitable along one dimension. Legally, Goodwill has to be agnostic about their regular employees' disability status, even if it's something obvious.

In a statement, Goodwill had mentioned rising minimum wages as a factor in their (now reversed) decision to lay off employees under their 14(c) program. Some people were quick to criticize this rationale, because 14(c) explicitly allows them to pay less than the minimum wage. But Goodwill is right to be concerned about minimum wages, because there is a lot of political activism aimed at ending this exemption. In fact, the United States House just recently passed a bill that would 1) increase the minimum wage to $15/hour and 2) remove the exemptions available to some workers. (The Reason piece doesn't mention Section 14(c), but this Reuters piece makes clear that that's what the bill is targeting.) This likely won't pass the Senate, so it probably won't become law. But Goodwill is surely following these efforts and trying to get ahead of them. If they suddenly have to adopt all of their job trainees as full employees and pay them $15/hour, that's likely to be a massive financial hit, possibly a fatal one. I hope all charities are as scrupulous about managing their finances as Land of Lincoln Goodwill.

Some clueless commentators also pointed out that Illinois' minimum wage hasn't even started to increase yet. The $15/hour minimum will be phased in over the next several years, but the first increase hasn't hit yet. This criticism makes no sense. Businesses and (presumably) charitable organizations do long-term planning. They look ahead to manage their expenses. If they know that a minimum wage increase is coming and will soon increase their labor costs, they will start responding to it now with layoffs and other forms of cost curtailment.

(This is actually a major criticism of the minimum wage literature. Many studies find "no effect" on employment, but any effect is likely to be understated because employers are anticipating these kinds of changes, even before the law gets passed. They are likely to have already taken steps to mitigate the impact. It's not like they're in a binary state that's one way before the law passes and the other way after. They make probabilistic assumptions about what their future costs will look like.)

The public backlash is really unfortunate, and so are the efforts to end Section 14(c). Organizations running similar services for the marginally employed now have this hanging over their heads. They know that they can't walk back a program if it starts to become a political and financial liability. Anyone who is currently thinking about beginning or expanding such an operation is likely to have second thoughts about it now.
_____________________________

Maybe it's just Goodwill propaganda and I'm falling for it, but here is their story about Johnny:
But getting to Goodwill wasn’t easy. Johnny was born in the late 1970s with a rare trisomy chromosome imbalance, which limits his speech and cognitive abilities, in addition to other developmental and physical disabilities. When he was 10 years old, he was assaulted by an adult caregiver and became scared, withdrawn and rebellious. For many years, he wouldn’t go out in public or speak to people other than his father — the only person he trusted. But his father didn’t give up. In fact, he became a passionate advocate for his son.
Over the years, Butch left no stone unturned in seeking help for Johnny, and he even moved to Dallas, OR, where his son could live in a facility that he'd heard was "the best." But Johnny wasn't receptive to the help of the facility staff and wouldn't talk to anyone. When Butch first learned of the programs at Goodwill Industries of the Columbia Willamette (Portland, OR), he was hesitant, but gave them a try.
Johnny enrolled in the Goodwill’s Community Integration Project II, which provides employment and vocational training to people with multiple and/or severe disabilities under a special minimum wage certificate. The Goodwill’s staff recalls that when Johnny first entered the program, he was crying and shaking. But through training, he learned basic vocational skills and appropriate workplace behaviors. 
“Johnny has transformed from a frightened and profoundly insecure person into a confident and integrated young man,” says Michael Miller, the agency’s president and CEO. “His success today is a product of his father’s devotion, coupled with Goodwill’s intervention.”
Maybe Johnny is a cherry-picked example of the most sympathetic individual Goodwill could find, and I'm naively falling for their trick. But there are certainly people like Johnny who wouldn't be able to find meaningful employment at the current minimum wage (much less the absurdly irresponsible $15/hour some activists are peddling). I think about the panhandlers I see downtown on my lunch break. Some of them have obvious disabilities. It's hard to imagine any employer taking a risk on these people knowing they'd have to pay $15/hour. It's unlikely that most of these people could add that much productivity to the employer's bottom line.

I've said quite a lot in this post about worker productivity, the value of an employee to the employer. I hope no reader mistakes this for the value of the person. It's not a statement about a person's moral worth or the value they bring to their friends or family. My very young children are incredibly valuable to me, but of no value to any employer. In fact, they would probably impose negative returns on any employer trying to coax meaningful work out of them, given the amount of instruction, monitoring, and double-checking required to get a task done. Your value to an employer is not the same as your moral worth (however you might measure the second thing). Think what a non-sequitur it would be for a parent to drop off their teenage son to work at McDonald's and then get indignant that their child was "worth infinitely more than $7.25 an hour!" ("Um, perhaps you are confused about the nature of this transaction. I'm not trying to buy your son from you, ma'am. I can only afford to pay him what he adds to this store's revenues, at most. Which, unfortunately, is not that much.") This seems like an obvious point, but I've seen this mistake enough times that I wanted to preempt it. People need to drop the idea that your inherent moral worth as a human being imposes a duty on an employer to pay you some minimum amount for your labor. What an employer owes you for your labor depends entirely on what you add to the employer's bottom line. It's a morally neutral concept. By moralizing this concept, some misguided activists are saddling us with bad policy and casting marginally employable job-seekers out of work. To expect our value judgments to be fully reflected in market prices is just crass materialism.

Wednesday, July 17, 2019

Sneers About "The Koch Brothers" and "Koch Money"

It’s disheartening that name-calling is sometimes accepted as a serious argument in modern political discourse. There are plenty of examples of this behavior, but I have in mind the sneer that some commentator or some piece of scholarship is “Koch funded.” Sometimes it is sufficient to merely insinuate that there is a tenuous connection to the Koch brothers. For example, a scholar who once published something with Cato but is now working for some other outlet, perhaps even speaking his own mind, not on behalf of any institution, can be permanently slapped with the “Koch money” sneer. Of course, this sneer isn't specific to the Koch family; there are plenty of morons droning on about "Soros money" instead of engaging meaningfully with the arguments.

This behavior is so infantile I'm tempted to simply ignore it, just as I would ignore a tantrum-throwing child. But it happens often enough that it's worth a response. I recently saw Michael Cannon of Cato on a C-SPAN event, discussing health policy. A viewer called in just to say that Cannon shouldn't be listened to because he's associated with Cato, and that the Cato voice shouldn't even get a hearing in our political discourse. Of course he babbled something about Koch money. The insinuation is always that these commentators are being paid by the Kochs to distribute their message, thus rendering them unreliable as sources of information. I want to explain just how utterly wrong this is.

I recently had my name on a paper published at Cato, something of which I am very proud. It was a short paper on the so-called opioid epidemic, basically explaining why the standard narrative is wrong and why the policy implications are pretty much the opposite of what some careless commentators have inferred. I have been writing about this since early 2016. I have numerous blog posts explaining why I'm skeptical of the standard story. I have done a deep dive on the CDC's mortality data, and on the pages of this blog I have posted some novel (novel as far as I can tell) pieces of analysis on that data. I've been happily giving it away for free. I began an e-mail penpalship with the lead author of my Cato paper in early 2016. He asked me two years later to help write a paper with him. I jumped at the chance, not at all because I was expecting to earn some kind of royalty for having written a paper. (I wasn't expecting any such compensation, and anyway didn't receive anything and didn't dream of asking.) That never entered my mind. I got a chance to work with one of my personal heroes and earned a tiny bit of name recognition in libertarian circles.

Here is what didn’t happen. I did not get an e-mail from the Koch Brothers saying, “We need a paper defending proposition X. We will compensate you for writing said paper, as long as it toes the line.” I did not get any e-mails from Cato’s donors dictating the content of the paper or any other such interference. My guess is that this almost never happens. Most academics and commentators in the think tank space come to their interests and policy positions long before they ever find steady employment doing it. Alex Nowrasteh didn’t suddenly become pro-immigration because the Koch Brothers paid him off. Jeff Miron and Jeff Singer didn’t become anti drug prohibition because Cato cut them a check. Michael Cannon didn’t become an advocate for free-market health policies because he was bought out. These people came to their interests and policy positions and ideologies first. Of course these people are going to end up working for something like The Cato Institute. The best and brightest minds, the people with the deepest dedication to libertarian principles and the sincerest interest in policy wonkery, are going to pair up with institutions with the resources and connections that allow them to do the best work. It is simply not the case that Cato picks bland vanilla academics and pays them off to write policy papers.  The notion that these people are somehow tainted by their connection to funding is silly.

Suppose that someone’s work really is compromised by its underlying funding. I’m not saying this never happens. For example, studies published by pharmaceutical companies have a clear bias in favor of those companies’ medicines. (There is a long exposition on this topic in Medical Nihilism by Jacob Stegenga, an excellent book btw.) It’s not crazy on its face that this could happen elsewhere. I recall Michael Chertoff defending the use of body scanners on Fox News. It’s conceivable that he’s just a very principled defender of national security, but the fact that his lobbying firm represents the manufacturers of those scanners represents a clear conflict of interest. Even well-meaning people can self-deceive with a bias in favor of their own financial interests. You know what you can do about this problem? You can check their work. You see, Cato doesn’t just put out a paper outlining its conclusions and say, “We had some smart people look at some data and do some analysis, so take our word for it. This is the answer! We're the experts!” No. They publish policy whitepapers that outline and explain their arguments, provide citations defending their various claims, and generally attempt to lead a neutral outsider to the conclusions. If you know how to read and aren’t paralyzed with intellectual laziness, you can read, understand, and critique their arguments. You can point out that “this citation is irrelevant” or “this data is incorrect, and anyway doesn’t distinguish the Cato conclusion from the main alternatives” or “this argument is a non-sequitur.” Forget “follow the money”. Try “follow the argument.”

Let's take this one concession further. Suppose you really do identify someone whose work was definitely compromised by their funding source. Maybe an e-mail gets leaked that exposes the funders putting pressure on a scholar to make a misleading argument, and the scholar caved and changed his paper because of it. Does this permanently impugn the scholar? Or the institution? I say "No." It's usually considered a logical fallacy to impugn an argument because of its source. It's called an ad hominem, and anyone who has spent five minutes reading internet message boards and comments sections knows you're not supposed to do it. Besides, "check their work" still applies here. You can uncover the bad argument just by reading the paper. Someone with a truly atrocious record of untruthfulness might reasonably be written off. But if public discourse has any kind of future, we're going to want to avoid situations where we permanently write off sources of contrary information or refuse to listen to someone's argument. If a single dime of inappropriate funding is thought to taint someone's scholarship or integrity, that locks us into an impasse where we all just ignore each other's arguments and nobody ever changes their mind. If you're a skeptical-but-progressive-leaning voter or policy wonk seeking contrary information on, say, how we should run our public schools, you're going to find the highest-quality evidence at some libertarian or conservative think tank. That's naturally where the most convincing counter-arguments are being crafted and published. If you reflexively count them all out because they have a deep-pocketed donor, you're going to lead a dull intellectual existence.

Highly qualified scholars are expensive. Cato’s scholars tend to be doctors, lawyers, and economists, who can all make a lot more money working in the private sector than they can earn in the policy analysis space. (So writes this accredited actuary.) Cato doesn’t have the money to just buy these people up and keep them on staff as full-time employees. These scholars do the work because they love it and they feel like they’re fighting for a good cause. That’s how they get the Director of Undergraduate Studies at Harvard and a practicing surgeon from Arizona to do scholarship for them. The notion that they’d be able to buy these people’s integrity and compel them to make bad arguments is pretty absurd. If these individuals devoted their time and energy to professional pursuits rather than distracting themselves with Cato projects, they'd be able to make a lot more money.

Maybe this post was a waste of time. People who flippantly make ad hominem arguments generally aren't reachable. Or maybe not. I wanted to explain how a piece of "Koch funded" research feels from the inside. The nefarious influence of money just isn't there.