Sunday, July 28, 2019

Prescription Opioid Abuse Trends in the 1980s and 1990s

Most discussions of the so-called "opioid epidemic" constrain themselves to the period from 1999 to the present. This is mainly due to data limitations. 1999 is the first year that the CDC started coding deaths according to the ICD-10 system. The years 1979 to 1998 were coded under ICD-9 and are not directly comparable, so most tabulations start in 1999. And the SAMHSA reports, like this one, present time series of opioid abuse rates starting in 2002. I've shared this chart from the SAMHSA report in a few previous posts:

Plainly prescription opioid abuse is flat then declining over this period. (The SAMHSA survey refers to opioids with the dull term "pain relievers".) And, looking at the CDC data, it's clear that drug poisoning deaths are increasing over this period. But it makes little sense to restrict ourselves to this time frame. Attitudes about opioids and prescription practices supposedly started changing in the early 1980s, with doctors becoming more relaxed and more willing to prescribe. A very short letter published in the New England Journal of Medicine in 1980 supposedly kicked off the change in attitudes toward opioids. By starting in 2002 we would miss two decades of the relevant trend-line. There is another survey that tracks opioid use, Monitoring the Future, that goes back to the mid-1970s. But it only tracks 12th graders, who won't necessarily be representative of the older patients, people with chronic pain, limited mobility, and various infirmities, who are most likely to be affected by loose prescribing practices. Here it is, anyway:


Plausibly, this tells a story of opioid abuse rising in the 90s, from ~3% to ~9%. But then, in the 2000 to 2010 period, when opioid prescriptions tripled, the abuse rates are roughly flat, even declining. And abuse rates are trending down in the 1980s. One has to cherry pick carefully to make it look like prescription practices are correlated with opioid abuse rates. Also problematic: "Past 12 month use" patterns might be very different from, say, addiction rates or more frequent drug use rates. It could be that infrequent, casual drug use is increasing but not really causing a problem.

The easily accessible data, for the early 2000s to present, seems to debunk the standard narrative of the "opioid epidemic." My most popular post ever is a long take-down of this narrative. Also see this Cato paper, which I had the honor of helping to write. Opioid abuse rates are simply not correlated with opioid prescription rates. In fact, when restrictions on prescription opioids started to bite from 2010 onward, heroin overdoses started to skyrocket, followed promptly by fentanyl and other super-opioids. Some proponents of the standard narrative respond to this by speaking of "stocks" and "flows" of addicts. In this story, the 1980s and 1990s left us with a stock of opioid addicts. The increase in prescriptions in the early 2000s didn't much change the rate of opioid abuse, because we were already saturated. (I'm guessing here; I can't find an instance of someone coherently articulating this story.) Then, opioid restrictions starting around 2010 drove the existing stock of addicts to heroin and illicit fentanyl. Proponents of the standard narrative can still claim that, if it hadn't been for the relaxation in prescription standards starting some time in the 80s and 90s, we wouldn't have the current crisis. But this depends on speculating about what was happening during this time frame. Data is almost never forthcoming, so I've had to check for myself.

My previous attempt to reach further into the past is written up here. I excerpted a chart from Lies, Damned Lies, and Drug War Statistics that shows lifetime use for "nontherapeutic use of prescription drugs." Even here, the timeline only goes back to 1990, and "lifetime" use is problematic for the reasons described in the paragraph above. Also, it includes drugs other than prescription opioids (mostly benzodiazepines, barbiturates, and ADHD medications).

Then I found these files. It's the National Household Survey on Drug Abuse going back to 1979. One can open each year's study (on the right side of the page), click on the link under "Datasets in this Study", then click on the Codebook.pdf link under Dataset Documentation, and there is a useful summary of that year's findings. Here's what 1979 to 1998 looks like:

It's hard to say there's an obvious trend-line. I mean, clearly the 1998 point is higher than the point for 1979. Crudely running a regression yields a very slightly upward sloping trend-line (though the time trend is not statistically significant by the traditional p < 0.05 standard; I'm getting p = 0.60). But it just looks fishy. Note that the early surveys are only done once every three years, then start being done annually in 1990. Did opioid use really triple from 1979 to 1985, then plummet in 1988? Something about this screams "bad methodology", or at least "changing/non-comparable methodology." It seems like SAMHSA was just getting its shit together in the early days, and these data represent some kind of reporting bias rather than real trends.
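For concreteness, here is a minimal sketch of the crude regression described above: an OLS time trend fit to the yearly "past month use" figures. The numbers below are illustrative placeholders, not the actual survey values.

```python
# Crude OLS time trend on the yearly "past month use" rates.
# The rates below are made-up placeholders standing in for the NHSDA figures.
import numpy as np
import statsmodels.api as sm

years = np.array([1979, 1982, 1985, 1988, 1990, 1991, 1992, 1993,
                  1994, 1995, 1996, 1997, 1998])
pct_past_month = np.array([0.9, 1.5, 2.6, 0.9, 1.2, 1.1, 1.0, 1.1,
                           1.0, 0.9, 1.0, 1.1, 1.2])  # % of respondents

X = sm.add_constant(years - years.min())   # intercept plus a linear time trend
fit = sm.OLS(pct_past_month, X).fit()
print(fit.params)    # slope: percentage points of change per year
print(fit.pvalues)   # p-value on the time trend
```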

Here is what happens in the 2002 to present era:

The trend in this chart matches the chart pulled from the SAMHSA report at the top of this post. "But, this chart says 3% of people used prescription opioids in the past month, at least for the flat part from 2002 to 2010. The chart at top says it's hovering around 2% for 12+ year olds. What's the difference?" This point initially confused me. The chart immediately above is "% of respondents." I think the SAMHSA report is taking this raw survey data and restating it to reflect the distribution of the American population. So if the survey over-samples young people (who use drugs at higher rates), the "% of respondents" will be high compared to actual rates of drug use in the population. I assume some smart people at SAMHSA thought of this and restated "% of respondents" to reflect "% of U.S. population." There must be significant differences in year-to-year sampling, because the downward trend from 2010 to 2014 is more intense than in the top chart. Here's a telling excerpt from the 1988 code book:
In 1979, respondents from rural areas were oversampled, and in 1985 and 1988, blacks and Hispanics were oversampled to increase the reliability of estimates of drug use of these important groups.
In this light, the three points from 1979, 1982, and 1985 make a lot more sense. Clearly the SAMHSA people think these populations differ in their rates of drug use and are changing their sample to collect the right data. But this makes "% of respondents" non-comparable from one year to the next. If someone has taken the 1979 to 2001 surveys and converted "% of respondents" to "% U.S. population", I haven't found it. (Maybe this is a useful project for some grad student. See the bottom of this post for ideas to explore this further.)
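To see why this matters, here is a toy illustration with invented numbers: when a high-use group is oversampled, "% of respondents" overstates "% of the U.S. population," and weighting by population shares corrects it.

```python
# Invented numbers: the survey samples young people at twice their population share.
sample_share = {"12-25": 0.50, "26+": 0.50}   # share of survey respondents
pop_share    = {"12-25": 0.25, "26+": 0.75}   # share of the U.S. population (illustrative)
use_rate     = {"12-25": 0.05, "26+": 0.01}   # past-month use rate by group (illustrative)

pct_of_respondents = sum(sample_share[g] * use_rate[g] for g in use_rate)  # 3.0%
pct_of_population  = sum(pop_share[g] * use_rate[g] for g in use_rate)     # 2.0%
print(pct_of_respondents, pct_of_population)
```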

Notice another feature of this data, one which I've discussed previously: the survey changed from asking about "non-medical use of prescription opioids" to asking about "misuse" in 2015. (I have the change marked on the graph, with "misuse" years marked in blue.) I don't know why they did this. "Non-medical use" means basically recreational use. "Misuse" includes recreational use and medical use not intended by the physician. For example, someone takes more pills than recommended to treat their acute pain, because the recommended dose isn't cutting it. Or someone has left-over pills from a previous surgery and uses them for a sprained ankle. "Misuse" is a more inclusive measure than "non-medical use". It's interesting to note that the trend continues to fall after 2015 even though it's using a more inclusive definition.

I want to be fully transparent here and show you the full time series. I had initially just pulled the data up to 1998 and thought I had a good post worth sharing. But something interesting happens in 1999. The 1979 to 1998 surveys asked about prescription opioid abuse using the somewhat obscure term "analgesics," while making clear that they were not asking about Tylenol or ibuprofen. This didn't completely leave survey respondents in the dark if they didn't know that word; the surveys also asked specifically about a list of opioids (Demerol, Dilaudid, hydrocodone...). In contrast, the 1999 to present surveys ask about "pain relievers". If I took the numbers literally, prescription opioid abuse was ~1% in 1998, doubled to ~2% in 1999, and hit 3% by 2003, then flattened out for a decade. The sudden jump to 2%, after hovering right around 1% for the prior decade or two, is almost surely an effect of the survey wording changing. I don't know exactly why it would have gone to 2% for a couple years before jumping up to 3%, rather than just jumping straight to 3% in one shot. I would think a change in survey language would cause a one-time jump. It's possible that use rates really are increasing during this period. Also, once again, the sample population may be changing, such that "% of responses" doesn't mean the same thing as "% of the U.S. population." So it's hard to say what's really happening. (Note that the figure from Lies, Damned Lies, and Drug War Statistics, which I shared here, also brackets off the 1998 to 2002 period, as if to point out to the reader there is something peculiar with those years.)



I think it's implausible that opioid abuse actually tripled in those few years, then flattened out. This doesn't match any version of the opioid abuse narrative that I'm aware of. Attitudes about opioids had already been changing for perhaps two decades. There were already moral panics against prescription opioids in 1997, to which Reason Magazine responded with this piece by Jacob Sullum. Pain patients in the 1990s were having trouble finding doctors who would treat them, and doctors who mercifully served chronic pain patients were facing criminal charges for prescribing "too many" opioids.

This is the great frustration I have with peddlers of the "opioid epidemic" narrative. They don't seem to have any kind of coherent timeline in mind. In fact, I once discussed this with someone who researches this stuff for a living, and we were trying to figure out which of several possible competing narratives they subscribe to. Are 1) normal pain patients just dropping dead from the normal use of their legitimate prescriptions? Are 2) normal patients turning into addicts, who then intentionally misuse their prescriptions? Are 3) normal patients not, in fact, dying at excessive rates from overdoses, but diversion of their pills to the black market driving an epidemic of addiction and overdoses? Or 4) do proponents of the "opioid epidemic" narrative not even have a coherent enough story to distinguish between the various competing causal chains? (The list above is by no means exhaustive, and does not contain my preferred version of the story.) These different stories make different predictions about the trendlines in the CDC overdose data and the SAMHSA drug use data, and they make specific predictions about how these data should overlap. If 1) is correct, you could see drug poisoning deaths from opioids increase without any evidence of increasing abuse or addiction rates, which is in fact what we see. (Unless we count the 1998 to 2002 tripling in "past month use" as a real trend, and I argued above that we shouldn't.) Story 2) requires seeing an increase in abuse rates somewhere along the timeline. Story 3) probably does, too, unless the story is that the population of users doesn't increase but they are all using more intensely. The problem is that journalists and politicians who tell this story never bother to nail themselves down. It's not really clear what they are claiming, so it's hard to dispute their claims. They just vaguely know that opioid prescriptions increased and that opioid-related drug poisonings increased subsequently. For a book-length version of this story that's high on anecdote and low on data, read Sam Quinones' Dreamland. It wraps together a nice story, but without actually overlaying the trends in opioid prescriptions, addiction or abuse rates, and death rates, it fails to actually say anything. It's a nice case study in how to not do policy analysis. That would require scrupulously specifying the causal mechanisms and showing that these comport with the data. (Most of the action in Dreamland is from the 1990s, not from the early 2000s when opioid prescriptions tripled, and not in the 2010s when heroin overdoses started skyrocketing. Is the 1990s when the "stock" of addicts was rising? I wish someone would clarify.)

Here is the full time series, 1979 to present.


I have it color coded to show the various changes to survey wording, explained above. It's only with hesitation that I share this, because I don't think the points from different years are measuring the same thing. But in the spirit of full transparency, here it is. If I were less honest, I might have only shared the piece that was missing from my previous posts, the 1979 to 1998 period. If I were a demagoguing drug warrior, I might emphasize the 1998 to 2003 transition as a "spike in opioid abuse" without disclosing the differences in surveys. What looks like a "tripling of opioid abuse rates" is really three intermediate data points (1999 to 2001) in between a low plateau and a high plateau. Data is never as clean as we'd like. Even in a modern, developed nation with reasonably good institutions, national vital statistics are garbage. I'm left with the dueling feelings that we should either 1) double the meager resources spent collecting and analyzing national vital statistics or 2) get completely out of the business so that we stop sparking these unnecessary moral panics. My preference is for 2), given the ease with which a spurious trend turns into a set of very bad policy prescriptions. Option 1) could in principle be done right, with an appropriately alert political class, sufficiently diligent and self-critical journalists, and sufficiently aware voters. Unfortunately for option 1), those important qualifiers are missing in the real world.

I am shocked at how hard it was to find any source for this data all compiled in one place, and yet how easy it was to actually get to it and cobble it together. Anyone could have spent about 15 minutes text searching the SAMHSA code books for "analgesics - past month" (and later "pain relievers - past month") to pick out the relevant figure for each year the survey was done. Those data are problematic for the reasons explained above, but it's baffling that nobody even tried. The closest I ever found was the figure of "lifetime nontherapeutic use of prescription drugs" from Lies, Damned Lies, and Drug War Statistics. What I've done in this post is hardly satisfactory. The raw underlying survey data is available online. (See this link, right side. Click the year you want, then click the link under "Datasets in this Study", and you'll see the survey data available in several formats.) There are a lot of columns (~2,000) to parse, the columns are poorly named, and the contents are written in code (like "1 = male, 2 = female" rather than stating the contents in plain English). But it's the kind of thing a grad student with a free summer could easily hack through. I'm surprised that nobody has thrown the resources into such a project. If they have, it's been very difficult to find. Feel free to correct me in the comments if you find a source where someone has done this.

__________________________________

Please allow me to totally geek out for a moment here. If someone wanted to take this data and convert "% of respondents" to "% of the population", it wouldn't be that hard. All you'd have to do is run a few regressions. The surveys contain various demographic variables, like age, gender, race, and marital status. The regression models would use these variables as predictors and "past month use" as the dependent variable. Each year's survey could have its own regression model, which characterizes the "past month use" rates for that year. Then one can simply create a synthetic data set that represents the demographic distribution for each year (say, "0.2% of the population is white, male, unmarried, 16; 0.3% of the population is white, male, unmarried, and 17, ...") and get the regression's predicted drug use rates for each demographic, then weight them together for a total population use rate. Alternatively, if the goal is to remove the effect of changing demographics, you could use one year's distribution of demographics and apply each year's regression model to this data set. I keep saying "regression", but I'd be tempted to use a GBM or some other kind of tree-based model for this project. A process like this would make the survey data comparable across years. It should flatten out the 1979 to 1988 data points, or otherwise reveal a real trend if there is one. Anyway, it would correct for sampling differences between years, some of which seem to be deliberate attempts to capture under-sampled populations of past surveys.
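Here's a rough sketch of what that pipeline might look like, assuming each year's public-use file can be loaded as a data frame with demographic columns and a past-month-use flag. The column names, the census frame, and the model choice (a GBM, as mentioned) are all hypothetical.

```python
# Sketch of the standardization idea: fit a per-year model of past-month use on
# demographics, then apply it to a synthetic "census" frame (one row per
# demographic cell plus its population share) and weight the predictions together.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

DEMOGRAPHICS = ["age", "sex", "race", "marital_status"]  # hypothetical column names

def fit_year_model(survey_df):
    """Fit one year's model of past-month use on demographics."""
    X = pd.get_dummies(survey_df[DEMOGRAPHICS], drop_first=True)
    model = GradientBoostingClassifier().fit(X, survey_df["past_month_use"])
    return model, X.columns

def population_rate(model, train_cols, census_df):
    """Weight the model's predicted use rates by each cell's share of the U.S. population."""
    X = pd.get_dummies(census_df[DEMOGRAPHICS], drop_first=True)
    X = X.reindex(columns=train_cols, fill_value=0)
    p = model.predict_proba(X)[:, 1]
    return (p * census_df["pop_share"]).sum()
```

Holding the census frame fixed across years would answer the "remove the effect of changing demographics" version of the question; swapping in each year's actual demographic distribution would answer the "% of U.S. population" version.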

Friday, July 26, 2019

Recent Goodwill Story

Recently the local branch of Goodwill got some bad press. The story is here. I heard it on NPR, which is what my alarm clock plays when it wakes me up in the morning.

Whenever I dig into the details of a popular news story or the Outrage of the Week, I find that the dominant narrative is wrong in important ways. This one was no different. The story is being reported as: Goodwill decided to lay off all its disabled employees. See the very first sentence of the State Journal Register story above.
A day after Land of Lincoln Goodwill Industries reversed a decision to lay off workers with disabilities because the state’s minimum wage is increasing, the organization’s president and CEO submitted her resignation.
The NPR story used similar language. My immediate reaction was, "WTF? There's no way that's correct." At work, I had just been through a corporate management training event, a day-long session on interviewing skills, which included a long discussion of which things are not legal to ask in a job interview or use as qualifying criteria for a job candidate. It is completely illegal to make hiring or firing decisions on the basis of someone's disability status. (There was even a video with an actor playing the clueless hiring manager asking an older lady "Are you disabled?", and the lady making an annoyed face.) You can state the physical requirements of the job and ask if the candidate can handle them, and presumably you can fire someone after their job performance makes clear that they can't handle a job. But the news story was making it sound like Goodwill identified all of its disabled employees, marked their personnel file with a big red "D", and announced it was going to fire them. That's not what happened.

Goodwill actually runs a special training program for the disabled, ex-convicts, and other people who have trouble finding meaningful employment. See their own description of their program here. Or read about Johnny at the bottom of this document. (These are Goodwill sources, so they may be biased, but if you find a well-argued piece that's critical of Goodwill, feel free to share it.) These are not traditional employment arrangements, so they are exempt from the minimum wage.
Section 14(c) of the FLSA allows employers to pay wages below the federal minimum to employees who have disabilities that directly affect their job performance. Employers are able to do this through a special minimum wage certificate obtained from the U.S. Department of Labor’s Wage and Hour Division.
Some commentators try to argue that Section 14(c) is just a "loophole" that is cynically used by employers to exploit disabled workers. But this is wrong. The sad truth is, having a disability (depending on the disability) makes you generally less productive, and thus less valuable to an employer. Many of these people would not find employment at all if all employers had to pay them the minimum wage. Section 14(c) was explicitly built into the Fair Labor Standards Act because even advocates of the (then new) minimum wage realized it would throw the least productive members of society out of work. See this (generally critical) paper:
Section 14(c) of the FLSA included an important exception to the innovative minimum wage for people with disabilities that, at the time, did not alarm the legislature. It was based on definitions and classifications set forth in the National Industrial Recovery Act (NIRA) of 1933. Under NIRA, President Roosevelt defined a person with a disability as one "whose earning capacity is limited because of age, physical or mental handicap, or other infirmity." Section 14(c) stated:
The Administrator, to the extent necessary in order to prevent curtailment of opportunities for employment, shall by regulations or by orders provide for ... (2) the employment of individuals whose earning capacity is impaired by age or physical or mental deficiency or injury, under special certificates issued by the Administrator, at such wages lower than the minimum wage ...
Citations omitted.

So even Roosevelt was conceding (quite explicitly) that certain conditions make workers less valuable and built in an escape hatch to spare them the disemployment effects of the minimum wage. I believe some critics of 14(c) think that if we did away with it, these employees would all keep their jobs and simply make more money. That's a pretty implausible assumption.

(By the way, I hate this usage of the term "loophole." Section 14(c) is a feature of a law that's doing exactly what it's supposed to be doing, not some clever hack that wasn't intended by the authors.)

Back to Goodwill. They are running a program where disabled people self-identify in order to get job training and some experience (and, plausibly, a sense of purpose in a life that would otherwise be spent in unemployment). If they have disabled employees who got their jobs the normal way, going through the usual application process, these people would not have been targeted for layoffs. It's not like Goodwill grabbed everyone in a wheelchair or on crutches and ushered them out the door; they are running a charity and decided to be slightly less charitable along one dimension. Legally, Goodwill has to be agnostic about their regular employees' disability status, even if it's something obvious.

In a statement, Goodwill had mentioned rising minimum wages as a factor in their (now reversed) decision to lay off employees under their 14(c) program. Some people were quick to criticize this rationale, because 14(c) explicitly allows them to pay less than the minimum wage. But Goodwill is right to be concerned about minimum wages, because there is a lot of political activism aimed at ending this exemption to the minimum wage. In fact, the United States House just recently passed a bill that would 1) increase the minimum wage to $15/hour and 2) remove the exemptions available to some workers. (The Reason piece doesn't mention Section 14(c), but this Reuters piece makes clear that that's what the bill is targeting.) This likely won't pass the Senate, so probably won't become law. But Goodwill is surely following these efforts and trying to get ahead of them. If they suddenly have to adopt all of their job trainees as full employees and pay them $15/hour, that's likely to be a massive financial hit, possibly a fatal one. I hope all charities are as scrupulous about managing their finances as Land of Lincoln Goodwill. Some clueless commentators also pointed out that Illinois' minimum wage hasn't even started to increase yet. The $15/hour minimum will be phased in over the next several years, but the first increase hasn't hit yet. This criticism makes no sense. Businesses and (presumably) charitable organizations do long-term planning. They look ahead to manage their expenses. If they know that a minimum wage increase is coming and will soon increase their labor costs, they will start responding to it now with layoffs and other forms of cost curtailment.

(This is actually a major criticism of the minimum wage literature. Many studies find "no effect" on employment, but any effect is likely to be understated because employers are anticipating these kinds of changes, even before the law gets passed. They are likely to have already taken steps to mitigate the impact. It's not like they're in a binary state that's one way before the law passes and the other way after. They make probabilistic assumptions about what their future costs will look like.)

The public backlash is really unfortunate, and so are the efforts to end Section 14(c). Organizations running similar services for the marginally employed now have this hanging over their heads. They know that they can't walk back a program if it starts to become a political and financial liability. Anyone who is currently thinking about beginning or expanding such an operation is likely to have second thoughts about it now.
_____________________________

Maybe it's just Goodwill propaganda and I'm falling for it, but here is their story about Johnny:
But getting to Goodwill wasn’t easy. Johnny was born in the late 1970s with a rare trisomy chromosome imbalance, which limits his speech and cognitive abilities, in addition to other developmental and physical disabilities. When he was 10 years old, he was assaulted by an adult caregiver and became scared, withdrawn and rebellious. For many years, he wouldn’t go out in public or speak to people other than his father — the only person he trusted. But his father didn’t give up. In fact, he became a passionate advocate for his son.
Over the years, Butch left no stone unturned in seeking help for Johnny, and he even moved to Dallas, OR, where his son could live in a facility that he’d heard was “the best.” But Johnny wasn’t receptive to the help of the facility staff and wouldn’t talk to anyone. When Butch first learned of the programs at Goodwill Industries of the Columbia Willamette (Portland, OR), he was hesitant, but gave them a try.
Johnny enrolled in the Goodwill’s Community Integration Project II, which provides employment and vocational training to people with multiple and/or severe disabilities under a special minimum wage certificate. The Goodwill’s staff recalls that when Johnny first entered the program, he was crying and shaking. But through training, he learned basic vocational skills and appropriate workplace behaviors. 
“Johnny has transformed from a frightened and profoundly insecure person into a confident and integrated young man,” says Michael Miller, the agency’s president and CEO. “His success today is a product of his father’s devotion, coupled with Goodwill’s intervention.”
Maybe Johnny is a cherry-picked example of the most sympathetic individual Goodwill could find, and I'm naively falling for their trick. But there are certainly people like Johnny who wouldn't be able to find meaningful employment at the current minimum wage (much less the absurdly irresponsible $15/hour some activists are peddling). I think about the panhandlers I see downtown on my lunch break. Some of them have obvious disabilities. It's hard to imagine any employer taking a risk on these people knowing they'd have to pay $15/hour. It's unlikely that most of these people could add that much productivity to the employer's bottom line.

I've said quite a lot in this post about worker productivity, the value of an employee to the employer. I hope no reader mistakes this for the value of the person. It's not a statement about a person's moral worth or the value they bring to their friends or family. My very young children are incredibly valuable to me, but of no value to any employer. In fact, they probably would impose negative returns on any employer trying to coax meaningful work out of them, given the amount of instruction, monitoring and double-checking required to get a task done. Your value to an employer is not the same as your moral worth (however you might measure the second thing). Think what a non-sequitur it would be for a parent to drop off their teenage son to work at McDonalds and then get indignant that their child was "Worth infinitely more than $7.25 an hour!" ("Um, perhaps you are confused about the nature of this transaction. I'm not trying to buy your son from you, ma'am. I can only afford to pay him what he adds to this store's revenues, at most. Which, unfortunately, is not that much.")  This seems like an obvious point, but I've seen this mistake enough times that I wanted to preempt it. People need to drop this idea that your inherent moral worth as a human being imposes a duty on an employer, the duty to pay you some minimum amount for your labor. No, that depends entirely on what you add to the employer's bottom line. It's a morally neutral concept. By moralizing this concept, some misguided activists are saddling us with bad policy and casting inherently employable job-seekers out of work. To expect our value judgments to be fully reflected in market prices is just crass materialism.

Wednesday, July 17, 2019

Sneers About “The Koch Brothers” and “Koch Money”

It’s disheartening that name-calling is sometimes accepted as a serious argument in modern political discourse. There are plenty of examples of this behavior, but I have in mind the sneer that some commentator or some piece of scholarship is “Koch funded.” Sometimes it is sufficient to merely insinuate that there is a tenuous connection to the Koch brothers. For example, a scholar who once published something with Cato but is now working for some other outlet, perhaps even speaking his own mind, not on behalf of any institution, can be permanently slapped with the “Koch money” sneer. Of course, this sneer isn't specific to the Koch family; there are plenty of morons droning on about "Soros money" instead of engaging meaningfully with the arguments.

This behavior is so infantile I’m tempted to just not react to it, just as I would ignore a tantrum-throwing child. But then again it happens often enough that it’s worth responding. I recently saw Michael Cannon at Cato on a C-SPAN event. He was discussing health policy. A viewer called in just to say that Cannon shouldn’t be listened to because he’s associated with Cato, and that the Cato voice shouldn’t even get a hearing in our political discourse. Of course he babbled something about Koch money. The insinuation is always that these commentators are being paid by the Kochs to distribute their message, thus rendering them unreliable as sources of information. I want to explain just how utterly wrong this is.

I recently had my name on a paper published at Cato, something for which I am very proud. It was a short paper on the so-called opioid epidemic, basically explaining why the standard narrative is wrong and the policy implications are pretty much the opposite of what some careless commentators have inferred. I have been writing about this since early 2016. I have numerous blog posts explaining why I’m skeptical of the standard story. I have done a deep dive on the CDC’s mortality data, and on the pages of this blog I have posted some novel (novel as far as I can tell) pieces of analysis on that data.  I’ve been happily giving it away for free. I began an e-mail penpalship with the lead author of my Cato paper in early 2016. He asked me two years later to help write a paper with him. I jumped at the chance. Not at all because I was expecting to earn some kind of royalty for having written a paper. (I wasn’t expecting any such compensation, and anyway didn’t receive anything and didn’t dream of asking.) That never entered my mind. I got a chance to work with one of my personal heroes and earned a tiny bit of name recognition in libertarian circles.

Here is what didn’t happen. I did not get an e-mail from the Koch Brothers saying, “We need a paper defending proposition X. We will compensate you for writing said paper, as long as it toes the line.” I did not get any e-mails from Cato’s donors dictating the content of the paper or any other such interference. My guess is that this almost never happens. Most academics and commentators in the think tank space come to their interests and policy positions long before they ever find steady employment doing it. Alex Nowrasteh didn’t suddenly become pro-immigration because the Koch Brothers paid him off. Jeff Miron and Jeff Singer didn’t become anti drug prohibition because Cato cut them a check. Michael Cannon didn’t become an advocate for free-market health policies because he was bought out. These people came to their interests and policy positions and ideologies first. Of course these people are going to end up working for something like The Cato Institute. The best and brightest minds, the people with the deepest dedication to libertarian principles and the sincerest interest in policy wonkery, are going to pair up with institutions with the resources and connections that allow them to do the best work. It is simply not the case that Cato picks bland vanilla academics and pays them off to write policy papers.  The notion that these people are somehow tainted by their connection to funding is silly.

Suppose that someone’s work really is compromised by its underlying funding. I’m not saying this never happens. For example, studies published by pharmaceutical companies have a clear bias in favor of those companies’ medicines. (There is a long exposition on this topic in Medical Nihilism by Jacob Stegenga, an excellent book btw.) It’s not crazy on its face that this could happen elsewhere. I recall Michael Chertoff defending the use of body scanners on Fox News. It’s conceivable that he’s just a very principled defender of national security, but the fact that his lobbying firm represents the manufacturers of those scanners represents a clear conflict of interest. Even well-meaning people can self-deceive with a bias in favor of their own financial interests. You know what you can do about this problem? You can check their work. You see, Cato doesn’t just put out a paper outlining its conclusions and say, “We had some smart people look at some data and do some analysis, so take our word for it. This is the answer! We're the experts!” No. They publish policy whitepapers that outline and explain their arguments, provide citations defending their various claims, and generally attempt to lead a neutral outsider to the conclusions. If you know how to read and aren’t paralyzed with intellectual laziness, you can read, understand, and critique their arguments. You can point out that “this citation is irrelevant” or “this data is incorrect, and anyway doesn’t distinguish the Cato conclusion from the main alternatives” or “this argument is a non-sequitur.” Forget “follow the money”. Try “follow the argument.”

Let’s take this one concession further. Suppose you really do identify someone whose work was definitely compromised by their funding source. Maybe an e-mail gets leaked that exposes the funders putting pressure on a scholar to make a misleading argument, and the scholar caved and changed his paper because of it. Does this permanently impugn the scholar? Or the institution? I say “No.” It’s usually considered a logical fallacy to impugn an argument because of its source. It’s called an ad hominem, and anyone who has spent five minutes reading internet message boards and comments sections knows you’re not supposed to do it. Besides, “check their work” still applies here. You can uncover the bad argument just by reading the paper. Someone with a truly atrocious record of untruthfulness might reasonably be written off. But if public discourse has any kind of future, we’re going to want to avoid situations where we permanently write off sources of contrary information or refuse to listen to someone’s argument. If a single dime of inappropriate funding is thought to taint someone's scholarship or integrity, I think that locks us into an impasse where we all just ignore each other's arguments and nobody ever changes their mind. If you're a skeptical-but-progressive-leaning voter or policy wonk seeking contrary information on, say, how we should run our public schools, you're going to find the highest quality evidence at some libertarian or conservative think-tank. That's naturally where the most convincing counter-arguments are being crafted and published. If you reflexively count them all out because they have a deep-pocketed donor, you're going to lead a dull intellectual existence.

Highly qualified scholars are expensive. Cato’s scholars tend to be doctors, lawyers, and economists, who can all make a lot more money working in the private sector than they can earn in the policy analysis space. (So writes this accredited actuary.) Cato doesn’t have the money to just buy these people up and keep them on staff as full-time employees. These scholars do the work because they love it and they feel like they’re fighting for a good cause. That’s how they get the Director of Undergraduate Studies at Harvard and a practicing surgeon from Arizona to do scholarship for them. The notion that they’d be able to buy these people’s integrity and compel them to make bad arguments is pretty absurd. If these individuals devoted their time and energy to professional pursuits rather than distracting themselves with Cato projects, they'd be able to make a lot more money.

Maybe this post was a waste of time. People who flippantly make ad hominem arguments generally aren't reachable. Or maybe not. I wanted to explain how a piece of "Koch funded" research feels from the inside. The nefarious influence of money just isn't there. 

Sunday, June 16, 2019

Integrated All-Cause Mortality

Don't pay for fancy actuarial tables anymore. Here it is for free:



Enjoy! I will try to keep this updated annually.

Monday, June 3, 2019

Beware the Man of No Theory

Scott Alexander has a great post from a few years ago titled Beware the Man of One Study. You should read the whole thing yourself, or listen to it on the SlateStarCodex podcast, which is basically a podcaster named Jeremiah reading Scott’s posts. (Did you know there was such a thing? Pretty sweet, right?)

Scott warns against putting too much faith in any single study. He even points out that you can’t trust any single meta-study and points to conflicting meta-studies on the minimum wage reaching opposing conclusions. In the academic minimum wage wars, one side can present a letter signed by 500 economists opposing a minimum wage increase, while the other can present a letter signed by 600 economists supporting it. There is simply no consensus about whether the minimum wage is good or bad on net. No single study, in fact no single body of work, proves definitively one way or the other. 

Scott then presents a funnel plot, which is evidence of publication bias. (I wrote about this topic wrt climate sensitivity.) Take a look at the figure in Scott's post; it references Doucouliagos and Stanley (2009), which I presume is a paper titled Publication Selection Bias in Minimum Wage Research? (with the question mark as part of the title). When I looked at this figure, I thought the scale on the x-axis was crazy. It spans from -20 to +10. It looks like there’s a point at +5, meaning a 10% increase in the minimum wage results in a 50% increase in employment. Of course that’s nuts, and its position low on the y-axis tells you it’s not a credible estimate. Neither are the very large negative values. The academic debate about the minimum wage isn’t about whether the elasticity is -5 or +1. It’s about whether the elasticity is closer to -0.1 or zero, or perhaps even very slightly positive. The scale of the x-axis obscures where the action is. Just eye-balling the figure won’t tell you whether the bias-corrected average is zero or -0.1 (an estimate preferred by Neumark and Wascher’s book Minimum Wages), because the scale of the x-axis doesn't allow your eye to make out differences that small.

 I don’t know where Scott got that figure, but here’s the same chart from a published version of the (presumably same) paper.



On this scale it’s clearer that the bias-corrected estimate should be close to zero. The paper actually gives some estimates of the bias-corrected elasticity and a discussion of their statistical model. See Table 3. Depending on the exact model specifications, this is telling us that a 10% increase in the minimum wage results in a 0.09% decrease in employment, or a 0.04% decrease, or a 0.06% decrease…



Let’s say I totally buy the “publication bias” story and want to use these as my bias-corrected estimates. Does anyone really think we can increase the minimum wage by 100% and it will only cause a 0.9% reduction in employment? In other words, doubling the minimum wage will not even cause 1% of the affected workers to lose their jobs? Will a 300% increase (raising the federal minimum to $29/hour) only cause a 3.6% decrease in employment? Does anyone think these are remotely sensible estimates for the size of the effect?
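To make the arithmetic explicit: the extrapolation in those questions is just the elasticity times the percentage wage change. A quick sketch using the Table 3 estimates quoted above (which estimate you plug in changes the exact answers, but not the order of magnitude):

```python
# Percent change in employment ≈ elasticity × percent change in the minimum wage.
# The elasticities are the bias-corrected estimates quoted above, expressed per 1% wage change.
def employment_change(elasticity, pct_wage_increase):
    return elasticity * pct_wage_increase  # result in percent

for eps in (-0.009, -0.004, -0.006):
    print(eps, [round(employment_change(eps, pct), 2) for pct in (10, 100, 300)])
```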

I'll augment Scott's warning about the "Man of One Study." Beware the Man of No Theory. Don’t trust anyone who says that their opinions are all “evidence based” or that they’ve crafted their worldview simply by looking at “the data.” There is no such thing as a theory-free interpretation of "the data." We always need some kind of grounding in common sense, mathematics, logical consistency, and various academic disciplines to inform our interpretation of the data presented to us. In this case, the common-sense notion that "When the price goes up, people buy less" should ground us in reality. (I hope it's very clear that I'm not accusing Scott of being a "Man of No Theory." As far as I can tell he doesn't fall for naive empiricism.)

If a minimum wage advocate said "We can increase the minimum wage from $7.25/hour to $29/hour and it will only cause a 4% reduction in employment," something has gone wrong. Even pro-minimum wage economists who advocate increasing the minimum tend to suggest modest increases and issue caveats about how an elasticity measured for a small increase doesn't necessarily apply to a large increase. Read what the pro-minimum wage economists actually say. They don't extrapolate the elasticity estimates very far beyond the (usually modest) minimum wage increases that they are calculated from. Arin Dube suggests half of the median wage as a reasonable target for the minimum wage. And his congressional testimony offers many suggestions for how businesses deal with minimum wage hikes without having to fire workers. He and economists like him are clearly grounded in the Econ 101 framework. They feel some need to explain why that story doesn't apply to the low-wage labor market, and the careful ones will often caution that it does apply when you raise the minimum high enough. In other words, they are grounded in theory. They're not coming from a place where any result makes as much sense as any other. They are not naively empirical.

(Read Dube's written testimony to congress. There is a discussion of how employers can adjust by hiring higher-skilled workers. There is discussion of how employees engage in more intense job search and hang on to jobs longer than they otherwise would, absent a minimum wage hike. It's like he's acknowledging that the Econ 101 story should be true, that it's a perfectly reasonable a priori assumption, and the apparent null effect on employment is a mystery that needs explaining. He doesn't pretend like the null result needs no explanation at all, as if we can just totally ditch economic theory and common-sense intuitions about how the labor market should work.)

Not all points on that funnel plot are created equal. It turns out that there is some very recent research, using a richer and more detailed dataset than anything that was previously available, studying a relatively large increase in the minimum wage, that shows significant disemployment effects. It yields an elasticity greater than 1 in magnitude, which would make it an outlier on the funnel plot above. But that same study also duplicates the "no significant effect" result when it restricts itself to the data available to other studies. The new study has access to everyone's actual wages (hours worked and total earnings) before and after the minimum wage hike. It doesn't have to rely on proxies for "minimum wage worker", like "restaurant workers" or "teenage workers", not all of whom are minimum wage workers. And it can detect changes in "hours worked", which is presumably more responsive than actual job losses. Maybe I'm ignoring Scott's advice and being a "man of one study" here (actually two separate papers by the same group of researchers on Seattle). But when a ground-breaking new study 1) uses a much richer dataset 2) finds a result that is more consistent with theory than previous work and 3) replicates results of prior work when it restricts its dataset to the detail available in previous studies, I tend to put a lot of faith in the new study. You need some theory to tell you which studies are more credible. I believe the following: "Having access to actual wages of individual workers gives me a better estimate of the effects on low-wage workers than using a crude proxy, like 'teenagers' or 'restaurant workers'." I could do some crude back-of-the-envelope calculation, assuming a known real effect of a minimum wage increase, and showing that the crude proxy for minimum wage workers yields a smaller, less statistically significant result. I'd be using a little bit of math and some statistics. I'd be using theory to inform my worldview.
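As a sketch of that back-of-the-envelope exercise: assume a known job-loss effect confined to true minimum-wage workers, then compare the effect measured on that group with the effect measured on a proxy group like "teenagers," most of whom don't earn the minimum wage. Every number here is invented for illustration.

```python
# Crude simulation: the true effect hits only minimum-wage workers, but measuring
# it on a noisy proxy group ("teenagers") dilutes the estimate.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
is_min_wage = rng.random(n) < 0.05              # 5% of workers earn the minimum (invented)
is_teen = is_min_wage | (rng.random(n) < 0.20)  # proxy group includes many non-minimum-wage workers

true_job_loss_rate = 0.05                       # 5% of minimum-wage jobs lost after the hike (invented)
loses_job = is_min_wage & (rng.random(n) < true_job_loss_rate)

print("job-loss rate among true minimum-wage workers:", loses_job[is_min_wage].mean())
print("job-loss rate among the teenage proxy group:  ", loses_job[is_teen].mean())
```

The effect measured on the proxy group shrinks roughly in proportion to the share of the proxy group actually earning the minimum wage, which makes it that much easier for the estimate to disappear into the noise.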

Scott says of the funnel plot,
The bell skews more to left than to the right, which means more studies have found negative effects of the minimum wage than positive effects of the minimum wage. But since the bell curve is asymmetrical, we interpret that as probably publication bias. So all in all, I think there’s at least some evidence that the liberals are right on this one.
Emphasis mine. I'm not sure if Scott is saying the liberals are right that there's a publication bias present (in which case, they are right), or if he's saying that they're right to ignore the effects of minimum wage on employment (in which case, they aren't right). If it's the latter, I'm going to push back on this hard and say, No, theory still matters.

"All these minimum wage studies show that raising the minimum wage has no effect on employment."
"All these minimum wage studies show that 'teenage workers' and 'restaurant workers' are a poor proxy for 'minimum wage workers'."
"All these minimum wage studies show that jurisdictions only raise the minimum wage when the local labor market is ready for a hike. Jurisdictions that are likely to have negative labor market impacts anticipate this and decline to raise the minimum wage."
"All these minimum wage studies show that businesses can effectively anticipate a minimum wage hike, given the time it takes to make it through the political process and the phase-in attached to most legislation."
"All these minimum wage studies show that reality is very messy, with many causal factors coming together at once. Real effects can be socially significant but still be numerically small, swamped by noise in our measurements. In this kind of world, it's hard to measure the effect of one thing on another thing."

All of these explanations are consistent with the new minimum wage research. You need theory, a worldview informed by some kind of prior belief, to decide which ones are most appropriate.

By the way, the Congressional Budget Office (CBO) got a larger estimate for elasticity when adjusting for publication bias:
 On the basis of that review, CBO selected a central estimate of that elasticity of -0.075; in other words, a 10 percent increase in the minimum wage would reduce employment among teenage workers by three quarters of one percent.

Second, CBO considered the role of publication bias in its analysis. Academic journals tend to publish studies whose reported effects can be distinguished from no effect with a sufficient degree of statistical precision. According to some analyses of the minimum-wage literature, an unexpectedly large number of studies report a negative effect on employment with a degree of precision just above conventional thresholds for publication. That would suggest that journals’ failure to publish studies finding weak effects of minimum-wage changes on employment may have led to a published literature skewed toward stronger effects. CBO therefore located its range of plausible elasticities slightly closer to zero—that is, indicating a weaker effect on employment—than it would have otherwise.

Monday, May 27, 2019

In Defense of Price Optimization In Insurance Pricing

Traditional Insurance Price Regulation

There is a long tradition of regulation in personal lines insurance. The state department of insurance, or DOI, has the authority to approve or disallow an insurer's rates (the extent depends on the statute), although regulators sometimes go well beyond their statutory authority in restricting how insurers can rate. I wrote about the process here and here. The insurer must file a "rate filing" with the DOI whenever they wish to change their rates, and all rate changes require some kind of actuarial justification. Insurers set the overall price, as in the amount they need to cover total expenses and claims and earn a (usually slim) profit margin, and also the relative price of their various customers, as in the rate differential between 16-year-olds and 40-year-olds or the difference between people who have recently had accidents and people who haven't.

Traditionally the justification is based on expected cost. Setting the overall rate level depends on how well historical premiums have covered claims and expenses. A calculation based on these numbers, along with some sensible adjustments (trending costs for inflation, adjusting historical premiums to the current rate level, etc.) will tell the insurer, for example, "We need to increase rates in Illinois by 3% this year." (This calculation that yields the indicated rate increase is often simply called "the indication.") The relative prices between insurance customers are usually determined by some kind of predictive model. "My generalized linear model tells me that 16-year-olds are 3 times as likely to have an accident as my 40-year-olds. Multiply the base premium by a factor of 3 to get the 16-year-old's rate. Repeat for all rating characteristics..." There is often controversy about the use of certain rating characteristics. Some people think it's unfair to use credit history in insurance pricing, even though from a purely predictive standpoint it is a highly significant predictor of future claims. Certainly it's not allowable to use race anywhere in rating, although there is some discussion of whether some rating variables are a proxy for race. Zip code correlates with race, for example, so some people argue that insurers are sneakily pricing for race without admitting they're doing so, using their territorial rates as a proxy. Some people make the same argument about credit: credit correlates with race, with some races having generally poorer credit scores compared to whites. I happen to think this is wrong; allowing credit-based and location-based pricing means you can identify good and bad risks regardless of their race. In other words, it allows insurers to, say, write a lot of business in predominantly black zip codes because credit history allows them to identify the good risks in those zip codes. They'll even write the bad risks in those zip codes if they can determine the right price for them, and credit history makes this a lot easier to do. Absent credit, the same insurer might avoid that zip code by not placing an agent there or not directing its marketing activity there. (States can regulate the pricing, but there's no way they can tell insurers "You must place an agent in this zip code, and you must direct a marketing campaign to this one." I don't think there's any version of this mandate that would pass constitutional muster.) All this controversy aside, most insurers have a relatively free hand in using credit history and location (zip code-based or otherwise) to set prices, so long as they can show that their rates correlate strongly with actual claims.
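To make the mechanics concrete, here is a minimal sketch of that multiplicative rating scheme: a base rate times a relativity for each rating characteristic. The characteristics and factors below are invented; in practice the relativities would come out of a GLM fit to historical claims.

```python
# Multiplicative rating: premium = base rate × product of relativities.
# All values are hypothetical illustrations, not anyone's actual rating plan.
BASE_RATE = 500.0  # annual premium for the base risk class

RELATIVITIES = {
    "age":       {"16": 3.00, "40": 1.00},
    "accidents": {"none": 1.00, "recent": 1.60},
    "territory": {"A": 0.90, "B": 1.20},
}

def premium(risk):
    rate = BASE_RATE
    for characteristic, level in risk.items():
        rate *= RELATIVITIES[characteristic][level]
    return rate

print(premium({"age": "16", "accidents": "none", "territory": "B"}))  # 500 * 3.0 * 1.0 * 1.2 = 1800.0
```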

Price Discrimination and Price Optimization

Enter price discrimination. Price discrimination is the practice of charging two otherwise identical people different prices based on their willingness to pay, their "elasticity of demand." That is, I'm going to offer a price lower than my base price to that guy, because otherwise he won't buy what I'm selling, even though I already get a lot of customers at the base price. Unfortunately, most people describe this practice using the converse and feel moral indignation at the thought of ever getting charged more than the lowest possible price. As in, "I'm going to charge you more, because you are willing to pay more for this service." Both framings are technically accurate, but the second is so fraught with emotional baggage that I prefer to avoid it. Price discrimination actually lowers overall costs for customers as a whole. If I can attract more customers by offering different prices, that means I have a bigger customer base over which to distribute my fixed costs. The overall price level is lower, even though in some particular cases some individuals might be paying more than they would in a world of flat prices. That person who is seething with indignation over being denied a discount is probably paying a lower price than he would if that discretionary discount didn't exist.
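A toy example of that fixed-cost point, with invented numbers: adding price-sensitive customers at a discount spreads the same fixed costs over a larger base, which lowers the break-even price per customer.

```python
# Invented numbers: the same fixed costs spread over more customers.
fixed_costs = 100_000.0
unit_cost = 50.0  # marginal cost of serving one customer

# Flat pricing: 1,000 customers buy at a single price.
flat_customers = 1_000
flat_break_even = unit_cost + fixed_costs / flat_customers             # $150 per customer

# Price discrimination: a discount attracts 500 additional price-sensitive customers.
total_customers = 1_000 + 500
discriminating_break_even = unit_cost + fixed_costs / total_customers  # ~$116.67 per customer

print(flat_break_even, discriminating_break_even)
```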

In property and casualty insurance (P&C, meaning home and auto in this context), price discrimination is a hot topic. It's called Price Optimization (although the two terms aren't exactly synonymous; more on that below). To many regulators it's a big no-no. Because pricing is so heavily based on traditional actuarial methods, the language of statutes and regulations typically references "loss costs" and expenses and "actuarially sound rates" (which implicitly means something that is cost-based and not demand-based). Many regulators are actuaries, and they are relying on the language of actuarial standards of practice for guidance. In this sense, actuarial accrediting societies (and I'm a member of one) can be regulators by proxy. If the standards of practice only ever reference "loss cost" and never mention propensity to buy ("elasticity of demand"), then regulators relying on these standards of practice will not allow for it. Indeed, under a strict reading, the relevant standards don't allow for price discrimination. When the CAS (Casualty Actuarial Society) tried to rewrite a standard of practice to allow for rating based on considerations other than loss costs, some prickly "watchdog" groups complained loudly. (The CAS was in a tough spot here. On the one hand, they didn't want to be in a position to say price optimization is contrary to actuarial principles, such that actuaries using it should be sanctioned for malpractice. On the other hand, they didn't want to be the ones to green-light price optimization everywhere. I suppose this is one of the hazards of being a guild; you are sometimes the de facto regulator of an industry and it falls on you to make difficult decisions.) States differ in how strict they are, but some states have statutes that explicitly forbid price optimization and others have regulators who assume it is implicitly forbidden by traditional standards of practice (perhaps interacting with existing regulation, which might reference "actuarially sound rates" or "unfairly discriminatory rates" from those standards).

By the way, I understand why people don't like price optimization. I hinted at this above: people hate the feeling that they aren't getting the lowest possible price for something. If I explained price optimization to the average insurance customer, I'm sure they'd balk. So regulators and "watchdog" groups are responding to the impulses of typical insurance customers. (Scare quotes around "watchdog" because many of these agencies act in ways that are contrary to the interests of consumers, as is the case here.) Don't get the impression that I'm some ideologically blinkered libertarian saying, "Gee, why would anyone ever want to regulate markets?" Or some antisocial economist saying, "Gee, why wouldn't consumers want a perfectly efficient market?" Or some morally compromised data scientist saying, "Gee, why don't consumers appreciate the beauty of my glorious pricing model?" I get it. My response is, What do consumers know anyway? Consumers balk at all kinds of commercial activity, even though economists can usually come up with "efficiency" justifications for those behaviors. In fact, economists often conclude that we'd be much worse off if those unpopular practices were outlawed. (We'd be far worse off if the government banned something every time a consumer felt indignant; read Defending the Undefendable by Walter Block for a long list of legitimate business practices that average people get indignant about.) If "efficiency" sounds bloodless, bear in mind that it usually means lower overall costs for consumers and more products available. Just so with price optimization.

Price Optimization On the Overall Rate Level

Let me start by defending the practice of measuring demand elasticity, which basically means the propensity of a customer to leave one insurer for another based on the magnitude of a price change. Suppose I'm the actuary in charge of rates in the state of Illinois. I do some actuarial calculations and determine that prices need to rise by 10%. I have some rating software that re-rates my book of business (meaning the full set of our insurance customers) at the new, higher rate level. I proudly report that this rate change will increase our Illinois revenue by 10%. Except this is wrong. It assumes that we retain 100% of our customers after the rate increase. I should know how much premium we're actually going to get if I increase rates by 10%, accounting for the propensity of policyholders to shop for insurance elsewhere. In fact, I have a duty to upper management, to the shareholders, and ultimately the customers (who are relying on the company remaining solvent) to accurately estimate the effect on revenues. At the very least, I should calculate elasticity so I can calculate the effect of a rate change on customer retention, which will give me a more accurate estimate of the effect on revenue. (I'm sort of using "premium" and "revenue" interchangeably here, btw, though insurers get revenue from sources other than their customers' premiums.) I don't want to say, "We're increasing premiums by 10%" when it's within my power to provide a better estimate. Suppose I say we're taking 10% but it's only 5% when factoring in retention effects. That's bad. It makes it harder for a company to plan for the long-term. Insurance companies need to be making these decisions with their eyes wide open, not making absurdly unrealistic assumptions, assumptions that can easily be relaxed with a moderately complex calculation. Actuaries are the guardians of capital at insurance companies. We're supposed to analyze risk and safeguard the billions of dollars of stockholder capital. We're supposed to ensure there is enough money held in reserve to pay policyholder claims for the indefinite future. We're supposed to do these kinds of estimates. And we're supposed to make them as accurate as is feasible.
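
To make that concrete, here's a minimal sketch of the retention-adjusted calculation. All of the numbers (the segments, premiums, and retention probabilities) are hypothetical; in practice the retention probabilities after a rate change would come from a fitted elasticity model.

```python
# Minimal sketch: estimate the revenue effect of a rate increase after accounting
# for retention. All numbers here are made up for illustration.

# Each segment: in-force annual premium, retention at current rates, and the
# (hypothetical, model-estimated) retention after a 10% rate increase.
segments = [
    {"premium": 1_000_000, "retention_now": 0.90, "retention_after": 0.82},
    {"premium": 2_500_000, "retention_now": 0.88, "retention_after": 0.85},
    {"premium": 1_500_000, "retention_now": 0.93, "retention_after": 0.91},
]

rate_change = 0.10  # the actuarially indicated rate increase

# Naive estimate: assumes 100% of customers renew at the higher rate.
naive_premium = sum(s["premium"] for s in segments) * (1 + rate_change)

# Retention-adjusted estimate: weight each segment's new premium by the
# probability that the customer actually sticks around to pay it.
expected_premium = sum(
    s["premium"] * (1 + rate_change) * s["retention_after"] for s in segments
)

# Do-nothing baseline: renewals at current rates with current retention.
baseline = sum(s["premium"] * s["retention_now"] for s in segments)

print(f"Naive estimate (no attrition):  {naive_premium:,.0f}")
print(f"Retention-adjusted estimate:    {expected_premium:,.0f}")
print(f"Change vs. do-nothing baseline: {expected_premium / baseline - 1:+.1%}")
```

With these made-up numbers, the "10% rate increase" turns out to be roughly a 5% increase in expected premium once attrition is counted, which is exactly the gap I'm describing.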

This is where it gets sticky. Suppose my retention calculation affects the company's decision about how much to increase rates. An executive who sees that a 10% rate increase only leads to a 5% premium increase might say, "Okay, let's only take an 8%. Show me what that looks like." Or it could go the other way. Maybe my Illinois customers are relatively inelastic, and I could take a 15% rate increase and get pretty close to the 15%. The executive might use this information to take a rate increase that's larger than what's actuarially justified. In practice this is usually limited by the historical data and the actuarial methods. Actuaries have to calculate an indicated rate increase (once again, "the indication"), and DOIs usually don't allow you to go above it. (There is some amount of play here; maybe I can make a few adjustments and turn a 10% into an 11%. But I can't make the indication go arbitrarily high, and even eking out more than a couple of percentage points is unlikely.) But they don't mind you going below the indication. This question of "How far below my indicated rate increase can I deviate?" is where price optimization comes in. Traditionally insurers use rules of thumb to make these kinds of decisions. The indicated rate increase, based entirely on actuarial calculations straightforwardly applied to historical premium, loss, and expense data, is often higher than what's actually reasonable. That executive might say, "Hmm, a 10% increase is too high and will cause a lot of disruption in our book. Let's take 5% instead." ("Disruption" meaning lots of customers non-renewing their insurance policies.) State DOIs are usually accepting of these hand-waving statements about "We're not taking the full indicated rate increase because we're worried about policyholder disruption." But they are very opposed to us doing an explicit calculation to optimize the rate increase. The "rule of thumb" and the explicit calculation are both forms of price optimization; it's just that the former is much cruder. I don't think DOIs should be in the position of saying, "You can do X, as long as you do it crudely and inaccurately. If you get more sophisticated about doing X, we'll punish you." That is essentially the line some DOIs have taken with respect to price optimization.
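
Here's a sketch of that follow-on question ("How far below the indication should we go?"), assuming a crude, hypothetical linear relationship between the size of the increase and retention. A real routine would use a fitted elasticity model and more constraints, but the basic shape of the calculation is the same.

```python
# Sketch: pick a rate change at or below the indication that maximizes expected
# renewal premium, using a crude (hypothetical) linear retention response.

base_premium = 5_000_000   # current in-force premium (made up)
base_retention = 0.90      # retention if rates are left alone
sensitivity = 0.9          # hypothetical: each 1% of rate increase costs 0.9% of retention
indication = 0.10          # the indicated increase; DOIs generally won't let you exceed it

best_change, best_premium = 0.0, 0.0
for pct in range(0, 11):   # consider 0% through the full 10% indication
    change = pct / 100
    retention = base_retention * (1 - sensitivity * change)
    expected = base_premium * (1 + change) * retention
    if expected > best_premium:
        best_change, best_premium = change, expected

print(f"Best increase at or below the indication: {best_change:.0%}")
print(f"Expected renewal premium at that increase: {best_premium:,.0f}")
```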

Price Optimization at the Individual Customer Level

The previous discussion is about the overall rate level. Do I increase rates by the traditionally-indicated 10% or the elasticity-indicated 5%? A more sophisticated version of price optimization involves adjusting the rate for individual policyholders based on their willingness to pay. This basically takes the overall rate effect as a given, but allocates the rate impact based on retention considerations. I can build a predictive model that tells me "This group of customers will leave if I give them a 2% rate increase, but this group of customers won't leave even if I give them a 5% increase. I'm going to allocate more of the rate impact to the less elastic group." (Of course, these are all probabilistic statements. The model doesn't say, "Joey will definitely leave if we increase his rates 10%", but rather something like "Joey's probability of retention will fall from 90% to 80% if I increase his rates by 10%." Optimization is done on the basis of expected values, not "Will he leave? Yes/No?") This practice inspires some unwarranted fears that insurers will identify inelastic groups of people and permanently charge them a high rate. "Hmm. It turns out that soccer moms and rural single men are very price inelastic. Let's just keep increasing their rates every year." That is wildly implausible. The personal lines insurance market is far too competitive for this to actually happen. There are dozens, often hundreds, of insurers in every market. If there are demographics that are systematically overcharged by the industry, someone will come along and specialize in marketing to that group and take all of the customers. (Contra ProPublica and their atrocious article about territorial pricing in auto insurance.)
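
A toy version of that expected-value framing, using the same hypothetical numbers as the parenthetical above:

```python
# Toy expected-value calculation for one policyholder ("Joey"), using the
# hypothetical retention probabilities from the example above.

premium = 1_000                 # Joey's current annual premium (made up)
retention_at_current = 0.90     # model's renewal probability with no rate change
retention_after_10pct = 0.80    # model's renewal probability after a 10% increase

expected_no_change = premium * retention_at_current          # 900
expected_with_hike = premium * 1.10 * retention_after_10pct  # 880

print(expected_no_change, expected_with_hike)
```

With these made-up numbers the 10% hike actually loses expected premium. That's the kind of trade-off the optimization routine is weighing across the whole book of business, not a yes/no prediction about any one customer.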

Here's a more likely scenario for how price optimization would be used at the individual or demographic grouping level. I spelled it out in an earlier post, but I'll repeat the points here. Suppose I build a new predictive model that tells me the price differentials between my various customers. My 16-year-old rate came down from a 200% surcharge to a 150% surcharge. The differential between the worst and best credit individuals used to be a factor of 2, but now it's a factor of 2.5. With dozens or even hundreds of rating variables changing in terms of their indicated surcharge/discount, each individual customer is likely to get something different from the overall rate impact. Maybe the overall rate effect is neutral, 0%, but almost nobody actually gets exactly 0%. If you build a histogram of customer rate impacts, you'd get something normally distributed around 0%, with a few customers getting large premium increases and a few getting large decreases. Well, just like I have a predictive model that tells me the expected costs for each individual insurance customer, I have a model that tells me each customer's elasticity of demand. I can then adjust my surcharges and discounts to optimize something (something other than the error function of my "expected claim costs" model). I can optimize, say, "growth in policy count", or "overall profit", subject to various constraints. (This is why price optimization is not exactly the same thing as price discrimination. Price discrimination simply refers to charging different prices based on willingness to pay. The term "price optimization" in insurance refers to a broad suite of optimization routines, and demand elasticity is simply one of many inputs.) Given a long enough timeline, insurers will ultimately approach their indicated rate differentials. Price optimization simply smooths the path so as to minimize the number of customers who are lost along the way. If my indicated rate for 16-year-olds drops from a 200% to a 150% surcharge, my price optimization routine might say to make this change over the course of three or four years, rather than doing it in one jump. If my surcharge for prior claims jumps from 30% to 50%, my price optimization routine might effectively say, "You're fine to do that in one jump." And it might be because those customers aren't price sensitive and won't leave, or it might be because they are price sensitive but they're also high-cost and we don't want their business anyway. Some other insurer has the right price for them, but maybe we don't. That's a win-win. It seems unlikely that such an optimization routine would in effect say, "You can permanently overcharge married family households with a single teenage driver by 50% over the model predicted premium, because they are just that price inelastic."
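
Here's a minimal sketch of just the "smoothing" piece, phasing a factor change in under a hypothetical per-year cap. A real price optimization routine would set that cap per customer group based on the elasticity model and whatever it's optimizing (growth, profit, retention).

```python
# Sketch: phase a big indicated factor change in over a few renewals instead of
# one jump. The cap and factors are hypothetical.

current_factor = 3.00     # today's 16-year-old factor (a 200% surcharge)
indicated_factor = 2.50   # the new model's indication (a 150% surcharge)
annual_cap = 0.07         # move no factor by more than 7% in any one year

factor = current_factor
year = 0
while abs(factor - indicated_factor) > 1e-9:
    year += 1
    # Step toward the indication, clamped to the annual cap in either direction.
    step = max(-annual_cap, min(annual_cap, indicated_factor / factor - 1))
    factor *= 1 + step
    print(f"Year {year}: factor = {factor:.4f}")
```

That gets you to the indicated factor over three renewal cycles instead of one jump, which is the "three or four years" glide path described above.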

Once again, DOIs have traditionally accepted these practices of deviating from the indicated rates based on concerns about disruption.
Regulator: Why is your 16-year-old factor 3.0 when your model says it should be 3.5?
Insurer: We are moving in the direction of 3.5 with this filing, but due to policyholder disruption considerations we are worried about moving the factor all the way in a single filing.
Regulator: Okay, that makes sense. (Stamps "Approved" on the filing.)
That is, they allow us to do so as long as we're using crude rules of thumb and not doing an explicit price optimization calculation. Why should we be confined to the cruder version of this calculation? If more sophistication is available, why not allow it?

Another crude method of price optimization is rate capping. No single policy's premium will increase by more than, say, 15% in any one year. Clearly this is an attempt to mitigate customer disruption. If I just charged everyone the rates indicated by my new predictive model in a "Let 'er rip" fashion, the customers getting big premium increases would leave. Rate capping smooths the transition to higher rates. Again, price optimization is simply a more sophisticated method of doing something that is already a widely accepted practice.
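
Rate capping is simple enough to show in a few lines; the premiums here are hypothetical.

```python
# Sketch of rate capping: no policy's premium increases by more than 15% in a
# single year, regardless of what the new model indicates. Premiums are made up.

cap = 0.15

policies = [
    {"current": 800,  "indicated": 1050},  # model wants +31%; capped to +15%
    {"current": 1200, "indicated": 1260},  # +5%; within the cap, taken in full
    {"current": 950,  "indicated": 760},   # a decrease; unaffected by the cap
]

for p in policies:
    change = p["indicated"] / p["current"] - 1
    renewal = p["current"] * (1 + min(change, cap))
    print(f"{p['current']:>5} -> {renewal:,.0f}  (uncapped change {change:+.0%})")
```

The customer who was due for a 31% increase gets there over a couple of renewals instead of all at once.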

I should point out here that price discrimination is common in every other industry. Airlines use price discrimination to set ticket prices. They might charge one customer a higher price than another on the same flight because their predictive algorithm says that the first customer is willing to pay more. And, more obviously, ticket prices generally get higher closer to the date of the flight. (Does it go in the opposite direction for flights that don't get filled? As in, "This flight isn't filling up. Let's discount the tickets.") Doctors used to charge different rates to different patients, giving away some free or low-price care to their indigent patients and making it up on their more affluent patients. (I find it interesting that this kind of "privatized redistribution" was once standard practice, but that mandatory health insurance effectively eliminates this "the rich pay a greater share of society's healthcare costs" dynamic.) I like to tell a story about my eyeglasses. The original quoted price was $425, and I must have visibly balked at this price. The sales person then said, "Of course, that's with the anti-reflective coating on the lenses. We can save $150 if you go without." I chose to opt out, thinking this was a useless add-on. They ended up making my glasses with the coating anyway, and still gave me the lower price. I assume they default to making the lenses with the coating and that it doesn't actually cost extra to add it. They just use it as a bargaining chip to win price sensitive customers who bridle at the first quoted price.

Price Discrimination Is Economically Efficient

Price discrimination generally enhances economic efficiency, because it means more customers are served. Companies are identifying price-sensitive customers and trying to attract them by offering discounts. In a flat-price world those customers don't get served, because they say "No" to the single flat price. Granted, these are customers on the edge of indifference between the money and the product. Plausibly they are reaping very little consumer surplus, at most the difference between the sticker price and the discounted price they are offered. But nonetheless the practice means more production and more served customers, which necessarily implies a greater surplus. In the case of insurance, maybe it's less plausible that price discrimination allows more "production", but it is still welfare enhancing. From the point of view of the insurer, they need to collect $X from their customers to cover their costs. Insurers are identifying people who don't mind paying and allocating slightly more of the $X to them, and slightly less to people who would mind paying.

Regulators Should Permit Price Optimization

Regulators ought to allow insurance companies to do sophisticated price optimization. They need to stop treating deviations from the pure risk-based price as something sordid or unethical or necessarily contrary to sound actuarial principles. Some states have passed statutes that explicitly ban price optimization. In those states the regulator's hands are tied. In other states, regulators have simply assumed they have the authority to ban price optimization. They will hold up or disapprove filings that employ these methods. Regulators should stop assuming authority that goes beyond the literal language of their state's statutes. As I hinted at above (and described in detail in a previous post), regulators will often broadly interpret statutory language. Often the law that grants the state the authority to regulate insurance will make reference to "actuarially sound rates" or say that rates shall not be "unfairly discriminatory," and this language often echoes actuarial standards of practice. Unfortunately, some regulators decide that anything they don't like is "unfairly discriminatory." Insurance pricing is discriminatory by its very nature, and it has to be. An insurer must charge higher rates to 16-year-olds and people with poor credit; otherwise they will get only 16-year-olds and customers with poor credit, their claims frequency will explode, their losses will spiral out of control, and they will eventually go insolvent (or perhaps become a niche company that only insures 16-year-olds and other very poor risks, but they would needlessly bleed capital in the process of reaching that equilibrium). If you want to see what insurance without risk-based pricing looks like, look at the disastrous market for health insurance. Or look at Medicare and Social Security, which are prone to shocks from changing demographics. Appropriately priced insurance is necessarily discriminatory, so statutes that reference "unfairly discriminatory" rates leave us at the mercy of a regulator's arbitrary opinion of what's "unfair." I have heard current and former regulators describe disapproving or holding up a rate filing because they just didn't like a new rating variable (e.g. an auto surcharge based on prior homeowners insurance claims), even though they had no explicit authority to ban it. Many of these regulators assume price optimization is banned by default. They push back against attempts to use price optimization because they just don't care for it, even if officially they might cite boilerplate statutory language about "unfairly discriminatory" rates to justify their decisions. Insurers need a free hand to charge appropriate rates and manage their books of business. They need to be able to innovate and make decisions about their idiosyncratic risk portfolios without being held hostage by arbitrary regulators. If a state passes legislation that officially bans a particular rating variable or outlaws differential pricing based on demand elasticity, that's another matter. Of course it's the regulator's job to apply the statutes. But other than that, they should stop hindering innovation in the price optimization space by insisting on strictly risk-based pricing. They should resist knee-jerk consumer reactions that such-and-such a surcharge "seems fishy" or is unfair.

Insurance customers generally have dozens or even hundreds of options. It's basically impossible for an insurer to "overcharge" a customer, because there are always other options. Any customer who bothers to get a few quotes will generally find a lower price than what their current provider is charging. It's quite absurd to worry about nickel-and-dime price differences caused by price optimization. But from the point of view of the insurer, price optimization could mean eking out the tiny margin necessary to keep the company solvent. It could spell the difference between solvency and liquidation, which generally means lay-offs and unpaid claims for policyholders.

Tuesday, April 30, 2019

Critique of the Illinois Economic Policy Institute Report: Raising the Minimum Wage

The Illinois Economic Policy Institute put out this report titled Raising the Minimum Wage.  I mentioned this in a previous post but I thought it would be worth giving it a much more thorough treatment here. I think this report is being used by policy-makers in Illinois to justify the recent minimum wage hike. If so, someone needs to dissect the report, vet its various claims, and debunk the stuff it gets wrong. Honestly, I suspect that Illinois politicians don’t really bother with the requisite scholarship or policy analysis that they’d need to actually govern effectively. Maybe a few ranking members read the executive summary of the report, but most probably didn't even get that far. I’m pretty sure that the $15/hour minimum wage was passed based on political considerations, not a cost-benefit analysis. I seriously doubt that the people running my state did their due diligence here. When I contacted my representative in the Illinois House, she forwarded me to another member of her party who was spearheading this initiative, Will Guzzardi.  He was not responsive. He’s made some public statements that badly misrepresent what the literature on the minimum wages says. I’m curious if the ILEPI paper is one of his sources. Even if not, I want to make it a little bit harder for policy think tanks like the ILEPI to just say whatever they want. If they are going to make bad arguments in a public forum, I think someone should point out how bad they are. If they are making dishonest or misleading claims, they should be held to account and publicly embarrassed for it. Every state probably has institutes like the ILEPI who put out policy papers. It’s worth taking the time to read what they say and, if necessary, trying to debunk these papers.

The Executive Summary starts with the sentence: “Illinois should raise the minimum wage.” To their credit, they are upfront about their intent. It becomes clear as you read the document that this was their true starting point, and all the “evidence” was assembled to reach this conclusion. Then follows an irrelevant statement that 13 states have a higher minimum wage and that nine of those states have unemployment rates lower than Illinois. As I read this I braced myself for some really bad econometrics. The report did not disappoint (or should I say didn’t fail to disappoint?). It also mentions the irrelevant fact that a majority of voters support increasing the minimum wage. Okay, but maybe that’s a function of dishonest policy advocates misleading them? Maybe that’s due to widespread economic illiteracy, a problem made worse by extremely biased policy papers like this one. In policy analysis, saying that something is democratically popular is a throw-away argument. Nobody decides to be pro-X just because slightly over 50% of the population support X. It's an irrelevant piece of information, so why bring it up?

The second paragraph begins: “Raising the minimum wage boosts worker incomes while having little or no effect on employment.” This is a misleading summary of the research. I’ve written about that here. The ILEPI isn’t alone in making this claim, but they are mistaken.

The report then describes a staged roll-out, eventually getting to $15/hour by July 1, 2024, along with absurdly optimistic estimates of how much incomes would rise for Illinois workers.
From the last paragraph of the executive summary: “Illinois’ current minimum wage of $8.25 per hour fails to prevent workers from earning poverty-level wages.” If the intent is to help poor households, the minimum wage is an extremely poorly targeted policy for it. $8.25 an hour is a perfectly good starting wage for a first job. Very few minimum-wage workers stay at that wage for long. Very few of them are full-time workers in a single-earner household. A 2014 report by the CBO found that only 19% of the increased wages from a minimum wage hike would accrue to families below the poverty line, with 29% accruing to households above 3x the poverty line. 

First sentence of the introduction: “The minimum wage is intended to ensure that working-class individuals can maintain a decent standard of living.” Of course, intentions are not results. It goes on:

“Despite this acknowledgement that poverty-level wages foster reliance on social safety net programs, a full-time worker earning today’s state minimum wage rate of $8.25 per hour brings home just $17,160 in annual income. This is $3,620 below the federal poverty line for a family of three and $7,940 below the federal poverty line for a family of four.”

This is just completely irrelevant. There are very few full-time minimum wage workers, and most minimum wage workers are in households that are well above the poverty line. Also, like I’ve argued before, the notion that our social safety net programs are subsidizing the employers of low-wage workers is exactly backwards. Safety nets make the option of not working more attractive, which means employers have to pay more to attract workers.

Figure 1 is a stunningly bad piece of econometric reasoning. It lists the states with a $10/hour or higher minimum wage and gives the overall unemployment rate. Most of the research on the minimum wage focuses on teenagers or restaurant workers, in other words groups where minimum wage workers are highly represented. Minimum wage workers only make up a tiny proportion of total workers (2.3% of workers, according to the BLS). Total unemployment, calculated across the whole population, severely dilutes the effect of the minimum wage, and careful economists have caught on to this problem and adjusted their methods. I don’t know why they even bothered with this chart. It is far below the standards of modern econometric studies on the effects of minimum wages.

There is a long discussion of Chicago’s minimum wage hike. I’m not familiar with the attempts to study that particular city’s minimum wage policy. They claim (citing a paper by one of the report’s authors) that “…the policy change is working.” If I familiarize myself with the literature on the Chicago episode, I'll write up another blog post on that.

The report notes that many minimum wage studies find small elasticities: “In their meta-analysis of 64 studies, Belman and Wolfson report that a 10 percent increase in the minimum wage is statistically associated with a small 0.2 and 0.6 percent drop in employment or hours.” A couple of reactions to this. Do they really think that a 100% increase in the minimum wage would only cause a 2% to 6% increase in unemployment to the relevant workers? Taking the low estimate: Would a 200% increase only cause a 4% increase in unemployment? That seems implausible, but as we’ll see below they actually do take these estimates and extrapolate them far beyond where they are appropriate. For another thing, the disemployment effects are much stronger when you measure not just employment (as in: Are you employed? Yes or No) but also measure hours worked. You get much larger elasticities that way, and in fact the loss in hours worked can be large enough that workers actually lose net income, despite their higher wages.

Then they turn their attention to Seattle, which I do know a little about: “However, another recent study by researchers at the University of California, Berkeley found that minimum wage increases in Seattle resulted in higher earnings for affected workers in food service but had no negative impact on their employment.” This is incredibly misleading. They fail to cite the two papers by Jardim et al., which had a much more detailed dataset. The Jardim group had access to state unemployment insurance data which had “hours worked” in addition to earnings, which allowed them to compute the hourly wages for each worker, before and after the minimum wage hike. This allowed them to 1) accurately identify low-wage workers and 2) track "hours worked" over time at the individual worker level. They found huge disemployment effects, but these showed up as lost hours worked and slowed growth of jobs in the low-wage sector. The Berkeley group’s study was too crude to pick up these effects. In fact, the Jardim et al. papers effectively replicated the Berkeley group’s findings by only looking at restaurant workers (in other words, by ignoring some of the rich features of their dataset), which is strong evidence that all these “null result” papers are hobbled by inadequate datasets. When you have the data in its full detail, the disemployment effects show up quite clearly. Pardon me for saying so, but this shows very bad faith on the part of the authors of the ILEPI report. Clearly the results of the Jardim group discredit the conclusion the ILEPI would like to reach, so they fail to disclose them to their readers. (The Jardim et al. papers were out when the ILEPI published this report.) This is part of the reason why we get so much bad policy.

The paper mentions more intense job search and reductions in turnover as ways of explaining the low disemployment effects found in (some of) the minimum wage literature. As I’ve written before, those are costs, not benefits. People who are trying to justify a higher minimum wage need to be upfront about this. A standard economic treatment of these issues treats them as costs, as part of the deadweight loss. (See the last image in this post and the surrounding discussion; the small triangle is the deadweight loss from foregone employment that would have happened at the natural market wage, and the pink rectangle is the potential deadweight loss from extra job search.)

There is then a discussion of who benefits (demographically speaking) and by how much. All of this is irrelevant if you don’t buy their assumptions about disemployment effects, but go ahead and read it.

I was perturbed by the discussion and tables under the heading Economic Impact: Minimum Wage Hikes Would Grow the Illinois Economy. “Drawing on the economic research, Figure 4 assumes that every 10 percent increase in the minimum wage causes a 1.1 percent increase in worker incomes and a 0.45 percent decrease in working hours. These “elasticities” are midpoints between the comprehensive analysis of dozens of minimum wage studies (Belman & Wolfson, 2014) and the more recent, and perhaps more relevant, evaluation of the Chicago minimum wage hikes (Manzo et al., 2018).” I criticized these assumptions and the resulting table, Figure 4, in a recent post.
I'll repeat the relevant points from that post here. Here is an example of a calculation in which someone really is treating the minimum wage like a perpetual motion machine. This study (IMO a terrible one, more on that in a later post) by the Illinois Economic Policy Institute attempts to calculate the effects of a minimum wage on various economic outcomes. See Figure 4 and the associated discussion in the text. They claim that a literature review turns up a result that a 10% increase in the minimum wage results in a 1.1 percent increase in worker incomes and a 0.45 percent decrease in hours-worked (presumably this comes from the various studies measuring the elasticity of demand for low-skilled workers). They apparently think that you can extrapolate those numbers to arbitrarily high increases in the minimum wage, because that's exactly what Figure 4 is doing. I want to say, "Okay, show me what the result will be for a $50/hour minimum wage. Or $1,000/hour for that matter." They get that a $10/hour minimum wage will result in a 1% reduction in working hours and a 2.3% increase in worker incomes (from a starting point of an $8.25/hour minimum). They get this by calculating the change in the minimum wage, 10/8.25-1 = 21.2%, and simply multiplying through by the numbers above. So 21.2% * (1.1%/10%) = 2.3% for the change in worker incomes. 21.2% * (-0.45%/10%) = -1% for the reduction in worker hours. They do exactly the same thing for the $15/hour minimum wage: 15/8.25-1 = 81.8%. So 81.8% * (1.1%/10%) = +9.0% for the change in income and 81.8% * (-0.45%/10%) = -3.7% for the reduction in hours worked. If the 1.1% and 0.45% can really be extrapolated to arbitrarily high minimum wages, then they have a perpetual motion machine. The increase in incomes keeps going up forever. If asked about a $30 or $50 minimum wage, the authors might demur. "Oh, of course you'd start to see bigger disemployment effects at that point." But why wouldn't they also see them at $13 and $15/hour? The $13 and $15 are minimum wages far larger than what the 1.1% and 0.45% numbers are calculated from, so even extrapolating this far is dubious.
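
To see how mechanical this is, here's the Figure 4 arithmetic as I understand it, carried out to wage floors nobody would defend. The elasticities are the report's; everything else is just their multiplication.

```python
# The ILEPI Figure 4 arithmetic as I read it: multiply the percent change in the
# minimum wage by fixed elasticities of +1.1% (incomes) and -0.45% (hours) per
# 10% increase, no matter how large the increase is.

current_minimum = 8.25
income_per_10pct = 0.011    # +1.1% worker incomes per 10% minimum wage increase
hours_per_10pct = -0.0045   # -0.45% hours worked per 10% minimum wage increase

for new_minimum in [10, 13, 15, 30, 50, 1000]:
    pct_change = new_minimum / current_minimum - 1
    incomes = pct_change * (income_per_10pct / 0.10)
    hours = pct_change * (hours_per_10pct / 0.10)
    print(f"${new_minimum:>5}/hr: incomes {incomes:+.1%}, hours {hours:+.1%}")
```

The $10 and $15 rows reproduce the report's +2.3%/-1.0% and +9.0%/-3.7%. The $1,000 row "loses" more than 100% of hours worked while incomes keep soaring, which is the perpetual motion machine.
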
This is a general criticism I have of this report and of other advocates of minimum wages: Okay, so show me what happens with a $50 minimum wage. Or a $1,000 minimum wage. Are your equations and calculations telling me something sensible? If they are obviously missing something at these very large values, then isn’t it likely they’re missing something, even if it’s subtle, at lower values?

The report attempts to quantify “multipliers” using IMPLAN software, which it refers to as a “‘gold standard’ in economic impact analysis.” I’m not familiar with the software, so someone who has done real scholarly economic research can chime in and tell me whether this is an accurate description or a legitimate use of the software. I find it highly dubious that some off-the-shelf software can accurately simulate a real economy after a policy change such as a minimum wage increase. I don’t know if this kind of thing is common in economic research, but I’m pretty sure it isn’t valid. Figure 5 shows the results of these simulations. Unsurprisingly, they find net benefits for the $10, $13, and $15 minimum wage. Here is where my general critique comes back in: Show me what happens when you plug in $30 or $50, or $1,000 for that matter. Is there still a “net economic benefit”, even though no reasonable person believes there would be one? To their credit, they disclose that there would be a large reduction in hours worked (again, assuming the IMPLAN computations are right): “The impact on employment would be a drop of about 220 million labor-hours in Illinois. However, despite the estimated drops in total hours of employment, the positive economic impact means the minimum wage hike would positively impact more workers than those who would be negatively impacted by it.” I am just incredulous at this line of argument, which I’ve seen elsewhere. Even when you get minimum wage advocates to admit to some kind of job loss, they dismiss it in light of the “net benefits” or otherwise assume it will just turn out alright for the people who lose their jobs. Maybe it’s the most vulnerable workers with the lowest skill-level who lose their jobs, and the benefits accrue to the better-off among the minimum wage workers? Indeed that would be a sensible a priori assumption, and that’s basically what the Jardim et al. studies of the Seattle minimum wage hike found. Weighing these job losses against economic gains simulated in canned software, and siding with the simulated gains, is highly suspect.

The report claims: “As a result, more than 35,000 low-income workers in Illinois would be lifted out of poverty if the minimum wage was increased to $10 an hour. This would represent a 2 percent drop in the total number of people living in poverty across Illinois.” Again, we’re assuming that the increased wages aren’t offset by hours reductions or job cuts, which would plunge some of these workers into even worse poverty than what they’re now experiencing. See their summary of poverty reductions in Figure 6. Again, I’d like to see what this table looks like for very large increases in the minimum wage. If it tells us that there would be a large reduction in poverty at $20 or $25, it would make me even more skeptical of what it’s telling me about what’s happening at $10 and $13. Figure 7 in the same section attempts to quantify the impact on the Illinois State budget. Higher minimum wages, to the extent that they actually increase take-home pay, might increase income, sales, and property taxes. They’d also make people less dependent on SNAP and other transfer programs. Once again, the bigger the minimum wage hike, the more money Illinois saves! I hate to repeat myself, but let’s see them plug in $50 or some other absurdly high value. If they think they have a true perpetual motion machine, let them say so. If they don’t, let them explain what’s fundamentally different about “small” minimum wage hikes. (Scare quotes around “small” because we’re talking about nearly a doubling here.)

Obviously this report has a lot of problems, and I hope nobody is citing it to argue for a higher minimum wage. For my readers in other states, check to see if your state has a counterpart to the Illinois Economic Policy Institute. There is some low-hanging fruit here in terms of policy advocacy. Many of these state-level think tanks are poor in resources and can’t afford to fund high-quality scholarship. They put out stuff like this to influence policy. Don’t let them succeed, not if they haven’t earned it. If they put out reports that are full of mistakes, material omissions, poor arguments, and motivated reasoning, call them on it. If there is a local think tank that you are sympathetic with, do the same for them and help them make more convincing arguments. Most people in the policy analysis space probably think they can just uncritically cite a study (like the ILEPI piece) and be believed by their receptive audience. Don’t make it too easy for them. A lot of published research is just no good, and a little bit of critical reading can go a long way. People who put out stuff like this under their name should feel some hesitation or embarrassment. They need to do their scholarship with the feeling that "I can't just say anything. It has to at least make sense. Otherwise that jerk Jubal Harshaw is going to jump on me." David Henderson set an excellent example of what I'm talking about here, here, and here. Maybe the authors of the CEA report were able to brush off Henderson's criticism with a "Who cares what this Henderson guy thinks." But I think most academics, deep down, are honest and will feel nagged by the idea that they've said something wrong or easily critique-able. Putting out these critiques slightly changes the incentives in an Adam Smith Theory of Moral Sentiments sort of way, if not in a Wealth of Nations sort of way.