Wednesday, August 7, 2019

What It Means For a Worker To Be Exploited

The charge is often made that low wage workers are being "exploited" by their employers. What is meant by "exploited" isn't usually defined clearly, so let's explore some possibilities.

1) A low wage is exploitation per se. Even if the worker's productivity is in line with their wage.
2) The low wage isn't a necessary precondition of "exploitation." What's important is that the employee produces far more than their wages indicate. Their employers are earning a huge surplus on their labor, and declining to share it with their employees.

Casual conversation suggests people are using definition 1). But 2) seems like a more reasonable definition of "exploitation." It's possible that some people are so unproductive, for whatever reason, that they simply cannot produce more than $7.25 an hour for their employer, no matter how well managed or well capitalized their workplace is. Such a worker is not being exploited. In fact, there's nothing special or logical about the $7.25 figure, which (frankly) was basically pulled out of someone's ass just over a decade ago. (Minimum wages are set by politicians enacting whatever legislation they can get away with, not carefully fine-tuned by economists and statisticians who come up with the answer to a complex optimization problem.) Some employees might only be able to eke out a mere $3 of productivity per hour. An employer who puts this person in a job might actually be doing charity work, in the sense that they're losing $4.25/hour to employ them. Assuming such cases exist, does it make sense to talk about the worker "exploiting" the employer? (I suspect that some employers are consciously doing charity work for workers who otherwise have a hard time finding work, like people with criminal records or disabilities, and I'm certainly not criticizing them for doing something pro-social. Just pointing out that it's unreasonable to expect businesses to make a habit of losing money.)

Thomas Sowell has actually made a similar argument by pointing to highly paid superstar athletes. He points out that these people will tend to be hired by whatever team will earn the most from them, in terms of ticket sales, advertisements, merchandise, etc. A team in a city with a larger population will be able to pay more, because they are more likely to sell out tickets for a sporting event and have more fans bidding up the prices of those tickets. A team that is already popular, with a large fan base, might be able to make more money on advertising. The highest-bidding team for a superstar's services might conceivably make $10 million more than the next highest bidder because of the athlete's effect on all these revenue streams. But they might be able to out-bid their second-best rival by offering only $1 million, thus keeping $9 million of that person's value-added. (Of course, the actual outcomes are risky and somewhat random, but businesses, including sports franchises, think in terms of expected revenues and profits.) One might say that highly paid athletes are exploited to a much higher degree than low-paid workers, assuming "exploitation" implies the intuitive definition, productivity in excess of pay. And this argument would tend to apply to other highly paid workers. The very best CEO for a company might earn the company tens or hundreds of millions over what the second-best candidate would earn, but they are likely to only capture a fraction of that in take-home pay. Nobody thinks of these people as "exploited".

Maybe someone will wish to argue that, actually, those low-paid workers do produce an enormous surplus for their employers. Thus they are being exploited according to definition 2). This theory bumps up against some uncomfortable realities. Who gets hit the hardest during an economic downturn in terms of layoffs and unemployment? Low-wage workers or highly paid workers? Empirically and (I think) matching with most people's intuition, it is the low-wage workers. But this makes no sense if these workers are earning a huge surplus for their employers. If Walmart is paying its employees $14.26 an hour (Walmart's average hourly wage) but earning, say, $30/hour, then each employee is a tiny gold mine. Likewise, why keep around the highly paid (some argue "overpaid") employees during hard times, when any inefficiencies in the system bite hardest? Also, take a look at profit margins by industry. Do sectors that hire a lot of low-wage workers have generally high or generally low profit margins? Consider also the very high rates of turnover (as in openings and closures) in the restaurant industry. Why is it so hard to stay in business if it's so easy to exploit low-skilled labor? Finally, supposing it were true that there were some kind of enormous profit to be reaped by employing low-skilled workers, shouldn't that attract more competition? If Walmart is only paying $14.26 for something that's actually worth $30, shouldn't someone else bid for that worker's services? Even assuming the bidding doesn't get the worker's pay up to $30/hour, shouldn't someone at the very least bid an extra dollar? Shouldn't another employer be willing to earn "only" $14.74 instead of $15.74 on each employee's labor? Markets are generally pretty good at not mispricing things, because this kind of bidding and counter-bidding is always happening. Nothing should stay badly over- or under-priced for long. If a worker is working at Walmart for $14.26/hour, it probably means nobody else was willing to pay them more for comparable work, and Walmart actually gave them their best option. It makes little sense to heap scorn on the employer who is doing the most to help that person.
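To make the "bidding and counter-bidding" logic concrete, here is a minimal sketch in Python of how competition for a worker plays out. The employer valuations are made-up numbers, purely for illustration; the point is that the wage gets bid up toward the runner-up employer's valuation, while the winning (most productive) employer keeps the difference between its own valuation and the final wage.

# Toy illustration (not a real labor-market model): employers keep outbidding
# each other for a worker as long as the going wage is still below what that
# worker's labor is worth to them.

def competitive_wage(valuations, starting_wage, increment=0.25):
    """Raise the wage in small steps until only one employer is still willing
    to pay it. Returns the wage the winning employer ends up paying."""
    wage = starting_wage
    while sum(v > wage + increment for v in valuations) > 1:
        wage += increment
    return wage

# Hypothetical numbers: the worker produces $30/hour of value for the best-run
# employer and somewhat less everywhere else.
valuations = [30.00, 27.50, 24.00, 20.00]
wage = competitive_wage(valuations, starting_wage=14.26)
print(f"Wage after bidding: ${wage:.2f}/hour")                # ends up near $27.50
print(f"Winner's surplus: ${max(valuations) - wage:.2f}/hour")

This is the same mechanism as in the superstar athlete example below: the winner only has to outbid the second-highest bidder, so pay lands near the runner-up's valuation rather than the winner's, and the winner keeps the remaining surplus.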

I think we do ourselves a disservice by misstating the nature of the problem. The shitty reality is that some people just are not very productive. It could be an intellectual limitation, like they're just not very smart, or a social limitation, like the inability to correctly pick up on cues and instructions from their manager, or a behavioral issue, like chip-on-my-shoulder resentment towards authority and general insubordination. If these prosaic explanations of low wages make more sense than theories about power structures and "exploitation", we should face that fact with our eyes wide open. Otherwise we'll implement bad policy solutions that don't address the actual problem. Minimum wage legislation and other legislation that's meant to "protect the worker" can be counter-productive by making it even less profitable than it currently is to hire certain workers. Or even un-profitable, as in, "I can only hire this person at a loss." This isn't a plea to shed a tear for business owners, but rather to understand what business owners are likely to do in response to profit motives. If we are trying to incentivize pro-social behavior on the part of businesses, we need to set the incentives right. That first requires correctly specifying the problem.

If you do know how to coax tremendous productivity out of low-skilled workers, please go into business and hire them. You will bid up their wages, you will make a fortune for doing it, and you will become my personal hero. I dearly wish I knew the secret formula for getting higher productivity out of people, and moreover in a way that scales up to millions of people. Sadly I do not.

________________________________________________

It's also worth noting that only about 2-3% of the labor market earns the minimum wage or less, so the fraction of workers "protected" by this policy is small by any measure. Businesses apparently are willing to pay workers more than the statutory minimum in the overwhelming majority of cases. The reason, I suspect, is that labor markets are actually quite efficient and good at paying the appropriate wage for the worker's value added.

Sunday, August 4, 2019

The Minimum Wage: a Modest Proposal on Data Gathering

What follows will be a mere data gathering proposal. I'm not intrinsically arguing for or against a minimum wage with this post, just arguing for the data collection that will allow us to study the effects of the minimum wage. In a sentence: Every state should start tracking every worker's total hours. (Every state already tracks total earnings for the sake of workers compensation eligibility, among other things, so this is not much of a change.)

The Seattle minimum wage studies by the Jardim et al. group were particularly well done and revealing, because they actually had the data to accurately measure wages and hours worked for basically everybody in the state of Washington. The only states that currently capture this data on "hours worked" are Washington, Oregon, Rhode Island, and Minnesota (according to Jacob Vigdor, who was kind enough to correspond with me by e-mail on this and other questions). Most states collect this data for the sake of tracking eligibility for workers comp, and most states just have a "total earnings" trigger for eligibility. Washington (and possibly the other three) also has a "total hours worked" trigger, so it has to capture this data to administer workers comp in compliance with state law. If every state were collecting this data, the empirical work on minimum wage would be a lot more accurate.

Knowing everyone's hours worked in addition to total earnings allows us to 1) figure out what each worker's average wage was (by simply dividing the one by the other) and 2) figure out if minimum wage increases cause workers' hours to be scaled back. Item 1) is important because most studies have used crude proxies for "minimum wage earners", effectively assuming that restaurant workers or teen employees are minimum wage workers. With "hours worked" we could accurately identify minimum wage (or near-minimum-wage) workers before and after the wage hike. Item 2) is important because employers may be more prone to scale back hours than to do outright layoffs, so the "disemployment" effect may show up as lost hours rather than lost jobs. Indeed, this is what was found in Seattle, and low wage workers' hours were scaled back so much they actually lost out in terms of total wages.
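As a concrete sketch of both items, here is what the computation could look like in Python with hypothetical records (the worker IDs, earnings, hours, and the $8.00 "pre-hike minimum" below are all invented for the example):

import pandas as pd

# Hypothetical quarterly records of the kind the proposal would have every
# state collect: total earnings plus total hours for each worker.
records = pd.DataFrame({
    "worker_id":      [1, 1, 2, 2],
    "quarter":        ["2018Q4", "2019Q1", "2018Q4", "2019Q1"],
    "total_earnings": [4160.00, 3600.00, 5200.00, 5400.00],
    "total_hours":    [520, 400, 520, 520],
})

# Item 1): the average hourly wage is just earnings divided by hours.
records["avg_wage"] = records["total_earnings"] / records["total_hours"]

# Flag workers at or near a hypothetical pre-hike minimum of $8.00/hour, so
# they can be followed before and after the wage increase.
baseline = records[records["quarter"] == "2018Q4"]
near_min = baseline.loc[baseline["avg_wage"] <= 8.00 * 1.1, "worker_id"]

# Item 2): did those workers' hours get scaled back after the hike?
hours = records.pivot(index="worker_id", columns="quarter", values="total_hours")
print(hours.loc[near_min].assign(change=lambda d: d["2019Q1"] - d["2018Q4"]))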

My hesitation about writing this post came from reading Coyote Blog. The blog's author is an employer and often writes about what a pain in the butt it is to gather data for various government reporting agencies. So I specifically asked him this question via e-mail, and he was kind enough to respond. He told me that almost all employers use a payroll provider anyway, and those providers are almost certainly tracking the number of hours worked in order to calculate employees' paychecks accurately. It's possible that this data is being collected somewhere already; it's just not being reported to any central government for aggregating and doing studies. Collecting this might be as simple as flipping a switch already built into each payroll provider's code, or at the very most changing some code to make it similar to what's used in Washington. (He clarified that what's painful is when governments start asking for data that was never collected before, which the employer might not even know. Say, a demographic category, or something like "disability status", which an employer might not be legally allowed to even ask about. He's written extensively about this on his blog, so I'm not revealing any secret details from the e-mail.)

This proposal is agnostic about whether the minimum wage is a good idea or a bad idea. Minimum wage proponents should be saying, "Yes, that will give us the firepower we need to refute Seattle and put this to bed once and for all!" Minimum wage opponents should be saying, "Yes, that will allow us to replicate the findings of Seattle as more states and cities roll out minimum wage hikes!" Neutral parties can take a wait-and-see attitude. Economists should be celebrating the job security of having tons of extra data to analyze. "Yay! Now we can figure out new statistical tricks to make the effect get bigger/go away!"

To expand the proposal just slightly, we could also start tracking each employee's municipality, in case different cities have different minimum wages. The Seattle study started with all Washington workers, but it had to infer who did and didn't work in Seattle based on the employer's address. This didn't work for, say, McDonald's, because McDonald's has locations all over the state. Only businesses that were unique to Seattle could be located for the sake of the study. I think the Jardim et al. papers handled this limitation adequately and their results are still valid, but some critics have latched on to this as a reason to dismiss the paper's results. So let's fix the problem going forward.

Weird Bullet-Biting by Minimum Wage Proponents

It's frustrating how the argument sometimes changes from "minimum wages have no effect on unemployment or job counts" to "those were bullshit jobs that shouldn't have existed anyway."

I'm thinking of a few recent examples from social media. Someone had posted a picture of the checkout counters of a store, where almost everything had been converted to self-checkout. This was from a state that had recently increased its minimum wage. The point was that it's easier to staff checkout counters with people if your store is free to set the wage. These are obviously low-skill, entry-level positions. Stores cannot afford to pay an arbitrarily high wage to these employees, so those jobs start to disappear when the minimum wage is raised high enough. Anyway, someone chimed in with a comment such as (paraphrasing): Those are demeaning jobs, and they should go away anyway. I guess that's fine if it was always your position. Someone could claim to be internally consistent if their position was that the minimum wage should be raised in order to kill off the "demeaning" jobs. (I find this to be an incredibly arrogant position and think it's insulting to low-wage earners, but I could imagine someone believing it). What makes it less believable is that these are often the very same people who argue that minimum wages have no impact on unemployment. They are inventing a post hoc rationalization when the bad consequences of their policy come to fruition (which they were warned about, and which they often explicitly denied would happen).

I hear this point expressed in different but similar language. When I have attempted to argue that a job might be worth doing at $5/hour but not at $7.25/hour (much less at the incredibly irresponsible $15/hour minimum some are proposing), sometimes minimum wage proponents implicitly acknowledge the point by saying "If you can't pay the minimum, you shouldn't be in business anyway." I've also heard, "You have to pay to play" from someone making the same point. I wrote about this here in my summary of Jonathan Meer's debate with Jamie Galbraith. Galbraith at one point bizarrely admits that some jobs will disappear, while maintaining adamantly that there will be no net loss in employment. (Galbraith's point: "Sure, you'll see some disruption. The whole point is to change the structure of the labor market.") This is weird. Someone is admitting that there are low-valued tasks out there in the world that might be worth doing. They are acknowledging that some entrepreneur could feasibly organize some (probably low-skilled) employees to do these tasks at a wage those employees would accept, a wage which is below the current (or proposed) legal minimum. But they are borderline gleeful about destroying such jobs.

For another related example, listen to this episode of Econtalk with Jacob Vigdor, in which he discusses Seattle's experience with a very large increase in the minimum wage.
And I'd say that there are not a whole lot of people who express optimism about the future of low-wage employment. I had a political operative from the Seattle Mayor's office come visit me a couple of years ago. This particular staffer from the Mayor was wondering if I would be willing to sort of go out in public and advocate for a higher minimum wage. And, I responded by saying, 'Look, that's not my job. I'm not an advocate. I'm a researcher.' And I mentioned to him that our research was actually showing that there were some potentially adverse impacts of Seattle's minimum wage. And he responded to me by saying, 'Well, in the long run, aren't these jobs going to go away anyway?' And so, this is coming from someone whose job description is to be a minimum wage advocate.
Emphasis mine. Just think about the contempt for low-wage workers that this reveals on the part of their political "advocates." I should acknowledge here that the staffer in question isn't necessarily wishing these jobs to go away, as I've heard some people do. He's just observing that they probably are going away regardless and not feeling too bad about hastening their demise.

This doesn't make any sense, at least not if someone is insisting on both the "no net job losses" story and the "those bullshit jobs should go away anyway" story. Supposedly there are people who would be in business hiring people for $5/hour, if we would let them. (And, importantly, there are workers who would willingly accept that wage. The willing consent of the workers is often overlooked or dismissed as "acting under economic duress" by minimum wage advocates.) If the "no net job losses" story is true, that means there are also, simultaneously, people willing to pay (for example) $15/hour for those same employees. Employers pay the wage necessary to attract the labor they need, but only up to the point that it actually pays off to employ that person. In other words, employers basically pay employees for their productivity. Supposedly there are employers who can coax $15/hour of productivity out of any given employee, but who would hold back in a world where the minimum wage was still $5/hour. These employers would supposedly sit quietly and let someone else buy up all the labor at $5/hour, even though these more productive employers could easily get an additional $10/hour of productivity out of them. Supposedly they know the secret formula for coaxing productivity out of low-skilled workers but will only do so if we make them? (Because, I don't know, they don't like money?)

Maybe I'm tying myself into a knot trying to make sense out of this. Most people are just sleep-walking into their political beliefs; they don't really have a coherent viewpoint or a well-articulated picture of their policies playing out. Their "ideology" is a bunch of ad hoc responses to criticism thrown together in a slap-dash manner, with nobody checking to see if the bullet points contradict each other (or common sense or economic theory or empirical studies).

In my preferred story, there is no contradiction. The minimum wage proponents are wrong on both fronts. Raising the minimum wage causes net job losses. The best minimum wage studies to date confirm the Econ 101 story, and find that the effect is quite large. And those low-wage jobs, the ones that are already killed off or would be killed off with another minimum wage hike, are perfectly legitimate options. Third parties should not be in the business of judging the morality of someone else's labor contract. If you aren't actually doing the hard work of figuring out how to put low skilled workers to work and getting high productivity out of them (and I absolutely salute you if you are such a hero!), if you aren't actively bidding for those workers' labor because you have a better deal to offer them, then it's really not your business to judge someone else's arrangement. This is coming from someone who generally tries to stay out of the "who holds the moral high-ground" game, because it's such a quagmire. But since some people come to this argument with nothing but moral rectitude, I should point out that minimum wage opponents have a good reason for thinking they also hold the high ground. That's why "I hold the moral high ground" doesn't get us anywhere.

Sunday, July 28, 2019

Prescription Opioid Abuse Trends in the 1980s and 1990s

Most discussions of the so-called "opioid epidemic" constrain themselves to the 1999 to present period. This is mainly due to data limitations. 1999 is the first year that the CDC started coding deaths according to the ICD-10 system. The years 1978 to 1998 were coded under ICD-9 and are not directly comparable, so most tabulations start in 1999. And the SAMHSA reports, like this one, present time series of opioid abuse rates starting in 2002. I've shared this chart from the SAMHSA report in a few previous posts:

Plainly prescription opioid abuse is flat then declining over this period. (The SAMHSA survey refers to opioids with the dull term "pain relievers".) And, looking at the CDC data, it's clear that drug poisoning deaths are increasing over this period. But it makes little sense to restrict ourselves to this time frame. Attitudes about opioids and prescription practices supposedly started changing in the early 1980s, with doctors becoming more relaxed and more willing to prescribe. A very short letter published in the New England Journal of Medicine in 1980 supposedly kicked off the change in attitudes toward opioids. By starting in 2002 we would miss two decades of the relevant trend-line. There is another survey that tracks opioid use, Monitoring the Future, that goes back to the mid-1970s. But it only tracks 12th graders, who won't necessarily be representative of the older patients, people with chronic pain, limited mobility, and various infirmities, who are most likely to be affected by loose prescribing practices. Here it is, anyway:


Plausibly, this tells a story of opioid abuse rising in the 90s, from ~3% to ~9%. But then, in the 2000 to 2010 period, when opioid prescriptions tripled, the abuse rates are roughly flat, even declining. And abuse rates are trending down in the 1980s. One has to cherry pick carefully to make it look like prescription practices are correlated with opioid abuse rates. Also problematic: "Past 12 month use" patterns might be very different from, say, addiction rates or more frequent drug use rates. It could be that infrequent, casual drug use is increasing but not really causing a problem.

The easily accessible data, for the early 2000s to present, seems to debunk the standard narrative of the "opioid epidemic." My most popular post ever is a long take-down of this narrative. Also see this Cato paper, which I had the honor of helping to write. Opioid abuse rates are simply not correlated with opioid prescription rates. In fact, when restrictions on prescription opioids started to bite around 2010, heroin overdoses started to skyrocket, followed promptly by fentanyl and other super-opioids. Some proponents of the standard narrative respond to this by speaking of "stocks" and "flows" of addicts. In this story, the 1980s and 1990s left us with a stock of opioid addicts. The increase in prescriptions in the early 2000s didn't much change the rate of opioid abuse, because we were already saturated. (I'm guessing here; I can't find an instance of someone coherently articulating this story.) Then, opioid restrictions starting around 2010 drove the existing stock of addicts to heroin and illicit fentanyl. Proponents of the standard narrative can still claim that, if it hadn't been for the relaxation in prescription standards starting some time in the 80s and 90s, we wouldn't have the current crisis. But this depends on speculating about what was happening during this time frame. Data is almost never forthcoming, so I've had to check for myself.

My previous attempt to reach further into the past is written up here. I excerpted a chart from Lies, Damned Lies, and Drug War Statistics that shows lifetime use for "nontherapeutic use of prescription drugs." Even here, the timeline only goes back to 1990, and "lifetime" use is problematic for the reasons described in the paragraph above. Also, it includes drugs other than prescription opioids (mostly benzodiazepines, barbiturates, and ADHD medications).

Then I found these files. It's the National Household Survey on Drug Abuse going back to 1979. One can open each year's study (on the right side of the page), click on the link under "Datasets in this Study", then click on the Codebook.pdf link under Dataset Documentation, and there is a useful summary of that year's findings. Here's what 1979 to 1998 look like:

It's hard to say there's an obvious trend-line. I mean, clearly the 1998 point is higher than the point for 1979. Crudely running a regression yields a very slightly upward sloping trend-line (though the time trend is not statistically significant by the traditional p < 0.05 standard; I'm getting p = 0.60). But it just looks fishy. Note that the early surveys are only done once every three years, then start being done annually in 1990. Did opioid use really triple from 1979 to 1985, then plummet in 1988? Something about this screams "bad methodology", or at least "changing/non-comparable methodology." It seems like SAMHSA was just getting its shit together in the early days, and these data represent some kind of reporting bias rather than real trends.
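For what it's worth, the mechanics of that crude regression are nothing fancy. The sketch below shows the idea in Python; the yearly values are placeholders for illustration only, since the real figures have to be pulled from each year's codebook as described above.

import numpy as np
import statsmodels.api as sm

# Placeholder values for illustration only -- the real "past month analgesic
# use, % of respondents" figures come from each year's codebook.
rates = {1979: 0.6, 1982: 1.2, 1985: 1.8, 1988: 0.7, 1990: 0.8, 1991: 1.0,
         1992: 0.9, 1993: 1.1, 1994: 1.0, 1995: 1.0, 1996: 0.9, 1997: 1.2,
         1998: 1.1}

years = np.array(list(rates.keys()), dtype=float)
y = np.array(list(rates.values()))

# Simple OLS of the use rate on calendar year: the slope is the "time trend,"
# and its p-value says whether that trend is distinguishable from noise.
model = sm.OLS(y, sm.add_constant(years)).fit()
print(f"slope: {model.params[1]:.4f} per year, p-value: {model.pvalues[1]:.2f}")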

Here is what happens in the 2002 to present era:

The trend in this chart matches the chart pulled from the SAMHSA report at the top of this post. "But, this chart says 3% of people used prescription opioids in the past month, at least for the flat part from 2002 to 2010. The chart at top says it's hovering around 2% for 12+ year olds. What's the difference?" This point initially confused me. The chart immediately above is "% of respondents." I think the SAMHSA report is taking this raw survey data and restating it to reflect the distribution of the American population. So if the survey over-samples young people (who use drugs at higher rates), the "% of respondents" will be high compared to actual rates of drug use in the population. I assume some smart people at SAMHSA thought of this and restated "% of respondents" to reflect "% of U.S. population." There must be significant differences in year-to-year sampling, because the downward trend from 2010 to 2014 is more intense than in the top chart. Here's a telling excerpt from the 1988 code book:
In 1979, respondents from rural areas were oversampled, and in 1985 and 1988, blacks and Hispanics were oversampled to increase the reliability of estimates of drug use of these important groups.
In this light, the three points from 1979, 1982, and 1985 make a lot more sense. Clearly the SAMHSA people think these populations differ in their rates of drug use and are changing their sample to collect the right data. But this makes "% of respondents" non-comparable from one year to the next. If someone has taken the 1979 to 2001 surveys and converted "% of respondents" to "% U.S. population", I haven't found it. (Maybe this is a useful project for some grad student. See the bottom of this post for ideas to explore this further.)

Notice another feature of this data, one which I've discussed previously: the survey changed from asking about "non-medical use of prescription opioids" to asking about "misuse" in 2015. (I have the change marked on the graph, with "misuse" years marked in blue.) I don't know why they did this. "Non-medical use" means basically recreational use. "Misuse" includes recreational use and medical use not intended by the physician. For example, someone takes more pills than recommended to treat their acute pain, because the recommended dose isn't cutting it. Or someone has left-over pills from a previous surgery and uses them for a sprained ankle. "Misuse" is a more inclusive measure than "non-medical use". It's interesting to note that the trend continues to fall after 2015 even though it's using a more inclusive definition.

I want to be fully transparent here and show you the full time series. I had initially just pulled the data up to 1998 and thought I had a good post worth sharing. But something interesting happens in 1999. In the 1979 to 1998 surveys, the questionnaire asked about prescription opioid abuse using the somewhat obscure term "analgesics," while making clear that it is not asking about Tylenol or ibuprofen. This doesn't completely leave survey respondents in the dark if they don't know that word; it also asks specifically about a list of opioids (demerol, dilaudid, hydrocodone...). In contrast, the 1999 to present surveys ask about "pain relievers". If I took the numbers literally, prescription opioid abuse was ~1% in 1998, doubled to ~2% in 1999, and hit 3% by 2003, then flattened out for a decade. The sudden jump to 2%, after hovering right around 1% for the prior decade or two, is almost surely an effect of the survey wording changing. I don't know exactly why it would have gone to 2% for a couple years before jumping up to 3%, rather than just jumping straight to 3% in one shot. I would think a change in survey language would cause a one-time jump. It's possible that use rates really are increasing during this period. Also, once again, the sample population may be changing, such that "% of responses" doesn't mean the same thing as "% of the U.S. population." So it's hard to say what's really happening. (Note that the figure from Lies, Damned Lies, and Drug War Statistics, which I shared here, also brackets off the 1998 to 2002 period, as if to point out to the reader there is something peculiar with those years.)



I think it's implausible that opioid abuse actually tripled in those few short years, then flattened out. This doesn't match any version of the opioid abuse narrative that I'm aware of. Attitudes about opioids had already been changing for perhaps two decades. There were already moral panics against prescription opioids in 1997, to which Reason Magazine responded with this piece by Jacob Sullum. Pain patients in the 1990s were having trouble finding doctors who would treat them, and doctors who mercifully served chronic pain patients were facing criminal charges for prescribing "too many" opioids.

This is the great frustration I have with peddlers of the "opioid epidemic" narrative. They don't seem to have any kind of coherent timeline in mind. In fact, I once discussed this with someone who researches this stuff for a living, and we were trying to figure out which of several possible competing narratives they subscribe to. Are 1) normal pain patients just dropping dead from the normal use of their legitimate prescriptions? Are 2) normal patients turning into addicts, who then intentionally misuse their prescriptions? Are 3) normal patients not, in fact, dying at excessive rates from overdoses, but diversion of their pills to the black market driving an epidemic of addiction and overdoses? Or 4) do proponents of the "opioid epidemic" narrative not even have a coherent enough story to distinguish between the various competing causal chains? (The list above is by no means exhaustive, and does not contain my preferred version of the story.) These different stories make different predictions about the trendlines in the CDC overdose data and the SAMHSA drug use data, and they make specific predictions about how these data should overlap. If 1) is correct, you could see drug poisoning deaths from opioids increase without any evidence of increasing abuse or addiction rates, which is in fact what we see. (Unless we count the 1998 to 2002 tripling in "past month use" as a real trend, and I argued above that we shouldn't.) Story 2) requires seeing an increase in abuse rates somewhere along the timeline. Story 3) probably does, too, unless the story is that the population of users doesn't increase but they are all using more intensely. The problem is that journalists and politicians who tell this story never bother to nail themselves down. It's not really clear what they are claiming, so it's hard to dispute their claims. They just vaguely know that opioid prescriptions increased and that opioid-related drug poisonings increased subsequently. For a book-length version of this story that's high on anecdote and low on data, read Sam Quinones' Dreamland. It wraps together a nice story, but without actually overlaying the trends in opioid prescriptions, addiction or abuse rates, and death rates, it fails to actually say anything. It's a nice case study in how to not do policy analysis. That would require scrupulously specifying the causal mechanisms and showing that these comport with the data. (Most of the action in Dreamland is from the 1990s, not from the early 2000s when opioid prescriptions tripled, and not in the 2010s when heroin overdoses started skyrocketing. Is the 1990s when the "stock" of addicts was rising? I wish someone would clarify.)

Here is the full time series, 1979 to present.


I have it color coded to show the various changes to survey wording, explained above. It's only with hesitation that I share this, because I don't think the points from different years are measuring the same thing. But in the spirit of full transparency, here it is. If I were less honest, I might have only shared the piece that was missing from my previous posts, the 1979 to 1998 period. If I were a demagoguing drug warrior, I might emphasize the 1998 to 2003 transition as a "spike in opioid abuse" without disclosing the differences in surveys. What looks like a "tripling of opioid abuse rates" is really three intermediate data points (1999 to 2001) in between a low plateau and a high plateau. Data is never as clean as we'd like. Even in a modern, developed nation with reasonably good institutions, national vital statistics are garbage. I'm left with the dueling feelings that we should either 1) double the meager resources spent collecting and analyzing national vital statistics or 2) get completely out of the business so that we stop sparking these unnecessary moral panics. My preference is for 2), given the ease with which a spurious trend turns into a set of very bad policy prescriptions. Option 1) could in principle be done right, with an appropriately alert political class, sufficiently diligent and self-critical journalists, and sufficiently aware voters. Unfortunately for option 1), those important qualifiers are missing in the real world.

I am shocked at how hard it was to find any source for this data all compiled in one place, and yet how easy it was to actually get to it and cobble it together. Anyone could have spent about 15 minutes text-searching the SAMHSA code books for "analgesics - past month" (and later "pain relievers - past month") to pick out the relevant figure for each year the survey was done. Those data are problematic for the reasons explained above, but it's baffling that nobody even tried. The closest I ever found was the figure of "lifetime nontherapeutic use of prescription drugs" from Lies, Damned Lies, and Drug War Statistics. What I've done in this post is hardly satisfactory. The raw underlying survey data is available online. (See this link, right side. Click the year you want, then click the link under "Datasets in this Study", and you'll see the survey data available in several formats.) There are a lot of columns (~2,000) to parse, the columns are poorly named, and the contents are written in code (like "1 = male, 2 = female" rather than stating the contents in plain English). But it's the kind of thing a grad student with a free summer could easily hack through. I'm surprised that nobody has thrown the resources into such a project. If they have, it's been very difficult to find. Feel free to correct me in the comments if you find a source where someone has done this.

__________________________________

Please allow me to totally geek out for a moment here. If someone wanted to take this data and convert "% of respondents" to "% of the population", it wouldn't be that hard. All you'd have to do is run a few regressions. The surveys contain various demographic variables, like age, gender, race, and marital status. The regression models would use these variables as predictors and "past month use" as the dependent variable. Each year's survey could have its own regression model, which characterizes the "past month use" rates for that year. Then one can simply create a synthetic data set that represents the demographic distribution for each year (say, "0.2% of the population is white, male, unmarried, 16; 0.3% of the population is white, male, unmarried, and 17, ...") and get the regression's predicted drug use rates for each demographic, then weight them together for a total population use rate. Alternatively, if the goal is to remove the effect of changing demographics, you could use one year's distribution of demographics and apply each year's regression model to this data set. I keep saying "regression", but I'd be tempted to use a GBM or some other kind of tree-based model for this project. A process like this would make the survey data comparable across years. It should flatten out the 1979 to 1988 data points, or otherwise reveal a real trend if there is one. Anyway, it would correct for sampling differences between years, some of which seem to be deliberate attempts to capture populations under-sampled in past surveys.
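Here is a rough sketch of that procedure in Python. The column names, the 0/1 "past_month_use" flag, and the census table of population shares are all hypothetical stand-ins; a proper survey-weighting package would work just as well.

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# survey_df: one year's respondent-level file with demographics plus a 0/1
#            "past month use" flag (hypothetical column names).
# census_df: the U.S. population share of each demographic cell for that year.

def population_use_rate(survey_df, census_df):
    demo_cols = ["age", "gender", "race", "marital_status"]

    # Fit a model of past-month use as a function of demographics. A GBM (or
    # any tree-based model) can capture interactions and non-linearities that
    # a plain linear regression would miss.
    X = pd.get_dummies(survey_df[demo_cols])
    model = GradientBoostingClassifier().fit(X, survey_df["past_month_use"])

    # Score each demographic cell, then weight by its share of the population
    # instead of its share of the (possibly over-sampled) survey respondents.
    X_cells = pd.get_dummies(census_df[demo_cols]).reindex(columns=X.columns,
                                                           fill_value=0)
    predicted = model.predict_proba(X_cells)[:, 1]
    return (predicted * census_df["population_share"]).sum()

Holding the census table fixed at one year's demographics while swapping in each year's fitted model gives the "remove the effect of changing demographics" variant described above.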

Friday, July 26, 2019

Recent Goodwill Story

Recently the local branch of Goodwill got some bad press. The story is here. I heard it on NPR, which is what my alarm clock plays when it wakes me up in the morning.

Whenever I dig into the details of a popular news story or the Outrage of the Week, I find that the dominant narrative is wrong in important ways. This one was no different. The story is being reported as: Goodwill decided to lay off all its disabled employees. See the very first sentence of the State Journal Register story above.
A day after Land of Lincoln Goodwill Industries reversed a decision to lay off workers with disabilities because the state’s minimum wage is increasing, the organization’s president and CEO submitted her resignation.
The NPR story used similar language. My immediate reaction was, "WTF? There's no way that's correct." At work, I had just been through a corporate management training event, a day-long session on interviewing skills, which included a long discussion of which things are not legal to ask in a job interview or use as qualifying criteria for a job candidate. It is completely illegal to make hiring or firing decisions on the basis of someone's disability status. (There was even a video with an actor playing the clueless hiring manager asking an older lady "Are you disabled?", and the lady making an annoyed face.) You can state the physical requirements of the job and ask if the candidate can handle them, and presumably you can fire someone after their job performance makes clear that they can't handle a job. But the news story was making it sound like Goodwill identified all of its disabled employees, marked their personnel file with a big red "D", and announced it was going to fire them. That's not what happened.

Goodwill actually runs a special training program for the disabled, ex-convicts, and other people who have trouble finding meaningful employment. See their own description of their program here. Or read about Jonny at the bottom of this document. (These are Goodwill sources, so they may be biased, but if you find a well-argued piece that's critical of Goodwill feel free to share it.) These are not traditional employment arrangements, so they have an exemption from the minimum wage.
Section 14(c) of the FLSA allows employers to pay wages below the federal minimum to employees who have disabilities that directly affect their job performance. Employers are able to do this through a special minimum wage certificate obtained from the U.S. Department of Labor’s Wage and Hour Division.
Some commentators try to argue that Section 14(c) is just a "loophole" that is cynically used by employers to exploit disabled workers. But this is wrong. The sad truth is, having a disability (depending on the disability) makes you generally less productive, and thus less valuable to an employer. Many of these people would not find employment at all if all employers had to pay them the minimum wage. Section 14(c) was explicitly built into the Fair Labor Standards Act because even advocates of the (then new) minimum wage realized it would throw the least productive members of society out of work. See this (generally critical) paper:
Section 14(c) of the FLSA included an important exception to the innovative minimum wage for people with disabilities that, at the time, did not alarm the legislature. It was based on definitions and classifications set forth in the National Industrial Recovery Act (NIRA) of 1933. Under NIRA, President Roosevelt defined a person with a disability as one "whose earning capacity is limited because of age, physical or mental handicap, or other infirmity." Section 14(c) stated:
The Administrator, to the extent necessary in order to prevent curtailment of opportunities for employment, shall by regulations or by orders provide for ... (2) the employment of individuals whose earning capacity is impaired by age or physical or mental deficiency or injury, under special certificates issued by the Administrator, at such wages lower than the minimum wage ...
Citations omitted.

So even Roosevelt was conceding (quite explicitly) that certain conditions make workers less valuable and built in an escape hatch to spare them the disemployment effects of the minimum wage. I believe some critics of 14(c) think that these employees would all keep their jobs if we did away with it; they'd just make more money. That's a pretty implausible assumption.

(By the way, I hate this usage of the term "loophole." Section 14(c) is a feature of a law that's doing exactly what it's supposed to be doing, not some clever hack that wasn't intended by the authors.)

Back to Goodwill. They are running a program where disabled people self-identify in order to get job training and some experience (and, plausibly, a sense of purpose in a life that would otherwise be spent in unemployment). If they have disabled employees who got their jobs the normal way, going through the usual application process, these people would not have been targeted for layoffs. It's not like Goodwill grabbed everyone in a wheelchair or on crutches and ushered them out the door; they are running a charity and decided to be slightly less charitable along one dimension. Legally, Goodwill has to be agnostic about their regular employees' disability status, even if it's something obvious.

In a statement, Goodwill had mentioned rising minimum wages as a factor in their (now reversed) decision to lay off employees under their 14(c) program. Some people were quick to criticize this rationale, because 14(c) explicitly allows them to pay less than the minimum wage. But Goodwill is right to be concerned about minimum wages, because there is a lot of political activism aimed at ending this exemption to the minimum wage. In fact, the United States House just recently passed a bill that would 1) increase the minimum wage to $15/hour and 2) remove the exemptions available to some workers. (The Reason piece doesn't mention Section 14(c), but this Reuters piece makes clear that that's what the bill is targeting.) This likely won't pass the Senate, so it probably won't become law. But Goodwill is surely following these efforts and trying to get ahead of them. If they suddenly have to adopt all of their job trainees as full employees and pay them $15/hour, that's likely to be a massive financial hit, possibly a fatal one. I hope all charities are as scrupulous about managing their finances as Land of Lincoln Goodwill. Some clueless commentators also pointed out that Illinois' minimum wage hasn't even started to increase yet. The $15/hour minimum will be phased in over the next several years, but the first increase hasn't hit yet. This criticism makes no sense. Businesses and (presumably) charitable organizations do long-term planning. They look ahead to manage their expenses. If they know that a minimum wage increase is coming and will soon increase their labor costs, they will start responding to it now with layoffs and other forms of cost curtailment.

(This is actually a major criticism of the minimum wage literature. Many studies find "no effect" on employment, but any effect is likely to be understated because employers are anticipating these kinds of changes, even before the law gets passed. They are likely to have already taken steps to mitigate the impact. It's not like they're in a binary state that's one way before the law passes and the other way after. They make probabilistic assumptions about what their future costs will look like.)

The public backlash is really unfortunate, and so are the efforts to end Section 14(c). Organizations running similar services for the marginally employed now have this hanging over their heads. They know that they can't walk back a program if it starts to become a political and financial liability. Anyone who is currently thinking about beginning or expanding such an operation is likely to have second thoughts about it now.
_____________________________

Maybe it's just Goodwill propaganda and I'm falling for it, but here is their story about Jonny:
But getting to Goodwill wasn’t easy. Johnny was born in the late 1970s with a rare trisomy chromosome imbalance, which limits his speech and cognitive abilities, in addition to other developmental and physical disabilities. When he was 10 years old, he was assaulted by an adult caregiver and became scared, withdrawn and rebellious. For many years, he wouldn’t go out in public or speak to people other than his father — the only person he trusted. But his father didn’t give up. In fact, he became a passionate advocate for his son.
Over the years, Butch left no stone unturned in seeking help for Johnny, and he even moved to Dallas, OR, where his son could live in a facility that he’d heard was “the best.” But Johnny wasn’t receptive to the help of the facility staff and wouldn’t talk to anyone. When the Butch first learned of the programs at Goodwill Industries of the Columbia Willamette (Portland, OR), he was hesitant, but gave them a try.
Johnny enrolled in the Goodwill’s Community Integration Project II, which provides employment and vocational training to people with multiple and/or severe disabilities under a special minimum wage certificate. The Goodwill’s staff recalls that when Johnny first entered the program, he was crying and shaking. But through training, he learned basic vocational skills and appropriate workplace behaviors. 
“Johnny has transformed from a frightened and profoundly insecure person into a confident and integrated young man,” says Michael Miller, the agency’s president and CEO. “His success today is a product of his father’s devotion, coupled with Goodwill’s intervention.”
Maybe Jonny is a cherry-picked example of the most sympathetic individual Goodwill could find, and I'm naively falling for their trick. But there are certainly people like Jonny who wouldn't be able to find meaningful employment at the current minimum wage (much less the absurdly irresponsible $15/hour some activists are peddling). I think about the panhandlers I see downtown on my lunch break. Some of them have obvious disabilities. It's hard to imagine any employer taking a risk on these people knowing they'd have to pay $15/hour. It's unlikely that most of these people could add that much productivity to the employer's bottom line.

I've said quite a lot in this post about worker productivity, the value of an employee to the employer. I hope no reader mistakes this for the value of the person. It's not a statement about a person's moral worth or the value they bring to their friends or family. My very young children are incredibly valuable to me, but of no value to any employer. In fact, they probably would impose negative returns on any employer trying to coax meaningful work out of them, given the amount of instruction, monitoring and double-checking required to get a task done. Your value to an employer is not the same as your moral worth (however you might measure the second thing). Think what a non-sequitur it would be for a parent to drop off their teenage son to work at McDonald's and then get indignant that their child was "Worth infinitely more than $7.25 an hour!" ("Um, perhaps you are confused about the nature of this transaction. I'm not trying to buy your son from you, ma'am. I can only afford to pay him what he adds to this store's revenues, at most. Which, unfortunately, is not that much.") This seems like an obvious point, but I've seen this mistake enough times that I wanted to preempt it. People need to drop this idea that your inherent moral worth as a human being imposes a duty on an employer, the duty to pay you some minimum amount for your labor. No, that depends entirely on what you add to the employer's bottom line. It's a morally neutral concept. By moralizing this concept, some misguided activists are saddling us with bad policy and casting otherwise employable job-seekers out of work. To expect our value judgments to be fully reflected in market prices is just crass materialism.

Wednesday, July 17, 2019

Sneers About “The Koch Brothers” and "Koch Money"

It’s disheartening that name-calling is sometimes accepted as a serious argument in modern political discourse. There are plenty of examples of this behavior, but I have in mind the sneer that some commentator or some piece of scholarship is “Koch funded.” Sometimes it is sufficient to merely insinuate that there is a tenuous connection to the Koch brothers. For example, a scholar who once published something with Cato but is now working for some other outlet, perhaps even speaking his own mind, not on behalf of any institution, can be permanently slapped with the “Koch money” sneer. Of course, this sneer isn't specific to the Koch family; there are plenty of morons droning on about "Soros money" instead of engaging meaningfully with the arguments.

This behavior is so infantile I’m tempted to just not react to it, just as I would ignore a tantrum-throwing child. But then again it happens often enough that it’s worth responding. I recently saw Michael Cannon at Cato on a C-SPAN event. He was discussing health policy. A viewer called in just to say that Cannon shouldn’t be listened to because he’s associated with Cato, and that the Cato voice shouldn’t even get a hearing in our political discourse. Of course he babbled something about Koch money. The insinuation is always that these commentators are being paid by the Kochs to distribute their message, thus rendering them unreliable as sources of information. I want to explain just how utterly wrong this is.

I recently had my name on a paper published at Cato, something for which I am very proud. It was a short paper on the so-called opioid epidemic, basically explaining why the standard narrative is wrong and the policy implications are pretty much the opposite of what some careless commentators have inferred. I have been writing about this since early 2016. I have numerous blog posts explaining why I’m skeptical of the standard story. I have done a deep dive on the CDC’s mortality data, and on the pages of this blog I have posted some novel (novel as far as I can tell) pieces of analysis on that data.  I’ve been happily giving it away for free. I began an e-mail penpalship with the lead author of my Cato paper in early 2016. He asked me two years later to help write a paper with him. I jumped at the chance. Not at all because I was expecting to earn some kind of royalty for having written a paper. (I wasn’t expecting any such compensation, and anyway didn’t receive anything and didn’t dream of asking.) That never entered my mind. I got a chance to work with one of my personal heroes and earned a tiny bit of name recognition in libertarian circles.

Here is what didn’t happen. I did not get an e-mail from the Koch Brothers saying, “We need a paper defending proposition X. We will compensate you for writing said paper, as long as it toes the line.” I did not get any e-mails from Cato’s donors dictating the content of the paper or any other such interference. My guess is that this almost never happens. Most academics and commentators in the think tank space come to their interests and policy positions long before they ever find steady employment doing it. Alex Nowrasteh didn’t suddenly become pro-immigration because the Koch Brothers paid him off. Jeff Miron and Jeff Singer didn’t become anti drug prohibition because Cato cut them a check. Michael Cannon didn’t become an advocate for free-market health policies because he was bought out. These people came to their interests and policy positions and ideologies first. Of course these people are going to end up working for something like The Cato Institute. The best and brightest minds, the people with the deepest dedication to libertarian principles and the sincerest interest in policy wonkery, are going to pair up with institutions with the resources and connections that allow them to do the best work. It is simply not the case that Cato picks bland vanilla academics and pays them off to write policy papers.  The notion that these people are somehow tainted by their connection to funding is silly.

Suppose that someone’s work really is compromised by its underlying funding. I’m not saying this never happens. For example, studies published by pharmaceutical companies have a clear bias in favor of those companies’ medicines. (There is a long exposition on this topic in Medical Nihilism by Jacob Stegenga, an excellent book btw.) It’s not crazy on its face that this could happen elsewhere. I recall Michael Chertoff defending the use of body scanners on Fox News. It’s conceivable that he’s just a very principled defender of national security, but the fact that his lobbying firm represents the manufacturers of those scanners represents a clear conflict of interest. Even well-meaning people can self-deceive with a bias in favor of their own financial interests. You know what you can do about this problem? You can check their work. You see, Cato doesn’t just put out a paper outlining its conclusions and say, “We had some smart people look at some data and do some analysis, so take our word for it. This is the answer! We're the experts!” No. They publish policy whitepapers that outline and explain their arguments, provide citations defending their various claims, and generally attempt to lead a neutral outsider to the conclusions. If you know how to read and aren’t paralyzed with intellectual laziness, you can read, understand, and critique their arguments. You can point out that “this citation is irrelevant” or “this data is incorrect, and anyway doesn’t distinguish the Cato conclusion from the main alternatives” or “this argument is a non-sequitur.” Forget “follow the money”. Try “follow the argument.”

Let’s take this one concession further. Suppose you really do identify someone whose work was definitely compromised by their funding source. Maybe an e-mail gets leaked that exposes the funders putting pressure on a scholar to make a misleading argument, and the scholar caved and changed his paper because of it. Does this permanently impugn the scholar? Or the institution? I say “No.” It’s usually considered a logical fallacy to impugn an argument because of its source. It’s called an ad hominem, and anyone who has spent five minutes reading internet message boards and comments sections knows you’re not supposed to do it. Besides, “check their work” still applies here. You can uncover the bad argument just by reading the paper. Someone with a truly atrocious record of untruthfulness might reasonably be written off. But if public discourse has any kind of future, we’re going to want to avoid situations where we permanently write off sources of contrary information or refuse to listen to someone’s argument. If a single dime of inappropriate funding is thought to taint someone's scholarship or integrity, I think that locks us into an impasse where we all just ignore each other's arguments and nobody ever changes their mind. If you're a skeptical-but-progressive-leaning voter or policy wonk seeking contrary information on, say, how we should run our public schools, you're going to find the highest quality evidence at some libertarian or conservative think-tank. That's naturally where the most convincing counter-arguments are being crafted and published. If you reflexively count them all out because they have a deep-pocketed donor, you're going to lead a dull intellectual existence.

Highly qualified scholars are expensive. Cato’s scholars tend to be doctors, lawyers, and economists, who can all make a lot more money working in the private sector than they can earn in the policy analysis space. (So writes this accredited actuary.) Cato doesn’t have the money to just buy these people up and keep them on staff as full-time employees. These scholars do the work because they love it and they feel like they’re fighting for a good cause. That’s how they get the Director of Undergraduate Studies at Harvard and a practicing surgeon from Arizona to do scholarship for them. The notion that they’d be able to buy these people’s integrity and compel them to make bad arguments is pretty absurd. If these individuals devoted their time and energy to professional pursuits rather than distracting themselves with Cato projects, they'd be able to make a lot more money.

Maybe this post was a waste of time. People who flippantly make ad hominem arguments generally aren't reachable. Or maybe not. I wanted to explain how a piece of "Koch funded" research feels from the inside. The nefarious influence of money just isn't there. 

Sunday, June 16, 2019

Integrated All-Cause Mortality

Don't pay for fancy actuarial tables anymore. Here it is for free:



Enjoy! I will try to keep this updated annually.