Saturday, April 21, 2018

From Locked In by John Pfaff


This is the intro for chapter 7 of this excellent book:
The emphasis current reform efforts place on reducing punishments for people convicted of low-level nonviolent crimes is understandable, but it should be clear by now that the impact will be limited. Any significant reduction in the US prison population is going to require states and counties to rethink how they punish people convicted of violent crimes, where “rethink” means “think about how to punish less.” 
A simple example makes this clear. Assume that in 2013 we released half of all people convicted of property and public order crimes, 100 percent of those in for drug possession, and 75 percent of those in for drug trafficking. Our prison population would have dropped from 1.3 million to 950,000. That’s no minor decline, but this sort of politically ambitious approach only gets us back to where we were in about 1994, and 950,000 prisoners is still more than three times the prison population we had when the boom began. Or consider that there are almost as many people in prison today just for murder and manslaughter as the total state prison population in 1974: about 188,000 for murder or manslaughter today, versus a total of 196,000 prisoners overall in 1974. If we are serious about wanting to scale back incarceration, we need to start cutting back on locking up people for violent crimes.
Kind of sobering. And it knocks the wind out of certain kinds of criminal justice reforms. Pfaff is clear that he personally supports many of those reforms. And any reform that releases thousands, tens of thousands, or hundreds of thousands of harmless prisoners is certainly a good thing. But we should be honest about how little such reforms would reduce "mass incarceration". Geez, look at me hedging and back-pedaling after presenting the raw numbers here. Half the book is like this. "This reform would only release a few thousand prisoners...that's not to say we shouldn't do it! By all means, let's do it. It just won't solve our problem."

I think there would be knock-on effects from legalizing drugs (in particular, legalizing the drug markets) that would affect other categories of crime. But we'd still have a lot of violent offenders who are violent in more conventional ways, and we'd have to make an honest decision about how to punish them.

Tuesday, April 10, 2018

Made-up Social Problems

While writing this post about income inequality, I think I hit on a useful distinction. There are different kinds of social problems. Some are plainly real. An identifiable party is harmed, there is an identifiable malefactor, and the causal chain between the injuring party’s action and the injured party’s harm is clear. Crime. Pollution. Mental illness (though in this case there probably is no malefactor). Unemployment (ditto).

Some social problems are basically made-up. Fake. Bullshit. They are structural features of society that exist in some aggregate sense and that some people find distasteful. But there is no malicious actor, no identifiable injury, no clear causal chain between one person’s actions and another’s harm. Income inequality is a good example. If you look at how people earn their income in an economy, you see a bunch of pairwise interactions between individual decision-makers. “Should I hire this person for such-and-such a wage? Yes!” or “Should I take this job for such-and-such a salary? Yes!” The hiring only happens if both parties agree to the transaction. Presumably the employer wouldn’t hire the person if his productivity didn’t justify his salary, and presumably the employee would not accept the offer if he could get a better deal elsewhere. We know that both parties benefit (at least in expectation, ex ante) because the transaction occurs. For all the talk about workers being exploited by their employers, those workers are free to go work elsewhere. If they don’t, it means their current “exploiter” is offering them a better deal than anyone else at the moment. (All things considered; of course there are non-wage perks or conditions that might make a lower-paying job more attractive than a higher-paying one.) If the employer is paying a surcharge to hire the worker (a surcharge above the worker's next best option), and the worker agrees to the terms, then it's very strange to describe this transaction as exploitation.
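To make the "both parties benefit" logic concrete, here is a minimal sketch with made-up numbers (the function and the figures are mine, purely illustrative): a hire only happens when the worker's output is worth more to the employer than the wage and the wage beats the worker's next-best option, so each side walks away with a surplus.

```python
# Minimal sketch of gains from a voluntary hire. All numbers are hypothetical.
def hire_happens(productivity, wage, outside_offer):
    """A hire occurs only if both sides expect to gain from it."""
    employer_surplus = productivity - wage   # value of the worker's output minus pay
    worker_surplus = wage - outside_offer    # pay minus the next-best alternative
    return (employer_surplus >= 0 and worker_surplus >= 0,
            employer_surplus, worker_surplus)

# Example: the worker produces $25/hr of value, is offered $18/hr,
# and could get $15/hr elsewhere.
print(hire_happens(productivity=25, wage=18, outside_offer=15))
# (True, 7, 3) -- the trade happens and both parties capture a surplus, ex ante.
```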

“Trade deficits” are another bullshit social problem. The “trade deficit” isn’t a useful concept, even as a summary term. We don’t have a “trade deficit” with China. We have billions of pairwise trades with China. Each of these trades is beneficial to both parties to the trade, otherwise (once again) the trade would not happen. Because of some stupid accounting conventions, some of these exchanges count money flowing in one direction and not the other. If I buy a $10 toy from a Chinese manufacturer, and the manufacturer turns around and invests $10 in an American company, that counts as $10 toward the “trade deficit” even though everybody benefited and everybody got what they wanted. It gets dumber. If a Chinese person buys my home for $500k and keeps it as a rental property, that $500k does nothing to shrink the trade deficit. But if the same person buys a manufactured home for $500k which gets shipped to China, the trade deficit shrinks by $500k. In both cases a Chinese citizen bought something worth $500k from an American, but for some arbitrary reason one gets counted as an “export” and the other doesn’t. President Trump keeps harping on the multi-hundred-billion-dollar “trade deficit” with China. But it’s all noise to me, because the concept itself is completely meaningless. There is no injured party here. There’s no aggregate harm, no “emergent property” that exists on the aggregate scale that doesn’t exist in the individual trades. It’s a “problem” that was conjured out of thin air by an accounting trick (perhaps an unintentional accounting confusion?). Framing this as a “deficit” is backwards. If we’re getting a higher aggregate value of stuff shipped to us from China than the aggregate value of stuff we’re shipping to them, I’d call that a "surplus" if anything. But I probably don't want to call it that, either. A more helpful framing is that everyone is basically getting what they want because each individual trade is voluntary.
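To make the bookkeeping concrete, here is a toy sketch of the toy-and-investment example above. It is a simplification of real balance-of-payments accounting, and the transaction list is just the $10 example from the paragraph, not real data: goods purchases enter the trade balance, while the returning investment dollars are booked on the capital account and don't.

```python
# Toy "trade deficit" ledger. Simplified bookkeeping; transactions are the
# $10 example from the text, not real data.
transactions = [
    {"kind": "goods_import", "amount": 10,
     "note": "I buy a $10 toy from a Chinese manufacturer"},
    {"kind": "investment_inflow", "amount": 10,
     "note": "the manufacturer invests the $10 in an American company"},
]

exports = sum(t["amount"] for t in transactions if t["kind"] == "goods_export")
imports = sum(t["amount"] for t in transactions if t["kind"] == "goods_import")
print("trade balance:", exports - imports)   # -10, i.e. a $10 "deficit"

# Every transaction above was voluntary and mutually beneficial, yet the ledger
# reports a deficit, because the returning $10 is booked as an investment
# (capital account) rather than as an export.
```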

“Gentrification” seems like another one. If a bunch of upper-class people are moving into your neighborhood and you own your home, good news! You can sell your home for a higher value than what you bought it for and move out. If you rent and the cost of rent is going up, I guess that’s bad for you but it’s good for the person who moves into your apartment. If you don’t own your apartment, you hardly have a legitimate grievance against future renters who outbid you for it. That future renter has just as much claim to that apartment as you do. (Greater claim, actually, if they can manage to outbid you.) You can heap scorn upon the landlord for raising rent on his tenants, but the landlord is really just an intermediary in the market for housing. He’s responding to a market force (the greater demand for housing in an area), not creating it. He’s not simply “deciding” to raise the price of rent. Even if we can imagine a few hold-out "altruistic" landlords who don't raise their rents, they won't hold down prices for long. Their properties will be purchased by landlords who are savvier about market values and profit margins, and the "altruistic" landlords will find it very hard to resist their offers to buy them out. Rising rents are the result of market forces, not individual decisions. It’s not always clear that this process is bad, either. Can’t renters usually find similarly attractive housing elsewhere? (When I was in grad school I knew plenty of people who moved each year to a new rental property, though granted there were lots of people who stayed put for their entire grad school tenure.) Some people see “gentrification” as an insidious force: rich people are kicking poor people out of their homes (or something). Really, it’s just the economic process of stuff being shifted to its highest valued use and away from a lower-valued use. Adjusting to a changing world can be rough, even wrenching. But there’s no “malefactor” and no “externality” in any meaningful sense.

I can think of other examples. Stories about declining cities or communities always strike me this way. Does Detroit feel sad that it's "dying" or declining in population? No, of course not. It's conceivable that every single resident of Detroit (or any other declining community) could be made better off by moving to another city. There is no entity, no communal hive-mind called "Detroit", who would feel any pain. Some individuals might feel harmed. Property values decline because nobody wants to live there, there is less sense of community because all the people are leaving, there is less to do because the population isn't big enough to support little niche stores (or any stores at all perhaps). Still, I'm uncomfortable classifying this phenomenon as a "social problem" in the same sense that crime and pollution are social problems, because "problem" implies that there is a solution. (Which in this case means, what, enticing people to stay rather than leave? Thwarting the plans of individuals who see broader horizons elsewhere? All to preserve the comfortable, settled lives of people who prefer not to see anything change? Such a policy would create real problems in order to solve a fake one.)

This isn't an exhaustive list. I'm sure other examples will occur to me later. There are a lot of cases where people bemoan that "The world is this way, and I wish it were this other way." But not all of these are instances of real problems. Many of these "problems" are just instances of people wanting to re-engineer the world to meet their aesthetic preferences.

Sunday, April 8, 2018

Private Solution to the Education Credentials Arms Race?

Bryan Caplan's new book The Case Against Education argues for educational austerity. We're trapped in a credentials arms race, so let's at least stop subsidizing people to get better credentials than their competitors. If you buy Caplan's argument that education is mostly signalling (as I do), then austerity makes sense. But fat chance we'll convince a large enough coalition of voters to make any kind of difference. Just how many libertarian, wonkish, numerate, cautious thinkers are there in the world? The overlap of that Venn diagram gets smaller with each criterion we add.

Geoffrey Miller and Tucker Max might have an answer that does not require a large change in public policy. In their book What Women Want (aka Mate, the title it's sold under on Audible, where I purchased it), they offer a provocative recommendation to young men who are college bound. Wait a few years, they say, and go when you are 24 or so. Their rationale is maturity. The 18-year-old brain (particularly the 18-year-old male brain) is just not optimally suited for long hours of study. Geoffrey Miller is an evolutionary psychologist, so I presume there is some scientific merit to his suggestion. Having once been an 18-22-year-old male, it strikes me as true that some important maturation happens during this time. (Stop snickering. I said maturation!) The book argues that instead of going to college, young men should find employment, travel, indulge hobbies, etc. Here is where it gets provocative. In addition to being more mature and better able to cope with a college curriculum, these young men will be more attractive to their female classmates.

An 18-year-old male entering college has basically zero social status. Maybe he's done something interesting, or maybe he's a hotshot sports star. But more likely he has nothing interesting to say and nothing to offer the young women he meets. If he's gone off on his own and lived a life for a few years, that changes. He knows something. He's managed to survive at work for at least a short career. He has colleagues and acquaintances. He can legally purchase alcohol. Maybe he's even traveled a little and seen parts of the world that other young people are curious about. A short break between high school and college won't exactly make him the Most Interesting Man in the World, but it will give him a leg up over his useless 18-year-old self.

For a lot of young men, this is probably bad advice. I for one wouldn't have found anything better to do with myself. College was the best choice for me. And I've seen the "screw college, I'll go my own way" approach backfire badly. But for a large number of young men who only go to college because they're "supposed to", or because it's free to them, this consideration might dissuade them from making a bad investment (and thus avoid forcing society to underwrite that bad investment). Better yet, many of these young men might "find themselves" in the real world and forego the college experience altogether. It might nudge society to a better equilibrium, where people don't feel like they need a college degree just to compete and employers don't demand a college degree just because every candidate already has one. If enough young people find Miller's suggestion persuasive, there could be a big shift toward that equilibrium without a major change in public policy.

All this feels a little bit creepy. Not the part about telling young men how to make themselves more attractive to women. The book does this in a totally non-creepy way. Despite the title (the original and the alternate), it is not a "pick-up guide" (something I personally would have zero use for). It's more like a self-improvement/self-help book. It doesn't say, "Talk this way and women will like you; say these magic words and women will give you consent." The book is very blunt with the reader that they will have to do some serious work to make themselves attractive to the opposite sex. Get in better shape, because that actually makes you healthier and more worthy as a mate. Clean your home, especially your kitchen and bathrooms, because this actually gives visitors a more pleasant experience. Get a well-fitted wardrobe and groom yourself because it actually shows people that you give a shit. Work hard at your career because success in that realm actually proves that you can navigate the social world. Take up a serious hobby because it shows you have real self-discipline. The book frequently uses the phrases "social proof" and "material proof", referring to observable signals that you are a competent person and have something to offer as a companion and potential father. Having high status at work or with your group of friends (if they are quality friends!) gives you social proof. It's not that women are petty socialites trying to climb some meaningless social hierarchy. They just want some kind of proof that you're not a creep. Having a decent job and income is "material proof". Women aren't just looking for a sugar daddy; the book explicitly argues against "the more wealth you have the more attractive you will be to women" because it's plainly not true. They just want some kind of proof that you can navigate a career and provide for a family. Eighteen-year-olds entering college typically have none of these social or material proofs, but someone who has a short career already behind him has some.

The book takes an evolutionary psych approach to the dating world. (Anyone who has read The Selfish Gene or The Blank Slate will see familiar themes; people who have not read these books or something similar might be squeamish about the following paragraph.) Women want proof of the above-mentioned traits because they want healthy children who will grow up to have similar traits. In the evolutionary psych story, if a woman failed to do this she'd give birth to losers who would fail to carry on the gene line. If she's choosy about mates and picks only men who convincingly demonstrate desirable traits (importantly: in a way that is hard to fake), she will have high-quality children who will carry on her genes. It might be obvious that "You should clean your bathroom and the place where you eat so your date doesn't get grossed out" or "You should be in reasonably good physical shape to attract the opposite sex", but some young men probably need to hear it. Hell, some grown-ass men with families probably need to hear it. Even some very successful high-functioning adults probably need a gentle reminder now and then. It's not creepy to give these young men advice that, if actually taken, would truly make the world a little better. (It wouldn't just raise their relative status in an arms-race sense. A world with one cleaner bathroom is a slightly nicer world indeed!)

What feels creepy is enticing young men to solve a bad signaling equilibrium by promising them the attention of young women. Ultimately I'm okay with it. For many young men "delay higher education until you're mature enough to appreciate and benefit from it" is legitimately good advice.

I have some lingering doubts. If you take a break from your working career at ages 24-28, as opposed to 18-22, you are taking yourself out of the labor market at a point when you have more experience and your income is higher. This likely lowers your lifetime income, unless maturation causes you to get a lot more out of college. If Bryan Caplan's signalling story is true then this is unlikely, unless maturation means you are simply more likely to finish college. Another point: most people don't go to college anyway. Sixty percent of working adults don't hold a college degree. (I think most people in my personal bubble would be surprised by this figure. Take a moment to soak it in, and contrast it with the fraction you'd get by surveying people you personally know and interact with.) So most people are already taking Miller's advice, just without the "go to college later" part. Spreading this advice more widely might encourage young men currently in the labor force to instead go to college, perhaps more than it encourages young college-bound men to join the labor force instead. The net effect on college enrollment/completion is unclear. Still, my guess is that most young men are not even remote candidates for college and never will be. The advice to "Get a job, grow worldly, enter college when you are more interesting and mature" probably only appeals to people who would enter college in the first place. I'm sharing because I thought Miller's "delay college" advice was provocative and probably appropriate for a lot of young people currently considering college. If in a few years my own children are not absolute shoo-ins for college (and on sheer base-rates most kids aren't), I'll gladly share this thought with them.
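Here is a rough back-of-the-envelope version of the forgone-earnings point. Every figure (the starting wage, the flat experience raise, the degree premium, the retirement age) is invented for illustration; the only thing the sketch is meant to show is that the years you skip at 24-28 pay more than the years you'd skip at 18-22.

```python
# Back-of-the-envelope forgone-earnings comparison. Every number is hypothetical.
def lifetime_earnings(college_start, start_wage=30_000, raise_per_year=1_500,
                      college_years=4, degree_premium=15_000, retire_age=65):
    """Sum earnings from age 18 to retirement, skipping the years spent in college."""
    total, experience = 0, 0
    for age in range(18, retire_age):
        if college_start <= age < college_start + college_years:
            continue                          # in school, earning nothing
        wage = start_wage + raise_per_year * experience
        if age >= college_start + college_years:
            wage += degree_premium            # degree pays off after graduation
        total += wage
        experience += 1
    return total

print(lifetime_earnings(college_start=18))    # enroll at 18
print(lifetime_earnings(college_start=24))    # delay to 24: the skipped years are higher-paid
```

In this toy version the delayed path comes out about $90k behind. The exact gap obviously depends on the made-up parameters, and it shrinks or reverses if the delay substantially raises the odds of finishing or the size of the post-college payoff.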

Saturday, April 7, 2018

Income Inequality: Who is the Least Cost Avoider?

A general principle in law and economics is that you want the “least-cost avoider” for any particular problem to be responsible for solving the problem. (There is a discussion of this topic starting on page 269 of David Friedman’s excellent volume Hidden Order.) The problem being solved here is usually some kind of “externality”, a cost imposed by one party on another.

For example, suppose I’m operating a noisy factory and there are nearby residents who are bothered by the noise. Maybe it would cost me $10 million to shut down my factory, but only $1 million (total) for the residents to sound-proof their homes, automobiles, and perhaps wear earplugs when they are outdoors. For society as a whole to function well, we don’t want to destroy $10 million in value when we can instead pay only $1 million. The noise-abatement should be done by the residents. They are the least-cost avoiders. Even if a judge rules that I am responsible for the liability, I can offer to pay the cost of sound-proofing the local residences.

Or perhaps it’s reversed: maybe I can sound-proof my factory for only $500k. In that case I am the least-cost avoider. Again, a judge could get the ruling “wrong” and tell the residents that they have to just lump it. The residents don’t have to sit there and take it, nor do they have to shell out the $1 million. They can offer to pay me $500k if I’ll agree to install sound-proofing. There are other solutions. Maybe the residents can simply move away, or they can just deal with the noise. There are also potential problems. Maybe people move close to noisy factories anticipating that they’ll have an opportunity to sue. We don't want to set up rules that encourage "lawsuit entrepreneurs." These are all things to consider when adjudicating externality problems, whether it’s a judge or the affected parties negotiating with each other, or perhaps a legislature setting rules and defaults. The point is you ultimately want the party who can solve the problem at the lowest cost to be responsible for implementing its lowest-cost solution.
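A minimal sketch of the comparison, using the hypothetical dollar figures from the example above: whichever party can abate the noise more cheaply should end up doing the abating, and if the liability ruling lands on the "wrong" party, a side payment gets society to the same place.

```python
# Least-cost-avoider sketch, using the hypothetical figures from the text.
def efficient_abatement(factory_cost, residents_cost):
    """Return which party should abate and the cost society actually bears."""
    if factory_cost <= residents_cost:
        return "factory", factory_cost
    return "residents", residents_cost

# Scenario 1: shutting the factory costs $10M, residents can soundproof for $1M.
print(efficient_abatement(factory_cost=10_000_000, residents_cost=1_000_000))
# ('residents', 1000000) -- even if the factory is held liable, it can pay the
# residents' $1M soundproofing bill rather than eat a $10M shutdown.

# Scenario 2: the factory can soundproof itself for $500k.
print(efficient_abatement(factory_cost=500_000, residents_cost=1_000_000))
# ('factory', 500000) -- even if the residents are told to lump it, they can
# offer the factory up to $1M to install $500k worth of soundproofing.
```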

 Enter income inequality. I’m not even sure what kind of social problem this is. It’s not really an “externality.” There isn’t a harm done by one party to another. It’s just a structural feature of society that some people find distasteful. A high-salary top executive is paid by an employer who values that executive’s contribution more than the salary, otherwise the employer wouldn’t hire that person. Both parties agree to the transaction, so we can presume both parties benefit from it. Both parties are better off for having each other. That is a net win for society. Third parties can take umbrage at these salaries for being "too high," but this is no more an externality than the moral revulsion and disgust felt by moralizing puritans (say, anti-gay or anti-pornography agitators). Sorry, but those very high incomes are easily justified by the underlying economics; they aren't some kind of mistake or "market failure" (a term that is increasingly used to mean "a feature of the world I don't like").

Nor are the low wages of low-earners "externalities" in any meaningful sense. Sure, we’d all love it if everyone were more productive and could command a higher salary. But some people simply don’t have the skills necessary to justify such a salary. Employers offer, say, $8 an hour to flip burgers because placing someone in that position adds at least $8 an hour to the company’s revenues. As much as we might bemoan the low pay of these workers, they do willingly accept these jobs. It’s still a win-win, even though it’s not quite as big a win as we’d hope (on either end of the transaction, BTW).

What we have here are voluntary pairwise transactions in which both parties to the trade benefit. Nothing magical happens when you aggregate them across the economy. Aggregating the distributional data and plotting it or fitting a Pareto curve to it or calculating a Gini coefficient doesn’t conjure into existence a “problem” that was not there in the original interactions. You could try to tell some story about how the little guy is getting “screwed” by big, powerful players, but this is usually a refusal to acknowledge the reality. Those low-wage and low-salary workers have inherently low productivity. It’s not like their employers are making a huge profit off their backs. (Who do you think loses their jobs first in an economic downturn: low-wage workers or high-wage workers? Doesn’t the “low-wage workers are underpaid” theory predict the opposite? Shouldn’t employers be loath to let go of workers on whom they’re reaping such a huge surplus? This stops being a mystery if we acknowledge that there is no huge surplus and the low-paid workers are probably being paid appropriately for their productivity.)

Forget all that for a second and let’s posit that “income inequality” is some kind of externality. Who is the least-cost avoider? It’s surely not the high salary earners. Are they supposed to monitor the effort of low-wage workers? Are they supposed to judge whether they are making a serious enough attempt to acquire education and meaningful employment, and then to perform adequately once on the job? Do high earners acquire a larger and larger “monitor, judge, and redistribute” workload as they move up the distribution? The “inequality” framing gets this wrong by trying to make high earners responsible for the poverty of low earners. It’s not just that this indulges zero-sum thinking (though that too is a pretty serious error); rather it’s that if the low-wage workers perceive a problem with their income they usually have many levers of control. There are many options for that person to increase their own income if the figure on their W-2 is too low. The individual income earner is the only person who can observe their own level of satisfaction/disappointment with their own salary. S/he's also the only person who can accurately judge the attractiveness of various options (as in "I could go back to school but it'd be so boring the wage premium isn't worth it" or "I could put in the extra effort at work, but meh I got a good enough thing going on right now.") The least-cost avoider is in most cases the low-wage worker. He can say, “I’ll take another shot at acquiring education, and this time I’ll take it seriously.” He can say, “I’ll show up to work on time and try to work more effectively.” He can say, “I’ll choose a profession that offers more upward mobility.” No other person can weigh the relative costs and benefits of these options for him (not without making heroic assumptions and generalizations). 

Of course there are cases where these expectations on the low earner are unreasonable. An abandoned (perhaps battered) wife with small children probably needs external help. Or someone with a serious mental or physical disability. It would be dense to say, “Why didn’t you choose the medical profession?” to someone with, say, Down syndrome or a severe attention deficit problem. But, at least in my observation, these are a minority of cases. Many people in the “low earnings” group deliberately choose to remain there and reject easy steps toward upward mobility. I observe many people who were at some point in the same cohort as me: same high school, same college, same grad school, started work at the same time with the same number of exams completed, etc. Some of them lapped me because they were more ambitious, and I lapped many of them because of my own ambition. It’s a mistake to summarize as “inequality” the divergence of different people starting with the same level of opportunity. And it’s a bigger mistake to place the burden of “fixing” the “mistake” on the people who opted for higher earnings, as opposed to more leisure time or bigger families.

_____________________________________

[Bad experiences have forced me to anticipate bad comments. I won't publish any comments along the lines of "Oh, so it's poor people's fault that they're poor!" If that's anyone's reaction, I'm afraid you misread the post and missed the point. My post is about income inequality, not poverty. Despite the two often being confused or intentionally conflated, they are very different topics.]

Thursday, March 29, 2018

Hill Climbing Analogy to Drug Prohibition

Suppose we observe large numbers of people attempting to climb a tall, steep hill covered with thorn bushes. It's a mystery what they're after, but they all seem eager to reach some prize at the top. Some end up scraped up pretty badly. A few even die. But a lot of them make it up to the top without too much damage.

Someone says, "We need to put a stop to this. These idiots are hurting themselves. Let's erect a fence at the bottom of the hill!"

Someone points out that this is a moronic idea. If people are willing to climb a bramble-covered hill, they'll be willing to scale your stupid fence. Maybe a few people will see the extra obstacle and that will be the final straw. The marginal hill-climbers, who were nearly indifferent to the hill climbing venture, will be nudged from a "yes" to a "no." But there is no way your silly fence will deter large numbers of people. Also, some people fall and get hurt trying to scale the fence, so this has to go into your cost-benefit calculus, too.

So you say, "Put some concertina wire at the top of the fence!" And someone points out that this is a losing game. Sure, it deters more hill-climbers. But most of this population of people, who were willing to climb a very high hill covered with thorns, are not deterred by some concertina wire. They scale the fence and snip the wire with wire-cutters. Or they throw a thick blanket over it and climb over that. Or some simply climb over the wire and take their scrapes. Some still fall and hurt themselves while scaling the fence.

Effectively, we are attempting to deter people from climbing the bramble-covered hill by placing another bramble-covered hill in front of it. This deters a few people from undertaking the venture in the first place, but the ones who venture on get twice as scraped up.

There is no way to square the circle here. To deter drug use, you must threaten some kind of harm to drug users. And to make the threat credible, you must follow through on your threat. Under pretty standard assumptions about demand elasticity, this is a losing game. Increasing the penalty (beefing up the fence, adding razor wire, electrifying it) increases the total harm to society. I think that advocates of drug prohibition have been incredibly sloppy on this point, failing to account in a serious way for total costs to society. In some vague sense, bigger penalties get you more deterrence. But we can be pretty confident that the price paid for that deterrence is too high.
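Here is a toy version of that elasticity argument, with every parameter made up: treat the expected penalty as part of the "price" of drug use, assume demand is fairly inelastic (the -0.3 elasticity is just an illustrative guess), and total harm rises as the penalty grows even though use falls a bit.

```python
# Toy deterrence model. Every parameter below is a made-up illustration.
def users(full_price, baseline_users=1_000_000, baseline_price=100, elasticity=-0.3):
    """Constant-elasticity demand: use responds weakly to the 'full price' of drugs."""
    return baseline_users * (full_price / baseline_price) ** elasticity

def total_harm(penalty, intrinsic_price=100, harm_per_user=50):
    """Harm from use itself plus the penalty harm actually inflicted on remaining users."""
    n = users(intrinsic_price + penalty)
    return n * (harm_per_user + penalty)

for penalty in (0, 50, 200, 500):
    n = users(100 + penalty)
    print(f"penalty={penalty:>3}  users={n:,.0f}  total harm={total_harm(penalty):,.0f}")
# Use falls modestly as the penalty climbs, but total harm keeps rising:
# the taller fence deters a few marginal climbers and scrapes up everyone else.
```

Crank the elasticity toward something like -1.5 and the conclusion can flip in this toy model, which is exactly the empirical question that tends to get skipped.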

Opioid "Epidemic": Some Relevant Time Series

The standard narrative for the opioid epidemic is superficially plausible when expressed in a sloppy narrative form, but it starts to fall apart when you look at the actual data. The overall story that "prescriptions went up, and subsequently overdose deaths went up" is clear enough. But the notion that increasing prescriptions inherently produces more addicts or drug abusers is wrong. Someone might infer from this simple story that a crackdown on prescriptions is warranted: "If prescriptions go back down, overdoses will go back down, too." Plainly this is wrong, too, given our experience in the 2010 - present period. So what's been happening over the past few decades?

Below are prescription opioid lifetime abuse charts from the book Lies, Damned Lies and Drug War Statistics.



The light circles represent lifetime non-therapeutic (recreational) use of prescription drugs. What's weird is that it's flat (maybe even declining) for the 1990s, the period over which doctors' attitudes about prescription painkillers were changing and opioid painkiller prescriptions were increasing. It suddenly spikes in the 1998 to 2002 period, then flattens out again. I'd say the spike is more likely a reporting bias or some kind of methodology change than a real jump. It's likely that the true value was increasing over the period 1990-2002, since this is the first time large quantities of these drugs were widely in circulation. I wish I could find "past month user" data for this period, comparable to the SAMHSA data for the 2002 to present period. I don't know if it just doesn't exist or if (implausibly) nobody has bothered to chart it.

See the charts below. These are SAMHSA survey data, 2002 - present, drug misuse and drug abuse disorder. I focus on the triangles, because this is the most inclusive grouping. Different age demographics move differently, but as a whole this is flat or declining for the 2002 - 2014 period.




The Monitoring the Future graphs tell the same story for the 2002 to 2014 period, but fill in details for the earlier period. It looks like there was an increase in past-year use from 1990 to 2002, then a leveling off and a decline in very recent years. The discontinuity is due to a change in the survey question, in which they changed the examples of drugs in this category. I would probably "correct" the 1990 to 2002 numbers by drawing a straight line from 1990 to 2002 and calling that the "real" trend. Note that availability seems to be declining. These surveys are aimed specifically at youths (8th, 10th, and 12th-graders), so they aren't directly comparable to the SAMHSA data. Still, roughly speaking they corroborate the story that illicit use of these substances was increasing from 1990 to the early 2000s and flat or declining from the early 2000s to the present.
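Mechanically, that "correction" is just a straight-line interpolation between the 1990 and 2002 survey values. A sketch, with placeholder endpoint numbers rather than the actual MTF figures:

```python
# Straight-line "correction" between the 1990 and 2002 survey endpoints.
# The endpoint values are placeholders, not actual Monitoring the Future figures.
import numpy as np

years = np.arange(1990, 2003)
use_1990, use_2002 = 5.0, 9.0          # hypothetical past-year-use percentages
corrected = np.interp(years, [1990, 2002], [use_1990, use_2002])

for year, value in zip(years, corrected):
    print(year, round(float(value), 2))  # a smooth trend in place of the survey-question jump
```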


Directly below are opioid-related deaths, 1978-1998. This was under the ICD9 cause of death coding. It is not directly comparable to ICD10, which has different codes and allows for much more detail about the substances on the death certificate. At any rate, the trend is what we'd expect. It starts increasing steadily around 1990 or so.



The deaths for the 1999 - 2016 period are coded under the more detailed ICD10 system. We can see trends by various substances (or categories of substances). Again, there is a dramatic increase in "other opioids" (encompassing most opioid painkiller pills) from 1999 to 2016, a period over which recreational use rates were flat. Heroin overdoses started skyrocketing around 2010, when the "opioid epidemic" started getting widespread attention. I wonder if this is where a crackdown on pills started to take place, and perhaps some recreational Oxycontin users switched to heroin. Or maybe illicit fentanyl made street heroin cheaper (because it's so easy to smuggle/distribute), and it's an independent phenomenon.



The "standard narrative" of the opioid epidemic is that loose prescribing practices turned a bunch of unwitting patients into drug addicts. They either began abusing their own legal prescriptions or buying them on the street to use recreationally, and this led to an increase in overdoses. This story is inconsistent with the SAMHSA and MTF survey data, at least for the 2002 to present period. The volume of opioids prescribed roughly quadrupled in the 1999 to present period, leveling off in recent years (level or declining slightly since ~2010). The standard narrative would predict that this should have increased the number of illicit users. Certainly this can be said for the 1990-2000 period. But apparently you can massively increase the volume of opioids prescribed without creating new addicts. And apparently a crackdown (or perhaps just a leveling-off?) can lead to an even faster increase in drug poisoning deaths.

I get annoyed with sloppy journalism on the opioid subject. It usually reports the correlation between deaths and prescriptions in the 1999 to present period, and sloppily implies a neat causal relationship. Such stories usually feature some poor addict with a terrible drug problem, perhaps telling the reporter that they got hooked on a legal prescription and implying to the reader that this is the social phenomenon driving the increase in deaths.

Sam Quinones' book Dreamland has a long exposition about the "Xalisco boys", a Mexican drug cartel operating in the US and selling heroin. But the timeline is all wrong. Much of his story takes place in the 1990s, and the dramatic increases in heroin deaths didn't start until 2010 or so. His book came out in early 2015, so he would have at the very latest had 2013 mortality data (the CDC data for a given year comes out in December of the following year). Supposing he even looked at the CDC data, he would have seen just a couple of years' worth of the recent heroin epidemic. He never seems to tell his reader that it took over 20 years of relaxed opioid prescription practices to cause the heroin epidemic. Indeed, he gives his readers the perception of a steady rise. (Near the end of the book, he even concedes that a local crack-down on prescription opioids led to an increase in heroin overdoses. Instead of doing the requisite soul-searching and revising his priors, he simply insists that the crack-down was necessary. I can't place my finger on the passage at the moment, but it was so jarring it seared itself into my memory.)

I have a few other very long posts attacking the standard narrative, notably here and here. The broken link in their chain of reasoning is the flat recreational use rates, which according to the standard narrative should be rising.

Monday, March 19, 2018

Rules and Meta-Rules

Really interesting Econlib post here.

Basically, one economist says (paraphrasing, not actually quoting here), "I must be irrational, because I could switch to diet soda, which is cheaper and likely just as good. But I don't bother, because I'm stuck in some bad default mode. Even if it's not just as good, it's probably at least worth trying."

Another economist says (again paraphrasing), "Hold on there! The brain power necessary for decision-making is a truly scarce resource. You aren't being irrational by creating some simple rules that economize on this resource, such as 'always buy Diet Coke.'" (He says a lot more, but this is my blog so I'm summarizing my own take-away.)

This is an important and deep point. You can't simply re-analyze every single decision from scratch. You need some simple rules, such as "Leave work at 4:50", "Set alarm clock for 6:30", and "If we're low on our household stock of Diet Coke, pick up more at the store." You also need a set of meta-rules for resetting these rules of thumb. "If the boss complains about my punctuality twice, set alarm clock for 6:20 instead of 6:30." Or "If the expected benefit of changing from Coke to generic brand exceeds X, switch." The problem here is that you need a rule to even alert you to the possibility of changing a decision rule. Maybe the benefit of switching from Coke to generic is enormous, so big that almost anyone would agree that it's irrational not to attempt the switch. But you don't just get to know that. You have to sit down for a moment, jot down some figures, grind out a simple calculation, and see the answer. Someone who did this all the time for every little decision would be almost paralyzed by their constant cost-benefit checking. (Flip over a used envelope, or grab a sticky note? Pen or pencil? Or just use Excel? GHAA!) So your meta-rules require not just thresholds that overturn existing rules, but also rules for when you will even bother to check whether a threshold has been crossed.
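Here is a sketch of the structure being described, with invented prices and thresholds: a standing rule, a meta-rule that only overturns it past some savings threshold, and a check schedule so you aren't redoing the cost-benefit math on every trip to the store.

```python
# Sketch of a rule / meta-rule / check-schedule structure. All numbers invented.
class GroceryRule:
    def __init__(self):
        self.choice = "Diet Coke"        # standing rule: what to buy, no deliberation
        self.check_every_n_trips = 30    # how often to even bother re-examining the rule
        self.switch_threshold = 50.0     # annual savings required to overturn the rule
        self.trips = 0

    def shop(self, price_coke, price_generic, trips_per_year=52):
        self.trips += 1
        # Most trips: just follow the standing rule.
        if self.trips % self.check_every_n_trips == 0:
            # Meta-rule check, run only on the schedule above.
            annual_savings = (price_coke - price_generic) * trips_per_year
            if annual_savings > self.switch_threshold:
                self.choice = "generic cola"   # threshold crossed: rewrite the rule
        return self.choice

rule = GroceryRule()
for _ in range(40):
    bought = rule.shop(price_coke=2.50, price_generic=1.25)
print(bought)   # "generic cola": the 30th trip triggered the re-check and the switch
```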

This is a theme I've discussed before. I think the whole economic irrationality/behavioral economics thing is way overblown and overdone. Examples of people allegedly behaving irrationally usually have an explanation that is fully compatible with a "rational actor" model. Just add in, say, limited computing power or limited information and you get back to "imperfect but serviceable decision rules." Of course some clever person can pick these rules apart and find that they are occasionally wrong, or even biased in a predictable direction. Such rules can still be useful.