Wednesday, February 21, 2018

So You Want To Be Kept As a Pet?

[This will not be one of my more thoughtful posts.]

When people ask to be protected from competition from foreigners, I imagine they are really saying to the rest of society, "Keep me as a pet." The request for protection comes in two forms: import restrictions (protecting them from people toiling in their home country and shipping us the goods), and immigration restrictions (protecting them from people literally crossing the border and "taking" their job). It's like saying, "I can't compete, and I'm unwilling to take the pay cut necessary to keep doin' what I'm doin'. Please protect me from people who can do the same thing I can do only better and cheaper." It's basically asking the rest of society to subsidize the lifestyle you've grown comfortable with so you don't have to adjust to a changing world.

It's like asking the rest of society to "adopt" your factory or office building, pump money into it (even though it's become economically irrelevant or wasteful), and turn it into one of those historic villages where actors wander around trying not to break character while interacting with the tourists. Of course this isn't literally what happens. The propped-up office or factory surely produces some economically meaningful output, which props up the illusion that it's a viable company. But the unfettered economics suggest that the firm should close, and the workers and capital employed by that firm should go into other productive ventures. Propping up these dying businesses halts progress. The churn is sometimes painful, but people do adjust when the inevitable change finally comes. Like Deirdre McCloskey says, economic change is win-win-win-win-win-win-win-lose. The wins outweigh the losses, but eventually you do experience that loss and you adjust, perhaps entering an industry that didn't exist five years ago. To halt the losses is to throw out all those wins, all because a sympathetic-looking interest group asked to be coddled. In the reductio ad absurdum, we're all still toiling farmers plus maybe the rare skilled tradesman. Thank goodness we didn't get stuck there.

Law and Liberty Forum on Opioids: My Reaction to Caulkins

A few months ago, Jeffrey Miron (who has been something of an e-mail pen pal) asked for my commentary on this essay he wrote for Law and Liberty. It's a response to another essay on the same site by Robert VerBruggen. (In my opinion, the VerBruggen piece is incredibly wrong-headed, and his narrative is wrong in some pretty basic ways. I'll respond directly to his piece at another time, but I think my post from last September still holds up well.)

There are three other responses, and I'll try to get to each of them in time. For this post, I'll focus on this one by Jonathan Caulkins.

Caulkins pushes back against the argument that most people who try drugs, even hard drugs, do so responsibly and aren't harmed by them. He recasts the problem from "proportion of problem users" to "proportion of total use that is problematic." As in, most people who try cocaine don't get hooked. But if you look at the proportion of incidents of cocaine use, or the proportion of cocaine going to problem users, it's very high. Probably a majority, by his estimates.

In 1994, Jim Anthony and colleagues published what is still one of the most widely-cited estimates of what proportions of people who ever try various drugs go on to become dependent.[2] Based on data collected between 1990 and 1992 by the National Comorbidity Survey, their estimates for the three major “hard” drugs varied from 11.2 percent for stimulants (which includes methamphetamine but also weaker amphetamine-type-stimulants) to 23.1 percent for heroin. I’ll focus on the proportion for cocaine (16.7 percent) since cocaine was then the most widely used hard drug.
The 16.7 percent figure does not mean that at any given time five people are enjoying cocaine for every one that is harmed by its use. People who become dependent often suffer through 10 or 20 years of dependence, whereas most of those who do not become dependent use for much less than a decade, and often only quite briefly. So the proportion of days-of-use that pertains to people struggling with dependence is much greater than 16.7 percent.
He dissects some survey data about how many times cocaine users have used during their lifetime. The result of his back-of-the-envelope calculation appears troubling at first glance:

[T]he odds for the average person who tries cocaine are an expectation of three days of misery per day of harmless fun.
Sounds like a pretty bad deal, huh? The implied lesson is that cocaine is more dangerous than it appears according to "addiction per user" ratios.
Thus a naïve interpretation of Anthony et al.’s “capture ratio” is that trying cocaine is like playing Russian roulette, with just one chance in six of disaster. But after recognizing that happy use is transitory and harmful use is long-lasting, the odds are effectively reversed. It is more akin to playing roulette with bullets in five of the pistol’s chambers, not one.
Pretty damning, right? I had just recently written about this topic based on the SAMHSA drug survey data. I had the first-blush common-sense reaction that most drug users don't get hooked and don't persist in their drug use. Caulkins is inviting us to flip the numbers by using a "problematic use per incident of drug use" ratio rather than a "problematic use per user" ratio. I think his analysis is wrong for some basic reasons, and his re-casting to per incident is a mistake.
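Caulkins' recast can be sketched numerically. The 16.7 percent dependence rate is from Anthony et al. (quoted above); the durations of use below are invented purely for illustration, not Caulkins' actual inputs:

```python
# Sketch of the per-user vs. per-day-of-use recast. The 16.7% dependence
# rate comes from Anthony et al.; the usage durations below are made-up
# illustrative assumptions.
users = 1000
p_dependent = 0.167

dependent = users * p_dependent        # 167 users
casual = users - dependent             # 833 users

# Assumed usage patterns: dependent users use ~300 days/year for 15 years;
# casual users accumulate ~20 days of use in total, ever.
days_dependent = dependent * 300 * 15  # 751,500 days of use
days_casual = casual * 20              # 16,660 days of use

share_by_user = p_dependent
share_by_day = days_dependent / (days_dependent + days_casual)

print(f"problematic share, per-user basis: {share_by_user:.1%}")
print(f"problematic share, per-day basis:  {share_by_day:.1%}")
```

Under these (admittedly extreme) assumptions, a one-in-six per-user risk becomes a well-over-five-in-six share of total use, which is exactly the reversal Caulkins describes. The flip is entirely driven by the assumed duration gap between the two groups.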

Admittedly this is getting philosophical; I'm not accusing Caulkins of making a factual or mathematical error. But, as Daniel Dennett says, "There is no such thing as philosophy-free science; there is only science whose philosophical baggage has been taken on board without examination." So let me briefly play the role of the probing, groping TSA agent and see if Caulkins has inadvertently snuck something past us. Let's examine away.

Free Will

First of all, doing something that's potentially habit-forming is not like a game of Russian roulette. There isn't a flipping coin, tumbling die, roulette wheel, or spinning barrel of a six-shooter inside our heads. Human beings are sentient. They consciously decide whether to take risks or avoid them. They consciously (or unconsciously) weigh costs and benefits. The person who gets hooked on cocaine makes a series of decisions. An initial decision: "Hmm. I've heard this thing has a bad reputation. I'll try it anyway." As Caulkins himself concedes, most people make it through this step unscathed. There is a subsequent decision to use again: "That felt really good, I think I'll repeat." Or (again, far more typical): "No thanks." Somewhat paradoxically, a really good first experience can lead to a total swearing off. Drug users often quite rationally recognize that a continued dalliance with the pleasant substance might result in a habit that's hard to control. A sort of "That was good. Too good." reaction. Someone has to really indulge repeatedly and quite deliberately to turn it into a bad habit.

See my post on Unbroken Brain for more of these details on drug addiction. Drug users are mostly rational; they don't get ensnared in the "chemical hooks" of the substances they imbibe. I'll admit that it would be pig-headed of me to try to ignore (in my argument) the temptations imposed by chemical dependence and the fact that some people find these temptations irresistible. I think it's equally pig-headed to ignore the fact that most people with a chemical dependence do in fact overcome their addictions and get their lives in order. They choose to do so. They decide to make the change, despite the temptation. I'm not denying the existence of drug addiction or ducking the point here. I'm just trying to put some proper context around the phenomenon of drug addiction.

Caulkins Dismisses Too Many Casual Users

Caulkins presents a useful table showing a breakdown of how many times "lifetime users" of cocaine have actually used. 29% only once or twice; 16% three to five times; 15% six to ten times. (The survey asked on how many days they used, not how many times they used; an evening-long coke-binge in which you bumped 20 times counts as one day of use.) So fully 60% of "lifetime users" have only used it ten or fewer times. As long as Caulkins would grant that the "not even once" propaganda is overblown nonsense, I'll grant that we might want to ignore people who have only touched the stuff a few times. This population was likely never "at risk" because they never used persistently enough. (Then again, see my caveat above about drug users rationally swearing off something that's "too good" after only one or a few uses. I have heard second-hand stories about people doing this, so it can't be too uncommon. Such persons might even describe themselves as having once been "dependent.")

But Caulkins takes this way too far. He points out that anti-tobacco activists ignore people who have smoked on fewer than 100 occasions. That makes perfectly good sense to me with respect to tobacco. But consider someone who used cocaine 20 times. That could be one coke-fueled outing every weekend for the better part of half a year, or every other weekend for the better part of a year. Someone in that category could be said to have dabbled significantly. And someone who imbibes with that kind of frequency might develop a mild "dependence" or at least a craving for the habit. That's not really frequent enough to do serious cumulative damage (long-term cocaine use damages the heart, among other things). But they might be represented in the "16.7%" figure that Caulkins cites. I think Caulkins is loading his figures by trying to dismiss all but the 14% who have used on 100 or more days in their life. The 40% who have used more than ten times are fully in play, in my opinion.

From the paper that the 16.7% figure comes from:
[D]ependence was assessed whenever participants reported at least several occasions of extramedical drug use, under the assumption that even as few as six occasions might be sufficient for development of drug dependence, but that drug dependence would be extremely rare or improbable among persons who had used the drug no more than several times.
There Are Gradations of "Dependence"

Caulkins invites us to imagine worst-case scenarios:
People who become dependent often suffer through 10 or 20 years of dependence, whereas most of those who do not become dependent use for much less than a decade, and often only quite briefly.
"Often?" How often? Half the time? Once in every ten? Dependence is just like any other social problem. There is a distribution of severity, with the most severe instances being the least common. The better part of the 16.7% are probably people who remember using a little too much, or perhaps remember a few genuine problems caused by their drug use which quickly prompted them to stop. Most people who become full-fledged addicts age out of it by their late 20s or early 30s. A decade is a typical tenure for someone who's already become an addict, according to various other sources I've read (Unbroken Brain, High Price, sorry I don't have specific academic references handy for this "stylized fact").

His comment about "three days of misery per day of harmless fun" is more than a little bit hyperbolic. No doubt some addicts are completely miserable. But I'll bet that many of the people who strictly meet the criteria for "dependence" still at least somewhat enjoy their habit, even if they recognize it's bad and wish they would stop.

Selection Bias

It's worth keeping in mind (as Caulkins quite appropriately reminds us about halfway down the page) that this data comes from within a regime of drug prohibition. The sample of individuals who imbibe in a prohibition regime is very different from the sample who would imbibe under full legalization. These are people who are disproportionately likely to be risk-takers. By definition, they are people who choose to break the law. We are constantly inundated with information about how dangerous and addictive these substances are. Pause and think about what kind of person ignores these warnings and imbibes anyway. People who have impulse control problems are going to be over-represented in this sample of the population. People who don't generally have their lives together (unmarried, marginally employed, no dependents or perhaps neglectful of their existing dependents) will be over-represented here. If you have a normal job and family life, certain patterns of drug use are out of the question. If you look at a population where these things are missing, you're going to see a disproportionate number of addicts and persistent drug users. Of course most people who have been users (even of hard drugs) are not dysfunctional, but any population of illegal users is going to have disproportionate numbers of dysfunctional adults. You can't simply apply numbers from this population to the general population and speculate that it's a reasonable estimate for what would happen under full legalization.

The Substances Themselves Differ Under Prohibition Versus Legalization

Bolivian Indians chew coca leaf all day long. They do not inexorably escalate to powdered cocaine or crack. Presumably this is closer to the model of "legal cocaine use." Or look at another class of stimulants. Compare attention deficit disorder medications to methamphetamine. They are substantially the same substances (in fact government surveys and death statistics count them in the same category!), but school children with ADD prescriptions spend significant portions of their day (every day) under their influence. They don't inexorably escalate to smoking or injecting methamphetamine.

Under legalization, there would likely be some coca tea drinkers and perhaps leaf chewers (lozenges? nasal sprays? tinctures?). But few would escalate to pure powdered cocaine. We likely would not have many more intense users than we currently have. More likely, we'd fill in the lower-dose-but-more-frequent-use left tail of the distribution, which full-fledged prohibition chops off. It's doubtful that the right-tail of intense frequent use would expand much if at all. You might get the occasional tea drinker who occasionally mixes his brew strong enough to get a mild buzz, much like the caffeine buzz you'd get from a tall cup of Starbucks.

The distribution of "days used in lifetime" would probably expand rightward, putting more people in the categories of more frequent use. At that point, we could talk about dropping people who used on 100 or fewer days. But I think that kind of data-censoring is inappropriate given the regime the data comes from.

Adjusting for Implausible Results

Look at the paper that the 16.7% figure comes from. See Table 2 on the 8th page of the document. So supposedly 4.9% of past psychedelic users developed a dependence? 9.1% of marijuana users? Some kind of "bullshit implausibility adjustment factor" needs to be applied here. Psychedelics and marijuana don't cause physical dependence or withdrawal. Any perceived dependence is psychological, and no more concerning than an "addiction" to video games. Maybe these drugs were the vehicle by which some people chose to harm their lives, but it would be unfair to blame the drugs for the problems of people with poor impulse control or other unrelated problems. I made this point in my "Persistence of Drug Use" post (linked above).

I think what's going on here is that people are recalling their drug use as a "youthful indiscretion". Perhaps many of them are embarrassed about their former habit and recall it as being more harmful than it really was. Supposedly there were quality control checks in place to get accurate measures of "dependence" according to the DSM III definition (read the paper for details). But the psychedelic and marijuana numbers indicate, to me at least, that some kind of misreporting is creeping in. People who are asked about their drug use, years later when they are older and wiser, likely misreport how bad it was.

Picking the Relevant Base for "Exposure"

I'm an actuary, so I'm keenly aware of the problem of "picking the relevant exposure base." If I have a population of 1,000 cars, all else equal it will have twice the accidents of a population of 500 cars. If I have a sample of 1,000 cars for 2 years, all else equal there will be twice as many accidents as with 1,000 cars for 1 year. In fact "car-years" is a standard unit of exposure. Then again, I could pick "households" as my basis for exposure. Some households have an old beater that never gets driven plus two or three cars for regular use. The old beater isn't as exposed to risk as the others. Not all car-years are created equal, but then again neither are all households. Perhaps I could use "miles driven." A car that drives twice as many miles, all else equal, will have twice as many accidents. But highway miles are safer than city miles. So maybe "equivalent highway miles driven," something that recasts all miles driven to an equivalent number of highway miles. Or maybe I just use "vehicle-years" and adjust each individual exposure for risk factors: the guy who drives 6,000 miles and the guy who drives 1,000 miles each gets one "car-year" of exposure, but the first guy gets a factor-of-six adjustment when I calculate his accident risk.
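The mileage-adjustment idea at the end of that paragraph can be sketched in a few lines. The base rate and reference mileage here are invented for illustration, not real actuarial figures:

```python
# Minimal sketch of a mileage-adjusted car-year exposure. BASE_RATE and
# REFERENCE_MILES are invented numbers, purely for illustration.
BASE_RATE = 0.05        # assumed accidents per car-year at the reference mileage
REFERENCE_MILES = 1000  # mileage at which one car-year counts as one unit of risk

def expected_accidents(miles_driven, years=1.0):
    """Expected accidents: car-years of exposure, scaled linearly by mileage."""
    mileage_factor = miles_driven / REFERENCE_MILES
    return BASE_RATE * mileage_factor * years

# The two drivers from the example: each contributes one car-year of
# exposure, but the 6,000-mile driver carries a factor-of-six adjustment.
print(round(expected_accidents(6000), 3))  # 0.3
print(round(expected_accidents(1000), 3))  # 0.05
```

The point of the sketch is that "car-years" stays the unit of exposure while the risk factor does the work of distinguishing the heavy driver from the beater that never leaves the garage.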

There are different ways of doing this, some equivalent to others. But I think using "incidents of use" or "days of active use" as the exposure for "risk of addiction" stacks the deck in a way that a "per lifetime user" basis does not. Likewise, most casino-goers don't have to worry about developing a gambling addiction. But if you recast your base as "per dollar gambled," you'll find a much larger proportion of dollars (maybe a majority?) are being gambled by people with gambling problems. If you're trying to assess a priori risk, you want an exposure base that causes the risk to rise linearly as the exposure rises. It would be silly to use, say, "dollars of insurance claims paid" as my exposure base, because this restricts us to automobiles that have already been involved in accidents. Likewise, the problem with addiction is that if you do it a little too much, you will become "captured" and end up doing it a lot too much. If the exposure base for the social problem you are trying to measure (be it auto accidents or drug addiction) skyrockets when a problem occurs, it's a bad exposure base.

I'll applaud Caulkins for raising an interesting point about what basis to use, but I don't think it's at all clear that the "per days of use" basis is the relevant one. It depends on the question you're trying to answer. "I'm offered cocaine for the first time. Should I try it?" I think the "per lifetime user" basis is the relevant one for answering this question. "I've tried it before, and I have the opportunity to acquire some tonight. Should I?" Maybe the "per use" basis starts to look more relevant for this kind of question, especially for the tenth or twentieth offering. I think the "per days of use" basis comes dangerously close to being a tautology. Caulkins cuts off the left tail of the distribution (too aggressively, I argued above) by claiming that those infrequent users aren't really exposed to addiction. Then, having censored the data to include only the right tail of the distribution, he argues that most of this cocaine use is done by people with addiction issues. Of course he brings in data on what fraction of lifetime users experienced dependence (the 16.7% figure cited above), so it's not literally a tautology. But if frequent, persistent use is part of the definition of dependence, we're still trapped inside a tautology. As in, "Let's define persistent, repeated use as problematic. Oh my goodness! Lo and behold, most drug use is problematic!" Breaking the tautology depends on the independence of the "drug dependence" question and the "persistent use" question. If these are strongly linked by definition, as I suspect they are, we're stuck in tautology world.

By the way, this is hard. I struggled with the issue of "what exposure base to use" in a previous post. Suppose I want to know the mortality of cocaine users. Are the "past year users" all at risk? Or just the "past month users" who presumably have a more serious and persistent habit? Let me just reiterate that I am not at all faulting Caulkins for raising this issue.

It's Hard to Deter Self-Harm

Suppose I'm wrong about everything and cocaine use really is the three-to-one game of Russian roulette that Caulkins thinks it is. Does that support the notion of drug prohibition? Of course not.

The problem with drug prohibition is that it eats its own tail. It requires the implausible dueling assumptions that drug users are irrational enough to ignore the risks of drug use, but rational enough to be deterred by legal penalties and the paltry price increase imposed by prohibition. ("Paltry" because the full price includes all those nasty risks in addition to the actual dollar price tag.) If you actually try to model this out by stating your assumptions clearly, you find that it doesn't work. Someone who is willing to play a 3-to-1 game of Russian roulette is someone who is not going to be deterred by a legal slap on the wrist, an increase in the market price (even a severe one), or the search costs required to find a dealer. If the bulk of the "cost" of cocaine use is embodied in the inherent pharmacological cost (risk of addiction and self-harm from continued use), then drug prohibition is unlikely to meaningfully deter these users. He cites the example of marijuana legalization leading to a massive increase in daily usage, but this is a distraction. Marijuana is not harmful or addictive, so it's actually plausible that prohibition causes significant deterrence. The legal penalties, higher market price, etc., are a significant component of the total cost in the case of marijuana. Not so in the case of cocaine, if we're to believe Caulkins' estimates of the risk of addiction. Make whatever assumptions you like about drug users, but keep those assumptions consistent. They're irrational? Cool, I can buy that. Then they won't be rationally deterred by anti-drug laws. They're rational after all? Cool, then their drug use must be a rational decision that you simply fail to understand. They rationally respond to legal sanctions while irrationally responding to the pharmacological risks? No, now you are confused. Specify what the demand curve looks like, but once you've done so stick with it and spell out the implications.
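To make the consistency point concrete, here is a toy "full price" calculation. Every number in it is invented; the point is only the arithmetic of a fixed legal markup set against a large or a small pharmacological cost:

```python
# Toy illustration of the "full price" argument. All numbers are invented;
# only the relative magnitudes matter.
def full_price(money_price, risk_cost):
    """Full price of drug use: dollar price plus perceived pharmacological
    and legal-risk costs, all in the same (arbitrary) units."""
    return money_price + risk_cost

# Suppose prohibition triples the money price of both drugs.
legal_money, prohibition_money = 10, 30

# Assumed perceived risk costs: large for cocaine, negligible for marijuana.
risk_costs = {"cocaine": 500, "marijuana": 5}

for drug, risk in risk_costs.items():
    before = full_price(legal_money, risk)
    after = full_price(prohibition_money, risk)
    print(f"{drug}: full price rises {after / before - 1:.0%}")
```

Under these made-up numbers, tripling the money price raises cocaine's full price by only about 4 percent but marijuana's by well over 100 percent, which is why prohibition could plausibly deter marijuana use while barely touching cocaine use, if the pharmacological cost of cocaine really dominates.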

Caulkins wrote a very thoughtful essay, and it has given me quite a lot to think about. I just don't buy his bottom line (about cocaine, anyway). I'll try to respond to other essays in the Law and Liberty forum as I have time.


I felt a need to respond to this part of his essay:

Second, I concede that prohibition harms many people, probably more than it helps. However, it harms most of them only modestly, whereas some whom it protects benefit enormously.
This strikes me as a pretty blithe dismissal of the suffering caused by drug prohibition. Are we shifting back to a per-person exposure base? I guess if the median person "harmed by drug prohibition" is the casual user who can't score any, or (to use an even broader base) the taxpayer saddled with the bill for a useless and ineffective drug war, this statement is literally true. But just as there is a thick right tail to the distribution of cocaine-related harms, there is a thick right tail to the distribution of drug-war-related harms. That is, there are infrequent but severe cases of harm that likely dominate the total harm, by any reasonable accounting. Let's not forget the innocent people whose homes are unnecessarily raided, the people languishing in prison because they triggered a mandatory minimum over an arbitrary weight limit, the families destroyed by the incarceration of their loved ones, the communities destroyed because incarceration has imposed a lopsided male/female ratio, the people overdosing on heroin or cocaine tainted with fentanyl (yes, that is the fault of drug prohibition, as much as the drug warriors would love to take a pass on this one).

Pardon me for dwelling on this, but what a lopsided comparison. Read the second sentence in that excerpt again. Are we comparing the median person harmed (implied by his phrase "most of them") to the very most extreme cases of drug abuse averted (implied by his phrase "some whom it protects")? If we do a proper cost-benefit analysis, weighing all costs against all benefits, Caulkins would have a very hard time justifying cocaine prohibition.

Saturday, February 17, 2018

Thomas Sowell’s Farewell Letter to His Secretary

In Man of Letters, Thomas Sowell publishes many of his personal letters. It is a very engaging read, and it gives you a real flavor for his thinking and his influences.

One letter is to his secretary Beverly, who recently quit (retired?). It's clear from Sowell's very heartfelt letter that he is sad to see her go:
I am of course very sorry to lose a very good secretary. But I have also gotten to know you somewhat over the past year or so, and if I may consider myself a friend, then as a friend I think you may have made the best decision. Just this past weekend I expressed my concern to my wife that you seemed to be making the job far harder on yourself than it needed to be, partly by trying to shape my decisions instead of simply getting me the information that I needed to make my own decisions. She suggested that I take you to lunch and air our different conceptions of the work. But, by the time I reached the office on Monday, you had made your decision.
Emphasis mine. I think this is a common employer-employee dynamic. The employee is trying too hard to shape the decision-making (beyond their actual mandate to do so), while the employer just wants the necessary information to make a decision. Sometimes it’s even cynical: the employee tries to influence the employer toward the decision that will require the least effort and headache (for the employee). The employer senses this and has to push back through the employee’s manipulation and stonewalling. Sometimes it’s sheer ego, as in the employee thinks they know better and wants to be the boss. And of course sometimes the employee does know better, and the boss’s boneheaded decision really does blow up in everyone’s face even though the employee tried to warn him.

This is a slightly different variation of something I wrote about in a recent post. It’s not specific to work relationships, either. I think it could be at play in any power dynamic (parent-child) or even between equals (partners in a firm or project). I feel like I’ve been on both sides of this conflict.

Some Quick Advice

Download a couple of good podcasts that you want to listen to. You’re reading my blog at this moment, right? Surely there is some podcast that’s just as good. Or maybe an audiobook or some talks or lectures on Youtube. Or maybe even some music. Got it? Awesome. Your opportunity cost for doing household chores is now very close to zero. You can wash dishes or clean the cat litter or declutter the surfaces in your home or dust or do laundry or clean bathrooms. Your spouse will appreciate it, and you’ll feel productive. You might even feel good about doing it. I usually end up enjoying the feeling when I’m immersed in a productive task, even something simple like house work. The "switching cost" or "activation energy" (getting started in other words) is sometimes rough, but once something gets started it's not that bad. It doesn't feel like work. Go forth! Sometimes I even take my own advice on this one. 

The Power of Mutual Knowledge

There’s a puzzle I first encountered on Steven Landsburg’s blog “The Big Questions.” It involves an island of 100 blue-eyed and 100 brown-eyed natives being visited by a foreigner. There is a strictly observed religious tradition to never talk about anyone else’s eye-color, and to commit ritual suicide within a day if you ever discover your own eye color. (There are no reflective surfaces on the island.) But of course everyone can see everyone else’s eye color. Everyone with blue eyes knows there are at least 99 blue-eyed people and 100 brown-eyed people, just as everyone with brown eyes knows there are at least 99 brown-eyed people and 100 blue-eyed people. They just don’t know their own eye color. A foreigner (who happens to have blue eyes) arrives by boat, spends several months visiting and learning their ways, then sails away. Just as he leaves, he says, “Well, how interesting that there would be blue-eyed people in this part of the world!” And he sails off.

At first glance, he didn’t tell them anything. “Of course, everybody already knew that there are blue eyed people on the island! The foreigner’s statement adds no information.” But if you work through the puzzle, you discover the surprising result that everyone commits ritual suicide on the 100th day. It's a subtle story about mutual knowledge slowly creeping in and eventually having horrendous consequences. (Note that Landsburg is making a very different point than I am.)

I have two dueling thoughts on this. My first thought is, “This is way too complicated for anyone to actually figure out. Nobody is smart enough to actually work this out and deduce their own eye color. In the real world, everyone would be safe.”

My second thought is, “Social life is unimaginably more complex than a simple rule about eye-color and ritual suicide. Of course people are constantly working out complex implications of mutual knowledge. Of course blurting shit out makes people uncomfortable. It may only 'reveal' information that everybody knows. But it reveals that everybody knows that everybody knows that everybody knows, ad infinitum.”

Imagine saying something unflattering about a coworker. “Everyone in this room knows you’re not qualified.” Everyone, including the accused, might already know, and everyone might suspect that everyone else already thinks it. But plausible deniability has been taken away. Now every time this coworker looks someone in the eye, he’ll see shame staring back at him. The boss, who was willing to tolerate the under-performer out of pity, doesn't have plausible deniability when someone asks, "How can you keep him on your team?" The coworker who was picking up the under-performer's slack feels emasculated if he continues. Everyone could live with the uncomfortable truth before it became mutual knowledge. It doesn't have to be such an obvious accusation, either. More in line with the puzzle, it could be a snippy comment about someone not carrying his weight. It's obvious enough who the target was, so mutual knowledge seeps in. 

You could think of other examples. You're in a group of friends, two of whom have an obvious mutual crush, and perhaps another friend in the group is jealous. Maybe everyone knows this dynamic exists, and maybe everyone suspects that everyone else knows. But blurting it out would be really uncomfortable. Even someone who indirectly hinted at it (perhaps with a light joke or teasing) might be scolded or shamed for creating an awkward moment. If you don't viscerally feel the discomfort of this scenario, think about how the group might split into factions. The jealous rival might feel compelled by shame to avoid the flirting couple. Other friends might feel compelled to choose between factions. Even when everybody knows and everybody suspects that everybody knows, everyone still has plausible deniability. 

In the same vein, merely stating that "some people" have cynical attitudes and do illicit activities may implicate you. In an alternative version of the puzzle given above, there is a society of 100 couples. Every husband cheats on his wife, and every wife knows about every infidelity of every other woman's husband (just not her own). In this version, she must murder her husband within 24 hours if she figures out he is a cheater. By the same logic as the blue-eyed and brown-eyed islander story, if some incautious outsider blurts out what everyone already knows, something awful happens. On day 100, all the cheating husbands die. 

This isn't about trivial matters of social faux pas and embarrassment. Dictators don't like crowds, because crowds tend to turn into angry protests, and these reveal to the world that, yes, everyone else is dissatisfied with the status quo. It's hard to maintain the fiction of a "100% approval rating" or a "bountiful harvest" in the light of this kind of public demonstration. Nicolae Ceausescu was brought down when people started chanting at a public speech and he lost control of the audience. The Arab Spring seems like another example. Why wouldn't Hosni Mubarak just sit in office and hold power? Why not just ignore the protesters and wait it out, like American presidents do all the time? I think this "mutual knowledge" dynamic is at play, and it cracks the armor of a dictatorship far more readily than it would a democracy.

I am stealing some of these ideas about mutual knowledge from a Steven Pinker book, though at this point I couldn't even tell you which one. The Blank Slate? Or maybe it was How the Mind Works.

Are there other good examples of this dynamic at work?

Think about an island with one blue-eyed and one brown-eyed person. On this island, the foreigner’s statement would cause the blue-eyed person to discover his eye color: he knows the other person’s eyes are brown, so he must be the blue-eyed person, and he commits ritual suicide. The brown-eyed person, seeing this, realizes that he must have brown eyes, or the blue-eyed person wouldn’t have discovered his eye color and killed himself. “If I had blue eyes, he would have waited a day.”

Now think about an island with two blue-eyed and two brown-eyed people. The blue-eyed people know there’s at least one blue-eyed person. The brown-eyed people know there are at least two blue-eyed people. The foreigner’s statement might first cause each blue-eyed person to think, “Oh, he’s talking about that blue-eyed person. If that blue-eyed person sees three brown-eyed people, he’ll commit ritual suicide within 24 hours. If not…” So when the other blue-eyed person doesn’t commit ritual suicide within 24 hours, each blue-eyed person says, “Uh oh. He was talking about both of us!” This is symmetric, so they both commit ritual suicide on day two. The brown-eyed people have worked this out, too, and so they know there were two blue-eyed people, not three. This allows them to work out that they must both have brown eyes.

Now think about an island with three blue-eyed and three brown-eyed people… work this one out yourself. By induction, the process keeps going: on the Xth day, all X blue-eyed people commit ritual suicide, and the X brown-eyed people, having thereby worked out their own eye color, follow. And all because one loud-mouth visitor blurted something out.

Or think about it this way. Obviously, if there's only one blue-eyed person on the island, the foreigner's statement that there's a blue-eyed person reveals that person's eye color to him. Ritual suicide on day one.
Given this, if there are two blue-eyed people, the foreigner's statement reveals that there's at least one blue-eyed person. On day two, each blue-eyed person works out that the other must be seeing a blue-eyed person (namely, himself) and commits ritual suicide.
Given this, if there are three blue-eyed people, after day two passes without suicides, each blue-eyed person works out that there must be three blue-eyed people, himself included.
And so on. There's no magic number where this induction stops working.
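This day-counting elimination can be sketched in a few lines of code. Here is a minimal Python simulation from a single blue-eyed islander's point of view (the function name and structure are my own invention, purely for illustration):

```python
def day_of_suicides(num_blue):
    """Track one blue-eyed islander's remaining hypotheses, day by day.

    He sees k = num_blue - 1 blue-eyed people, so the true count of
    blue-eyed islanders is either k (his own eyes are brown) or k + 1
    (his own eyes are blue).
    """
    k = num_blue - 1
    candidates = {k, k + 1}  # possible true counts of blue-eyed people
    candidates.discard(0)    # the foreigner: "at least one has blue eyes"
    day = 0
    while len(candidates) > 1:
        day += 1
        # If the true count were `day`, those islanders would have
        # deduced their own eye color and died today. They didn't,
        # so that count is ruled out.
        candidates.discard(day)
    # The lone surviving hypothesis is the true count, and the
    # suicides happen on that day.
    return min(candidates)

# No magic number where the induction stops working:
for n in [1, 2, 3, 10, 100]:
    assert day_of_suicides(n) == n
```

The only information ever transmitted is the passage of a day without suicides, yet with N blue-eyed islanders that's enough to collapse everyone's uncertainty by day N.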

Thursday, February 15, 2018

The Wikipedia Test

The "illusion of explanatory depth" confuses us into thinking we understand things at a deeper level than we really do. Simple questions like "How does a toilet work?" or "How does a bicycle work?" tend to stump us when we're asked about the specific mechanisms. Same goes for political topics and things on the news.

A decent test of your understanding is to look up the Wikipedia (or good ol' encyclopedia) entry for a topic that you have strong opinions about, and see if there's anything that's mind-bogglingly surprising to you. If you're finding a lot of surprises, and they seem to check out (check references! The Wikipedia is fallible!), then you probably didn't understand the topic as well as you thought.

I remember reading the Dakota Access pipeline Wikipedia page and being floored by the extent to which the builders had received voluntary easements. Apparently, to a very large number of people whose properties were affected, this was a pretty unobjectionable project (given appropriate compensation). I wish the people waxing wroth on my Facebook feed would have gone through this exercise. It might not have made them "pro-pipeline", but it would have made them re-think whether this was the world's greatest injustice.

Or do "the Google test". Simply look up the first few Google hits. Maybe search for "best arguments for/against..." Again, if there are a lot of surprises here, consider that maybe you need to do some reading, because you didn't understand your topic so well after all. I recently found that it was very easy to get the canonical list of supposed "non-neutrality" transgressions by internet service providers. I also found that this list completely falls apart when you look at the examples in any detail.

Maybe I'm mistaken about these topics. But the exercise of doing some research (even cursory research) on the topic that excites you is bound to yield some interesting surprises. Pick something that's been in the news, something you think you understand well, and start digging.

Latent Knowledge and Maps of Knowledge

Bryan Caplan’s latest book, The Case Against Education, is very engrossing. Also depressing. The most depressing piece of the book is the section that describes just how much we forget. When tested even months after the final exam, people seem to have lost most of what they learned.

I’m skeptical. I’ll start by describing my experience with the actuarial exams. These are four-hour exams that people typically spend four or five months studying for. (The pass rate is something like 50% for a good sitting.) There is a broad syllabus covering, say, 15 to 25 papers or textbook chapters. I would be able to work my way through the entire syllabus maybe four or five times in my 4+ months of study.

On the first pass through, it literally feels like you learn nothing at all. You read over the paper and an associated study guide, look at some practice problems, and go “Huh?” Almost nothing sticks. It’s hard to conceive of any test that would pick up the meager knowledge-gain of this first pass-through. Maybe you could vaguely detect that students pick up a concept or two on this first pass. Thoroughly confused, you move on to the next paper, and so on until you’ve done a first pass for everything in the syllabus.

On the second pass, you think, “Oh, this looks mildly familiar. But I don’t remember what the hell any of this is about.” But something magical happens. You say, “I understand it now.” You do a little bit better on some of the practice problems. Then you move on to the next paper, for which you find yourself having a similar “Ah ha” experience.

So plainly it isn’t possible that I learned nothing at all on the first pass-through. I would have loved to skip straight to the “Ah ha” of the second pass. I would have paid a fortune to skip that painful, humbling, slogging first pass. But plainly I had to go through this step. Clearly I was learning something that allowed the second pass to be more profitable.

I think much of what we learn and then forget is like this. I couldn’t necessarily pass any of my college or grad school physics exams. But I can pick it up again if I ever need to. More to the point, I can pick it up quickly and easily without a first slogging pass through it. I wonder how important this “latent knowledge-building” truly is. I’ve learned subject matter that I had never studied in school, so it might not be all that important after all. Surely someone has studied this concept. I wonder if Caplan came across any research on it? It seems like some critics of his book have brought it up, but I’m not sure Caplan has referred to research exploring/debunking this latent knowledge theory of education.

In addition to this latent knowledge, maybe we retain "maps of knowledge" after we forget the bulk of the subject matter. I may not remember how to do all varieties of calculus problems, but I know whether some problem I come across calls for an integration or a Lagrange multiplier. I can look up the appropriate textbook chapter. (BTW, I may have that rare one job in ten thousand that ever calls for these things, and then only rarely, and even then I use a computer to do it for me after setting up the problem.) It probably helps to "know your way around a topic." Then again, per Caplan's recent EconTalk exchange with Russ Roberts, you could probably design a test for this. "Which technique is most appropriate for this problem..." "Which textbook would you reach for if presented with this problem..."

None of this impugns Caplan's overall thesis. I still think we're all over-schooled, we forget too much, and we waste time on silly or useless topics. Caplan's guess that education is 80% signalling is probably a good estimate. Those actuarial exams I mentioned? 90% useless. It's just another long vetting process. "Can you pass these exams? (Stamps forehead with "Grade A".) Awesome, you get to be an actuary! Can you pass these exams, too? (Stamps forehead with "Grade AA".) Cool, you get an even better job as an actuary!" Or maybe I'm wrong and the latent knowledge and knowledge maps are really important. I just get the strong sense that most of the official actuarial syllabus is rarely or never used.