Tuesday, January 31, 2017

A Post About Fight Club and Sample Bias

Imagine if Jack from the movie Fight Club were to lecture you about auto safety.

In the movie Fight Club, Ed Norton (aka Jack) is an actuary who investigates automobile crash scenes in order to estimate the auto manufacturer’s liability.* Imagine such a person lecturing you about the dangers of driving. All of his most vivid experiences involve real-life car accidents, some of which involve significant carnage. He could probably go on for a very long time stringing together one anecdote after another. He could justify each rhetorical flourish with another example of a family being horribly maimed or killed. You might get skeptical and say something about how you can’t judge the risk of driving just by looking at examples of fatal or near-fatal car crashes. That’s a biased sample, to say the least. Obviously you’d want to start with the full sample of all drivers or all car-trips, and estimate the risk of a bad accident as a proportion of this greater total. But by the time you've managed to express this idea, he shuts you down with another vivid anecdote from just the other day in which a family was burned alive in their "safe" automobile. Driving is safe, indeed!

But of course you would be right to ignore his bluster and consult the actuarial tables to quantify the true risk. Generalizing based on the most vivid possible examples is generally a bad idea. If you know that someone has been gathering and assembling the worst 1% or the worst 0.01% of examples of something (or however far out in the right tail you want to go), they will be a biased source of risk information.

Switch the topic to drug legalization. Enter law enforcement and substance abuse treatment personnel and anyone else who deals with society's problems. "You naive drug law reformer, you simply do not know all the horrors I have seen," they might start. And they go on to regale you with example after example, not realizing that the vast majority of the relevant sample is hidden from their view. Of course, people with substance abuse problems are most likely to attract the attention of law enforcement and medical personnel. If you want to know how to deal with specific bad outcomes, these would be good people to consult for their opinions. (Sometimes, sometimes not...I have seen some of these people show very bad judgment even in their supposed domain of expertise.) But if you are trying to determine optimal drug policy, you should have some sense of how the typical potential user responds to the various risks and hazards of drugs. Your sample will contain the millions of people who dabble for a while and never develop a habit, or develop some kind of a "habit" but it never becomes a problem. Beware the problem of sample bias. It is lurking everywhere. A skilled demagogue, or even someone who is honest but oblivious, can really mislead you if you aren't careful.

All this is simply to point out the issue of sample bias. Forget for a moment that many of those overdoses, blood-borne pathogens, and other problems are themselves products of prohibition. That fact will also be hidden from the view of someone who simply looks at the screw-ups and tries to extrapolate from there. The non-problem users in his (the cop/E.R. medic/social worker's) own world are hidden from his view; the non-problem users in the counterfactual world of rational drug policy are doubly hidden. These law enforcement or medical professionals could at least see some kind of survey evidence for the existence of the hidden population of non-problematic drug users. But they can't "see" how much better the problem-users fare in a counterfactual world where clean needles are freely available, drugs are cheaper and thus don't require property crimes to support a habit, and drugs are of pharmaceutical grade and known purity (thus leading to fewer poisonings). "Seeing" in this way requires the disciplined use of logic and statistics, and somebody who is blinded by vivid anecdotes won't be able to do it. Since I've discussed these issues in other posts, I won't rehash them all here. I just wanted to point out that sample bias isn't the only problem contributing to these folks' misunderstanding of the issue.

Scott Alexander makes a similar point about sample bias in this excellent post. No, you can't just pile a bunch of horrific anecdotes on top of each other. You have to know something about the base rate. You have to know how large a sample you are digging through to dredge up the bad outcomes.
___________________________________________________________________

* He describes his work in the following excerpt: “Take the number of vehicles in the field, (A), and multiply it by the probable rate of failure, (B), then multiply the result by the average out-of-court settlement, (C). A times B times C equals X. If X is less than the cost of a recall, we don't do one.”
At this point in the movie you're supposed to imagine that the manufacturer is a cynical monster, a hulking avatar of capitalist excess and greed. But actually the formula is about the right criterion for issuing a recall. It might be prudent to make the cutoff "X times 1.5" or "X times 3" or something more conservative than simply "X", but there is no way that the value of X is completely irrelevant to the recall decision. Anyway, that's an argument for another post.
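
For the arithmetically inclined, here is a minimal sketch of that criterion in Python. The figures are invented purely for illustration, and the optional safety margin is the "X times 1.5" idea from above:

```python
def recall_is_worthwhile(vehicles_in_field, failure_rate, avg_settlement,
                         recall_cost, safety_margin=1.0):
    """The movie's criterion: expected liability X = A * B * C.
    Recall when X (times an optional safety margin) exceeds the recall's cost."""
    expected_liability = vehicles_in_field * failure_rate * avg_settlement
    return expected_liability * safety_margin > recall_cost

# Illustrative-only numbers: 1,000,000 vehicles, a 1-in-10,000 failure rate,
# and a $2,000,000 average settlement give an expected liability X of $200M.
print(recall_is_worthwhile(1_000_000, 0.0001, 2_000_000, recall_cost=500_000_000))  # False
print(recall_is_worthwhile(1_000_000, 0.0001, 2_000_000, recall_cost=150_000_000,
                           safety_margin=1.5))                                      # True
```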

Monday, January 30, 2017

It’s Going to Be Okay

The sun will rise tomorrow. Buildings will not fall. You’ll get up, go to work, go back home to hang with your people, then go to bed again. A lot of stuff will happen that you can’t do anything to stop, but then that’s always true. We tend to put way too much thought into what government does and way too little into how to make our own lives better. Which is ridiculous considering that we have so little control over the former and so much control over the latter.

Do you want these next four years to not suck so bad? It’s not that hard. Be nicer to your spouse, kids, parents, and friends. Do more helpful chores around the house to take the load off of other people. Play with your kids more. Leave each day a little bit better than you found it.

Do the same at work. Hustle a little more. Learn a new skill, even a new career. Consider taking some online classes or sitting for a professional exam to improve your resume. There are probably little things you can do that will pay off a little, and there are big things you can do that will pay off big. Find out what they are and do them.

Are you stretched too thin already? Maybe do the opposite of what I’m saying and give yourself a break. I certainly know people who go to both extremes, that of “too much” and of “too little” effort.

None of this is to say that bad government is irrelevant to the quality of your life. Governments can do horrible things that make your life miserable. My point is that it makes sense to allocate your attention and resources to the things that you have control over. There are things you have little or no control over, and sometimes these things make for interesting hobbies. I myself am a voracious reader on all things political, so if it sounds like I'm making "politics" into a vice then it's a vice I indulge to the hilt. I'm an infovore, and I enjoy the intellectual exercise of policy analysis. But I don't let it distract from the ways I can improve my own life. There are many levers to pull and dials to adjust that affect the quality of your life, and most of them are thoroughly under your control. 

Friday, January 27, 2017

How To Think About Government Regulation

Suppose a customer wants some kind of safety feature or quality assurance on a product they are buying. Say it’s a warning tone in an automobile, or a quality inspection on apples, or whatever else you can imagine. Suppose the customer is willing to pay up to $5 extra to add this feature. If the feature costs less than $5 to produce, the company will obviously add it to the product. If I can spend $2 on added quality and charge the customers an extra $5 per unit, that’s an extra $3 per unit I can pocket. (Or, more likely, I’ll charge somewhere between $2 and $5 and share some of the surplus with the consumer. Anyway, re-write the example with whatever numbers you prefer.) On the other hand, if it costs more than $5 to produce the feature, the customers don’t really want it. They aren’t willing to pay the cost of production.

If you think this through, you start to realize that government regulation is gratuitous. If a regulation “forces” a company to produce something that the customer is already willing to pay for, that regulation is completely redundant. A free market will already supply that particular “regulation.” If a regulation forces a company to produce something that the customer is not willing to pay for, it’s actually doing harm. What this regulation is really doing is forcing the consumer to bear the cost of something they don’t actually want. If you impose an extra, say, $10 in production costs on something by adding a feature that the customer only values at $5, you’ve destroyed $5 in value per unit. You’ve harmed the consumer by $5 for each unit purchased.

This ignores “externalities,” or costs to third parties. The consumer might not want to purchase a $20 pollution-mitigating mechanism on their car even though the benefit to society is, say, $50, because the consumer only gets a tiny fraction of that benefit. Regulation of such "externalities" can in theory be justified because everyone wants everyone to have the pollution-mitigating device even though everyone is tempted to cheat by not buying his own. But my above discussion certainly applies to regulation that is meant to benefit the consumer directly. A safety inspection on the meat that I purchase directly benefits me; I don’t benefit third parties by assuring the safety of the food that I eat. There is no “pollution” effect on most of these kinds of safety/quality regulations. 

The above argument also applies to various labor regulations. If the cost of a safety practice (or safety equipment, or a comfortable amenity like climate control) is less than the value the worker places on it, then the employer will buy it for his workers. If your workers each value the additional safety at $100 per week and it only costs you $50 per worker-week to provide the safety, you would have to pay your workers an additional $100 to endure risks that you could mitigate for $50. Even if there is "market power" or something else going on here, there is clearly a deal to make and a surplus to share. (Here is a good treatment at Econlib.)
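
To make the arithmetic of the last few paragraphs concrete, here is a minimal sketch in Python. The dollar figures are the made-up ones from the text, plus one assumption of mine (the buyer's private share of the pollution benefit):

```python
def net_value_per_unit(value_to_buyer, production_cost):
    """Value created (positive) or destroyed (negative) per unit by adding a feature."""
    return value_to_buyer - production_cost

# Case 1: the customer values a feature at $5 and it costs $2 to produce.
# A profit-seeking firm adds it on its own; a regulation mandating it is redundant.
print(net_value_per_unit(5, 2))    # +3 of surplus for the firm and customer to split

# Case 2: a mandate forces a feature that costs $10 but that customers value at $5.
# Every unit sold now destroys $5 of value.
print(net_value_per_unit(5, 10))   # -5

# Case 3: externality. A $20 pollution-mitigating device is worth $50 to society,
# but the individual buyer captures only a sliver of that benefit (assume $2 here).
print(net_value_per_unit(2, 20))   # -18: the buyer rationally declines
print(net_value_per_unit(50, 20))  # +30: society as a whole wants the device

# Case 4: worker safety. Workers value a safety practice at $100/week; it costs $50.
# The employer provides it rather than paying $100/week more in wages.
print(net_value_per_unit(100, 50)) # +50 of surplus per worker-week to share
```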

The point isn't to say that "no regulation is good" or "all regulation is pointless." That is close to my own position, but I won't try to sell it here. The point is rather that you should think more critically about some proposed piece of at-first-blush reasonable regulation. What is the cost? Why aren't consumers already demanding it ("demanding" implicitly meaning "being willing to pay the production costs")? Is there an obvious externality or market failure here? Is there really asymmetric information (in which a sophisticated company bamboozles low-information customers), or are customers actually indifferent and thus ignoring readily obtainable information? Think about it. You might not get all the way to "There should be no government regulation at all." But plainly the stereotype of greedy corporations looking to slash safety and quality costs to earn a quick buck is a mistake. Slashing quality costs a corporation more than it saves, once you think for a moment about the customer's willingness to pay. (I'm assuming here that actual fraud, selling something other than what you say you are selling, is illegal under any regulatory framework. Just as theft and murder are illegal under any regulatory framework. There are common law doctrines that exist basically everywhere and aren't a function of the regulatory state.)

Wednesday, January 25, 2017

Reacting to "Outrage Porn"

Emotionally reacting to individual news stories is kind of silly. Some news is useful. But the latest sob story about some horrific crime is not useful and it’s barely even information.

I like the term “outrage porn”, typically in reference to these outrageous news stories about individual crime cases (or sometimes outrageous non-crimes). I imagine this dialogue:

Person 1: Did you realize that a really horrible thing happened this week?
Person 2: Well, I guess I assumed that *something* horrible happened this week, considering that horrible things happen every single day. I don’t see why knowing about a specific instance would meaningfully add to my knowledge.
Person 1: Yeah, but did you see…THIS! (Horrible, outrageous headline.)
Person 2: WHAT! This is an outrage!
Person 1 + Person 2: Rabble rabble rabble!

Sometimes the news meaningfully informs you about your world. As far as I can tell these outrage stories do not. I think some people string together collections of these news stories, each one alone being merely an anecdote, and fool themselves into thinking they are incisively analyzing a real social trend. No, you need some statistics to do that, along with the competence to interpret those statistics. Take off the "moral outrage" hat for a moment and put on the "skeptical analyst" hat. It's rewarding in its own right, and it will help you communicate more meaningfully with people who hold opposing worldviews.

Declining to “Respect Your Opinion”

When is it reasonable to decline to “respect” somebody else’s opinion? I’ve done this before. I have two specific examples in mind, and in both cases I could not bring myself to say or type the words, even though I felt like some sort of olive branch was being extended to me. This is true even though I respect most of the people I know as people. You can be a good person, a loving parent, a hard worker, and I deeply respect those qualities in my friends and acquaintances. I just might not particularly respect their political commentary on topics they might be ignorant or dogmatic about.

I don’t think I’m being completely unreasonable or prickly here. If someone has at least a plausible argument in their defense, I usually respect their opinion. But there is a bar to clear. If someone adopts a position that I find morally repugnant, I don’t respect their opinion (unless it is particularly well argued or somehow interesting, and even then it is the approach and not the conclusion that I respect). If someone utterly fails to bring any argument to bear on the discussion, I probably don’t respect their opinion. If someone just asserts “The earth is the closest planet to the sun” and has no response when presented with contrary information, I feel no obligation to respect that. Some political arguments are literally this terrible. Likewise, if the arguments presented are just bad, and the presenter of those arguments refuses to budge after they have been thoroughly dissected, I don’t feel obligated to respect their opinion. If any of this sounds really snobbish, try imagining yourself saying “I respect your opinion” to a neo-Nazi, or an intransigent child making a factually incorrect claim, or a person adamantly claiming “2+2=5, because my hair is a bird.” Everyone has a bar to clear. In fact, depending on how repugnant or how poorly argued your interlocutor’s position is, you may well say, “I don’t respect your opinion, and I don’t particularly respect you either.”

Sunday, January 22, 2017

Learning the Front Handspring

I thought I’d do a post about a physical skill I’ve been learning: the front handspring. I always wanted to learn those cool gymnastic moves, and now at the age of 35 I’m finally doing it. Sort of. It’s been a long, exhausting, frustrating journey but I finally have a serviceable front handspring.

This won’t be a tutorial; there are plenty of good ones out there. Watch a few Youtube tutorials (here is my favorite, but I've watched many others) and read this Crossfit document and this series of three blog posts by a gymnastics coach. I had to watch every tutorial I could find about fifty times and re-read the blog posts and Crossfit document about a dozen times to catch all the mistakes I was making. So have your library of tutorial videos and reading materials ready to go.

This post will be more of a motivational and trouble-shooting post, perhaps filling in some holes or emphasizing different problems than the tutorial videos.

First of all, get a yoga ball and maybe some gym mats. If you have a very soft carpeted floor (what I started on), that might work, but it’ll be murder on your knees, hips, and back to do handsprings repeatedly on a hard surface. You can also try it in the grass outside. Position your yoga ball somewhere on the floor, take a hurdle step toward the yoga ball, kick up into the tallest handstand you can manage with your hands placed just in front of the ball, and fall forward over the yoga ball. The ball will serve as your spotter and bounce you up onto your feet. Trust me, you won’t get bounced off in some odd direction. You’ll bounce up in pretty much the direction you were headed. Take a deep breath and try. Then try a few more. You’ll get used to going over yourself, which is an important confidence builder. You'll probably start off flopping onto the ball at the small of your back and bouncing off with most of your weight. This means your arms aren't straight enough. As you progress, your arms will straighten until you're barely grazing the ball on the way over, probably grazing with your shoulder instead of your hips. 

Now record yourself doing a few. Review the tape, then review the tutorials on Youtube. On an iPhone you can scrub through the video by sliding your finger at whatever speed you like and freeze on the frames you need to. Notice everything you’re doing wrong. You’re probably bending your arms too much when you’re upside down with your weight on your hands. This is wrong; your arms are supposed to be straight. You’re probably reaching down to the ground with your arms. This is wrong; your back leg is supposed to kick up and drive your hands to the ground. Your arms and kicking leg are supposed to be in a straight line, as if you were one of those drinking bird toys dipping down for a drink. Your forward leg is supposed to start bent, then spring to give you some lift just as your hands touch the floor. Think of it as your lunge leg helping to spring you up into a really tall handstand; it’s just that you’re going to go over rather than staying up. You’re probably tucking forward, trying to look for the ground in front of you. This is wrong. You’re supposed to curl your legs under you and arch your back powerfully, like doing a back-bridge in midair. Lie on the ground for a moment. Try tucking your chin forward and doing a back bridge, then try bridging with your head flexed back. It’s a lot harder to arch your back with your chin tucked forward, but in order to land properly you need this arch in your spine. When you go over, you should be looking backwards and down at your hands, not trying to look forward to spot your landing. Then there’s the “block” with the hands, where you push yourself up off the floor. Make sure you’re doing this at the right time.

I went through this about a thousand times, looking at my terrible front handsprings in slow-motion on my iPhone video and trying to spot my errors. I noticed that I was bending my left arm while I was upside down in a handstand. To correct this error, I had to make sure I was jumping off my lunge leg, which gives you upward momentum. The force of my back-kicking leg was driving me down, so I was forced to bend my arm to absorb the impact. I was indeed tucking my chin on the way up, rather than looking back so that my back could properly arch. As a result, I was landing very hard on my heels with my knees bent and my weight far behind my feet, rather than landing more softly on the balls of my feet. Doing it the wrong way is murder on your joints, so if you have a problem here fix it fast. I repeatedly missed one or another of these steps and had to record myself to see what I was missing. I repeatedly drilled the one part of the movement that was missing, and then promptly forgot another piece. It’s frustrating as hell, because I could do them fine one day then forget how to do it the next. After several days of messing up badly, I went back to doing them over the yoga ball, which would briefly fix my problem before some other problem arose. But I think I’m finally at a point where I’m reasonably proficient in this move. It took about three months of practice, but I got there.

If possible, get a good coach and schedule a few practice sessions under their tutelage. They’ll be able to correct things that you don’t see yourself doing, suggest useful drills that address your deficiencies, and tell you if you’re doing the move safely or not. I really wish I had had a few sessions with a good gymnastics coach, but it was fun trying to learn it on my own. 

Private Drug Prohibition Is Adequate. Government-Enforced Prohibition Is Unnecessary.

I’m not opposed to all forms of drug prohibition. There are some forms that I like and that I believe to be necessary. But in those situations where prohibition is called for, it will virtually always be enforced privately. I’m not talking about some shadowy private security force doing armed SWAT raids on civilian residences looking for drug stashes (think Blackwater, the private military company, or Lone Star Security from the game Shadowrun). I’m talking about boring everyday life making drug use an impossibility for most normal people.

Most people spend around half of their day in a restrictive institution of some sort. Most adults go to work. Most children go to school. Even outside of these environments, you may go to a store to do some shopping or go to a restaurant to eat. Many of these environments make drug-induced intoxication unfeasible. You could get drunk and go to work, but you’d most likely be found out and severely punished or, if it’s a repeat offense, almost surely fired. Likewise, if you go to a grocery store rambling incoherently and bothering the other customers, you will most likely get ushered out. Work and school are going to be far less forgiving than the restaurant or the store, but virtually everywhere you go there will be something constraining you from full-on drug intoxication. Even if we assume that workplaces and schools aren’t very good at spotting and punishing drug use, there are intrinsic penalties for these behaviors. Your work will suffer until you eventually get fired or kicked out of school, or at the very least you will not prosper as much as you otherwise could. Even if your boss is oblivious to your high-functioning alcoholism or your methamphetamine habit, if your work suffers your boss will eventually notice that.

Supposing your drug use doesn’t affect your work, there’s no cause for concern anyway, as your habit isn’t causing a problem. If someone is very privately using intoxicants in a way that doesn’t affect their work or school, it’s hardly their employer’s or school’s concern. In this sense it’s unreasonable to do drug testing; your boss or school can observe your work product directly and decide whether it’s adequate. With the caveat that some drug use might have no observable effect on day-to-day work output but slightly raise the risk of a catastrophe (think an airline pilot using cocaine), directly observing someone’s work is the sensible solution. Testing for something that isn’t causing an obvious problem is a waste of time.

I can feel your objection welling up. “But…people do drugs anyway and ruin their lives. Clearly private deterrence is inadequate, right?” And to that I say, You’re missing the point. Yes, some people feel the bad consequences of their destructive drug habits. That is how incentives work. The people who behave irresponsibly are hurt by their irresponsibility. Observe that most people don’t have self-destructive drug habits. They see ahead of time that there is an enforcement mechanism in place and decide not to, say, smoke weed before reporting to work, or take heroin before sitting through a class lecture. Perhaps the negative consequences are so obvious that the idea of starting a drug habit doesn’t even occur to most people. It’s not a conscious decision not to smoke a bowl in the morning; the need to get your kids to school and get yourself to work rules it out completely. For most people. And the guy who does try it goes to work smelling of his obvious habit, unable to focus, and gets punished or fired.


Perhaps I haven’t answered the objection fully. The argument is that government-enforced prohibition enhances the already-existing (if sometimes tacit) private enforcement of drug prohibition. Does it make sense to go after those remaining screw-ups for whom the implicit penalties for drug use are inadequate? Not really. Not at all. It simply does not make sense to try to penalize people out of harming themselves, particularly when the demand for the drug is inelastic (as it must be for any self-destructive drug habit). The intuition here is that I have to harm you more than the drug harms you to get you to stop. Even admitting that a few potential drug users are deterred, the harm to the remaining users is so exacerbated that it doesn’t justify the costs. This is the outcome of very straightforward economic reasoning, requiring no formal grounding in economics and no exotic theorems. A bit of logic alone will get you to this answer. Unless you posit some very strange, stilted assumption about how drug users respond to legal penalties much more strongly than to implicit penalties of comparable magnitude, you basically have to accept this conclusion: drug prohibition does more harm than good. It’s completely unnecessary, and such deterrence as everyday life naturally provides keeps most of us clean most of the time.
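
Here is a minimal sketch of that reasoning. Every number in it is an assumption chosen purely for illustration (the elasticity, the size of the penalty, and the normalization of the drug's own harm), not an empirical estimate:

```python
# Illustrative-only numbers: how a penalty plays out when demand is inelastic.
users = 1_000_000
elasticity = -0.2                   # assumed inelastic demand
penalty_as_price_increase = 0.50    # penalty equivalent to a 50% price hike

deterred_fraction = -elasticity * penalty_as_price_increase   # ~10% stop using
remaining_users = users * (1 - deterred_fraction)

harm_per_user = 1.0      # normalize the drug's own harm to 1 unit per user
penalty_per_user = 0.5   # the penalty, expressed in the same units

total_harm_without_prohibition = users * harm_per_user
total_harm_with_prohibition = remaining_users * (harm_per_user + penalty_per_user)

print(total_harm_without_prohibition)  # 1,000,000
print(total_harm_with_prohibition)     # about 1,350,000: unless deterrence is
                                       # large, the penalty adds net harm to users
```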

If you’re going to bring up harm to third parties (intoxicated motorists, neglected children, alienated friends, etc.), I answer that point here. In short, the externalities are 1) grossly exaggerated and 2) already internalized, to the extent that the drug-induced misbehaviors are themselves already illegal. I wish the prohibitionists would take their own arguments a little more seriously. My impression, having thought this through, is that they don’t really have a leg to stand on, but they don’t have the patience to think the arguments through to their conclusions.

Saturday, January 21, 2017

No Moral Trump Cards in Health Policy

Apparently some people think you can answer questions about health policy with their moral outrage alone. At least, that’s the impression I get from my Facebook feed. I see arguments every single day of the variety, “It’s just the right thing to do…” or “It’s just wrong to let someone go without healthcare” or (and this one is a paraphrase from memory, but it may be the exact language) “It’s called being a fucking decent human being!” These are bad arguments. They aren’t even arguments, really. These are simply assertions by the speaker (or writer or meme-sharer) that they hold the moral high ground and will make pronouncements thusly. (At the moment I’m talking about the potential repeal of the ACA, but I could be talking about any public flare-up over health policy.)

I’m sorry, but you simply cannot draw conclusions about health policy without being a little bit analytical. How much would it cost to save one quality-adjusted life-year (QALY, a standard unit of relative health)? Is this close to how much most people objectively value a year of life? Is it an order of magnitude higher, thus rendering a strong verdict against the health policy in question? Is it an order of magnitude smaller, thus rendering a strong verdict in favor of said health policy? Based on our best evidence, might the health effects be *negative*, thus making the whole policy morally dubious? How are other values, like freedom of choice and property rights, to be weighed against a policy that forces you to purchase something? If the cost per life saved came out to something absurd, like a trillion dollars, would you still favor it (as a “moral trump card” argument would commit you to doing)? Is there massive uncertainty in even our best estimates of the benefits, thus making it impossible to justify the enormous expense of the law? To play a moral trump card is to dodge all these important questions.

People often use the following hypothetical: “If we can save the life of an uninsured man with no resources for a million dollars, aren’t we obligated as a society to do so?” This is the equivalent of making about 20 median households work an entire year for the sake of one person. (20 households x $50,000/year income per median household.) You can try to make this sound noble, but to my ear it sounds like we’re enslaving 20 households for a year for the benefit of a single person. I’m not trying to support any particular answer to the question posed by the hypothetical (mine is “No”). I’m simply pointing out that there is a trade-off of rights here, and it isn’t clear a priori whose rights should dominate. I think even the moral trumpists on my Facebook feed would balk at the idea of spending a billion dollars to save a life, or the idea of *literally* enslaving 20 people, conscripting them to work exclusively on the task of saving one man’s life. In fact I think they’d balk at the idea of enslaving even *one* person for significantly shorter than a year, even if they objectively valued the man’s life higher than the conscript’s freedom. If a slight re-framing of the hypothetical causes you to answer differently, then it’s no more than a superficial rhetorical trick.
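
To spell out the arithmetic behind that hypothetical (and the cost-per-QALY question above), here is a minimal sketch; the number of QALYs gained is my own assumption, for illustration only:

```python
# Back-of-the-envelope version of the hypothetical: one life saved for $1,000,000,
# financed out of median household incomes of $50,000/year.
cost_of_treatment = 1_000_000
median_household_income = 50_000

print(cost_of_treatment / median_household_income)  # 20 household-years of income

# The same framing per quality-adjusted life-year: if the rescued patient gains,
# say, 10 QALYs (an assumption for illustration only), the implied price is:
qalys_gained = 10
print(cost_of_treatment / qalys_gained)  # $100,000 per QALY
```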

You can continue to grandstand on some all-trumping moral principle, but if you're playing that game your opponents can simply grandstand on some other moral principle that you are neglecting (freedom of choice, property rights, etc.) and they would be on equal footing. There is no framework for discussing anything in this kind of exchange. However, if you engage a little bit with the concept of trade-offs, if you are a little bit empirical about your assessment of government policy, if you employ a little bit of numeracy and mathematical thinking to compare the relative values in conflict, you can have a reasonable conversation about the topic under contention. 

Friday, January 20, 2017

Here, Let Me Think That Through For You

You know how sometimes people ask a silly question that’s easily checkable with a Google search? Or make a factual claim that is easily correctable with such a search? “Here, let me Google that for you” is a wonderful snarky response to this behavior. Sometimes it’s a technologically incompetent person who really doesn’t know how to find the information they want, and sometimes it’s a technology native who just forgets that he has all the world’s knowledge at his fingertips. Either way, it’s fun to remind people that there’s an easy solution to their problem.

A different version of this problem is the following. Someone blurts out whatever arguments come to mind without really thinking about them, usually in rapid-fire succession. It may be someone who has very poor impulse control and no filter. Or it might be someone who is losing an argument dead-to-rights and is flailing in panic. Whatever the reason, some people are prone to producing this kind of blather. These interactions can be interesting, as this sort of free-association might by luck hit upon a nugget of truth. And a thoughtful person can benefit from thinking through someone else’s bad ad hoc arguments and dissecting them. It’s like a game of “‘Why?’ Boy”, in which a child keeps asking “Why?” at each turn and you have to go deep down to the foundations of your knowledge. You learn something from this process, even if the ‘Why?’ boy doesn’t.

Mostly, though, this is a huge distracting waste of time. I wish that people would think through their arguments a little more clearly before sharing. Don’t tax the patience of your correspondent with half-baked ideas. I realize there is some ambiguity as to whether this is happening or not. (After all, isn’t there another person on the other side of the argument who thinks you’re doing the same thing?) But there are some tell-tale clues. If you present someone with novel, relevant information that they haven’t digested and they have an immediate response, they are blathering. If someone makes a claim of a statistical nature but can’t offer any numbers (or even a hint of numeracy), they are blathering. If someone hasn’t thought through your argument but blurts out the first objection that pops to mind, they are blathering. Beware of people who are too quick on the draw with the “post” button.

Examples help illustrate a point, so I’ll offer some recent ones. The purpose of this isn't to revisit the argument or embarrass anyone, just to focus on something specific. The other day I got several very absurd responses from someone who clearly wasn’t thinking. It was a discussion of health policy and how to evaluate whether it’s working or not. This person asserted several things that made no sense. His claim was that it would take a generation to evaluate the effectiveness of a health law. On the contrary, if you’re looking for the effects of a recent policy change, you look *closer* to the time of the policy change. A longer timeline allows more confounding factors to dominate the trend. Another bizarre claim was that it’s hard to evaluate policy that’s applied to millions of people. True enough, but the *size of the sample* is not the issue. A larger sample makes small effects *easier* to see, not harder. I said exactly this, and my interlocutor disagreed directly with this statement. No, it’s difficult to analyze policy because of *confounding factors*, i.e. all that other stuff that’s going on at the same time. A true ceteris paribus comparison isn’t possible. Perhaps this person actually had this in mind and did a bad job of explaining his argument, but then this would have been contrary to his claim that we’d have a better estimate of the benefits if we waited a generation to see the results (a generation being a long enough timeline for a lot of confounders to build up). This person seemed to have gotten some basic principles of statistics and social science backwards. (A larger sample size is better, and an effect that is *closer* in time to the policy change is easier to attribute to the policy.)

I presented some information about randomized controlled trials in which lots of people get free healthcare while a control group doesn’t, and the “free medicine” group doesn’t appear to get any healthier. (See the RAND healthcare experiment and the Oregon Medicaid experiment if you want to know more about this.) This evidence bears directly on the policy question. What’s the sense in giving away a lot of extra medicine if there isn’t an appreciable health benefit? I don’t know if he was familiar with this body of scholarship. He didn’t say. He simply asserted that the health effects would be real but too small to measure.  I cannot prove but strongly suspect that this was a case of someone rejecting out of hand *extremely* relevant information for the item under discussion.

Now, “The health effects are real but too small to measure” is a statistical claim. It’s not just something you can blurt out. The exact size of “too small to measure” can be determined using statistics. (Actuaries like myself use something called "credibility theory" to determine how much trust to put into a dataset, depending on the sample size and strength of signal vs. noise.) It’s possible to calculate how small an effect would be “too small to measure” and perhaps show that it’s absurdly costly to save a single life. Would anyone want a health policy that cost, for example, a trillion dollars per life-year saved? A million dollars even? I wouldn’t, and most people don’t objectively value their lives that much, as determined by willingness to purchase safety features or take dangerous jobs (to list two ways that economists try to "value" a human life). Something that should have been an explicit calculation was instead just asserted. I could have just as easily asserted that the health law had *negative* health effects that were too small to measure, and we’d have been on equal footing. As it happens, I *did* open up an Excel workbook with some mortality-by-age data in it. Very crudely, I convinced myself that you’d need to see a few percentage points change in mortality for very young people, about a 1% change in mortality for 50-year-olds, and less than a 1% change in mortality for 60+ year-olds, to see a statistically significant mortality effect. That should be a starting point for anyone offering a “the benefits are too small to measure” kind of argument. My “analysis” was totally back-of-the-envelope and is probably wrong for various reasons; someone else might get another answer for what “too small to measure” means. My point is that you have to do some work, you have to use figures (real or at least made-up-but-plausible), and you have to use some math when you’re making a statistical argument. It’s inconsiderate to just assert something and make someone else do the legwork for you. “Here, let me think that through for you.”
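
As an illustration of what such a calculation might look like (not the spreadsheet exercise described above; the baseline mortality rates and group sizes are placeholders of my own), here is a minimal sketch of a standard two-group power calculation:

```python
import math

def detectable_relative_change(baseline_rate, n_per_group,
                               z_alpha=1.96, z_power=0.84):
    """Smallest relative change in a mortality rate that a comparison of two
    groups of size n_per_group could reliably detect (normal approximation
    to the binomial, 5% significance, 80% power)."""
    p = baseline_rate
    se = math.sqrt(2 * p * (1 - p) / n_per_group)
    return (z_alpha + z_power) * se / p

# Placeholder annual mortality rates by age group and a 10-million-person
# comparison group -- assumptions for illustration, not the post's figures.
for label, rate in [("young adults", 0.001), ("50-year-olds", 0.005),
                    ("65-year-olds", 0.015)]:
    print(label, f"{detectable_relative_change(rate, 10_000_000):.1%}")
# Prints roughly 4.0%, 1.8%, and 1.0%: the rarer the baseline outcome, the
# larger the relative change must be before it is distinguishable from noise.
```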

If my interlocutor is reading this, my apologies for using you as a foil, but this isn’t about you. I’m not trying to be gossipy. The point I’m making has nothing to do with any one person or any particular argument. It just helps to have an example at hand, and this one serves nicely. I have probably wasted way too many hours of my life thinking through bad arguments left on my FB page. A healthier habit might be to simply ignore distracting blather and only communicate with people who state their arguments clearly. It bothers me to think that a bad argument can just be allowed to stand. Someone might see it and think it's superficially plausible. Unfortunately this puts me in a position of sometimes arguing with thoughtless people and reasoning through their arguments for them, and I always feel a little bit cheated when I get sucked into one of these discussions. As a clever friend of mine put it, the thoughtless commenter has a huge cost advantage over the thoughtful one. 

Tuesday, January 17, 2017

Healthcare Policy Changing. Falsifiable Predictions Wanted.

When the ACA first passed, I remember posing the following challenge: “Ok, ACA supporters, you got your law. Now what’s going to happen? How will the benefits manifest themselves? Make a prediction that we can check. In a few years you should recant if you’re way off. I may take a bet against your prediction, if it’s well-specified enough.”

It would have been interesting to hear people’s actual responses to my question, assuming anyone had thought about it. I was fishing for something like, “Mortality rates will fall for poor minorities by X deaths per 100k population,” or “Diabetes rates will decline by Y%, high blood pressure by Z%, among the currently uninsured population.” The point here is that a healthcare law should somehow affect health, and if it doesn’t noticeably do so it has failed as public policy. It’s fake medicine. The people who were singing the praises of Obamacare should have been willing to make some kind of falsifiable prediction the moment it passed. I took their unwillingness to do so to imply a lack of serious thinking. I never got a meaningful response, nor did I see any prominent pundits or bloggers independently come up with the same idea.

(By the way, it’s not enough for someone to point out that an improving trend has been at work since the law was passed. It could be a pre-existing trend that can’t possibly be attributed to the law. In fact, life expectancy has been rising since 2000, so you have to be really careful about attributing any improvements in health to a bill passed in 2010. One has to show that outcomes have improved above and beyond the pre-existing trend-line.)
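
Here is a minimal sketch of what "above and beyond the trend-line" means in practice, using entirely made-up life-expectancy numbers that improve at a steady rate by construction:

```python
import numpy as np

years = np.arange(2000, 2017)
life_expectancy = 76.8 + 0.15 * (years - 2000)   # invented, steadily improving data

pre_law = years < 2010                           # the law passes in 2010
slope, intercept = np.polyfit(years[pre_law], life_expectancy[pre_law], 1)

# A naive before/after comparison credits the law with the whole improvement...
print(life_expectancy[years == 2016] - life_expectancy[years == 2009])  # ~1.05 years

# ...but the honest comparison is post-2010 outcomes against the extrapolated
# pre-2010 trend-line. With these made-up data the gap is zero by construction.
projected = slope * years[~pre_law] + intercept
print(np.round(life_expectancy[~pre_law] - projected, 3))  # all ~0.0
```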

One might have answered something like, “The world is really complicated, and a lot of things affect the overall health of any population. I’m unwilling to make such a prediction.” This would have been an interesting admission. If the major supporters of the ACA thought that the benefits are too small to measure, or so small they would be swamped by noise, they should have at least said so. Some of us think it's unwise to waste massive resources in pursuit of benefits that are speculative or invisible. 

One might have given another sort of answer, something like, “The law would work if implemented, but stupid Republican states will fail to implement it and stupid Republicans in a future Congress will hamstring the law.” I can imagine someone being tempted to issue this hedge, but again this is something that needs to be stated ahead of time. The person giving this kind of answer is predicting failure, and failure based on something endogenous to the system. If the failure of public policy is that predictable, we should oppose such policy. Your political initiative might have worked if it weren’t for that incorrigible opposition party, just as your lunar program might have worked were it not for stupid gravity!

Today it looks like some kind of repeal is imminent, and doomsayers are predicting some kind of blood-bath as people lose their insurance coverage. I seriously doubt it, but I am willing to hear from anyone capable of discussing this at room temperature. What *precisely* do you think will happen?

My own view is that health policy doesn’t matter all that much in terms of getting actual health outcomes, although bad policy can certainly be very expensive and saddle us with enormous burdens. The social science is pretty clear on this point. An individual’s “insured status” has little correlation with health after you’ve made the appropriate demographic adjustments; it simply isn’t true that insuring someone grants them “access to healthcare” in a way that makes them healthier. (Or more precisely, a population of such people won’t get healthier; any one individual might get healthier or sicker. But it’s the population effect, not some individual’s health outcome, that tells you something about causation.) I have a long reading list for anyone who doubts this. Start with Cut Medicine In Half by Robin Hanson, and do go on to read the entire discussion. (Cato Unbound, where the essay was hosted, is a forum. In this particular one there are three other health policy experts who, while they never actually say Hanson is wrong, take issue with his claims.) Read In Excellent Health by Scott Atlas; here is a podcast of him discussing the book on Econtalk. Also check out Overtreated by Shannon Brownlee, Catastrophic Care by David Goldhill, Crisis of Abundance by Arnold Kling, Affordable Excellence: The Singapore Healthcare Story by William A Haseltine, Priceless by John Goodman, and I’m sure I’m forgetting a few others. Also listen to any episode of Russ Roberts’s Econtalk that talks about healthcare, especially anything with Arnold Kling or Robin Hanson. Most people balk at the claim that people don’t get healthier when you give them a bunch of free medicine, but I’m on very solid ground here. Go review the literature a little if you're skeptical. I'll wait. Given all this, the obsessive fixation on getting healthcare to “the poor” is misguided. This is mostly not a fight about access to healthcare, but rather about who pays for what and how.

My own prediction would be something like: Policy tweaks of the "pass/repeal the ACA" kind won't noticeably affect health outcomes (although much larger restructurings might). Policy tweaks that cause patients to face a larger proportion of the bill would result in cost savings with no measurable effect on health outcomes. I could try to be more specific if a specific proposal is on the table. But the people who are predicting disaster from the repeal of a seven-year-old healthcare bill look positively daffy. In terms of evidence-based policy, they don't have a leg to stand on. 

Saturday, January 14, 2017

Pet Peeves of a Data Analyst

[Edit: I wrote this a few days ago then reread just before sharing. It comes off as more sneering and angry than I would like, though I was mostly in a good mood when I wrote it. I'm sharing anyway, recognizing that it could probably be written in a way that's more understanding to the peeve-inducers. I tried, when I could, to be constructive rather than abusive.]

I work with large datasets for a living, doing all manner of statistical analysis. My work ranges from simple summary statistics to model building and feature engineering on big datasets. I thought I’d list a few pet peeves of mine.

1)      Fake Precision on Noisy Data. Someone always pipes up with “Did you adjust/control for X?” (X being a hobby horse of theirs). Sometimes this is welcome, but sometimes the dataset is too small, and therefore not credible enough, for refined adjustments to matter. Sometimes the “analysis” is basically fitting a line through some random dots. If you had infinite data, the dots would *not* be random, but would form a noticeable pattern of some sort (linear or otherwise). But you have to work with the data you actually have. If your data are too sparse to see the real pattern, any “adjustment” you make, even if there’s some good theoretical reason for making it, isn’t going to matter. It just means you’re fitting a line through *this* set of random dots rather than *that* set. I’ve also seen maddening arguments about whether some set of points should be fitted with a line or a curve or some other sort of grouping, when in reality the data points are too noisy to make any such determination.

2)      Last-Minute Demands on a Big Dataset. It’s often said that data modeling is 90% data gathering/cleaning and 10% model building. So it’s a huge headache when someone has a bright idea for a last-minute insertion. Sometimes this is the fault of the modelers, but usually it’s wishy-washy management deciding at the very last second that something they just thought of is very, very important. “Wait, will we be including social media history in our analysis of auto accident frequency? I didn’t see it in the list of variables. Let’s add it!” The data modeling people sigh at these kinds of requests, because it usually means a few days of additional data gathering and a delay in a (perhaps already determined) modeling schedule. Data modelers should keep lines of communication open and set some kind of “no further adjustments” date so that this doesn’t happen. But it probably will anyway.

3)      “Well, I did this at my former company…” Sometimes a person will move from a large company with millions of customers and a huge dataset to a small or medium-sized company. They often bring with them unrealistic expectations. With larger datasets it is easier to see very weak patterns that might be invisible on a smaller dataset. The pattern may be real, but you won’t see it in your data. At my company, you have to assemble several years' worth of customer data, in countrywide aggregates, to see real patterns. A much larger company might have 20% of the adult population as its customers, and can thus see real patterns in a single quarter’s worth of data in a single state or region. If a manager gets used to the latter, they will have unreasonable expectations if they move to a smaller company. Sometimes this might be a matter not of data size but of expertise. If your previous company had three data science PhDs, a team of actuaries, and a bunch of SQL experts, but your new company has a bunch of Excel jockeys who barely know how to run an Excel regression, you won’t be able to implement all your awesome ideas at the new company. At the very least, it may take some time to develop the appropriate skill set.

4)      Ignoring the modeling results and expecting there to be no consequences. Data models sometimes give us surprising results. That’s why we build them. If we knew all the patterns ahead of time, there would be no need for the modeling. But I sometimes get requests such as: “I’m going to ignore a piece of your modeling results. Re-run your model so that I’m still right.” For example, a model might tell you that people above age 60 are at increased risk of an auto accident, but a manager objects that this is their target market. They may ask you to re-run the model without including age as a predictive variable, hoping that the new model will offset their (possibly unwise) business decision. (This is a completely made-up example that is vaguely similar to something that might actually happen.) There may be some rare instances where this is appropriate, but managers need to understand that they are throwing away predictive power with these business decisions. If you throw a predictive variable out of your model, you are throwing away predictive power and you can’t get it back. Worse, the model will try to find the influence of that variable elsewhere, so you may end up with a model that is altogether weaker. For example, suppose I have a multivariable regression of auto accident frequency on driver age and other variables; if I throw driver age out of the model, the other variables that correlate with driver age will adjust to try to pick up the lost signal (see the sketch at the end of this list). It’s probably better to keep driver age in the model and treat it as a “control variable” that won’t count against the driver (because you won't charge the customer based on their age or something). At any rate, it’s delusional to think that you can throw away predictive information and somehow totally offset this decision.

5)      Hobby horsing (again). As in, “Hey, I did X once and it was important and really made a big difference. Did you do X?” This overlaps with 3), obviously. Just because your brilliant insight saved the day once doesn’t mean it’s going to matter every time. It’s extremely annoying when people try to shoehorn their awesome idea into every single project.

6)      Finding a bullshit reason to discredit something. Often someone will object to an analysis the conclusion of which they dislike. A much more toxic dynamic is when someone dislikes a specific *person* and looks for stupid reasons to discredit their work. This is incredibly demoralizing to the data people and managers need to be very aware of when they are doing it. To guard against this, a data modeler should anticipate such objections and be ready to answer them, even going so far as to prepare for a specific person known to be an incredulous hard-ass.

7)      Someone gets mad at you for finding something inconvenient. I understand when someone responds incredulously to incredible results. Sometimes the data modeler really did goof. But sometimes the incredulity persists after all the objections are answered. “Yes, I adjusted for this. Yes, I controlled for that. Yes, I filtered for those.” After all this, perhaps you *still* conclude that your target market doesn’t deserve that huge discount you’ve been giving them, or your giant marketing initiative didn’t work. Accept the results and move on. I once had a boss who simply could not accept inconvenient results. He would come up with bullshit adjustments or filters or something hoping to get the results he wanted. It felt like I was being punished for bringing him bad news. Don’t be that guy.

8)      The regulatory state. I work in the insurance business as a research actuary. I am often in charge of crafting responses to regulators, sometimes filling out standard filing forms. Every single state (except Wyoming) requires that you file your rate plan with the state department of insurance (DOI), and every state DOI reserves the right to object to any filing. There is often a painful back-and-forth where regulators ask annoying questions and the insurance company tries to answer them. Sometimes the objections are based on the violation of a specific statute, and sometimes the reasons for objecting are far more capricious. They often have no statutory authority for their objections, or authority that comes from a lame catch-all (such as a vague law saying that rates must be “actuarially sound”, “not unfairly discriminatory”, etc.). This is a major pain for data modelers. I’m not making a libertarian stand here; if the state outlaws race-based discrimination and asks for reasonable proof that a model is not engaging in any such discrimination, I don’t object to that. My major gripe is that many of these departments are decades behind the latest modeling methods. Their standardized form questions often betray their ignorance.* A standard filing form from one state (see footnote below for additional detail) was littered with questions that looked like they were copied from a standard textbook on traditional linear models (very old school) but which are irrelevant to generalized linear models (glms, very commonly used in my industry). It’s like the actuaries at that department went to a predictive modeling seminar for a day or two and came back thinking they were experts on the topic. Then they copied some wording from the session handouts and turned it into an official state document. The non-standard questions we get from state DOIs are no better. Almost every filing is met with an “objection letter,” in which a DOI employee asks questions specific to the filing. One such question was (and I swear to you I am paraphrasing only slightly here): “What is a multivariate model?” Anyone remotely knowledgeable would have known that the term "multivariate" was a reference to glms, which almost every insurance company uses. These non-practitioners (I nearly said non-experts, but that would be a woeful understatement) have pathetically little knowledge of what they are actually regulating. I suspect this is the same in other industries. The latest, most cutting-edge methods must be justified, but must comply with decades-old language written for a different purpose. Such cutting edge tech must be explained to laymen who have the final decision. Once again I’m not here to critique government regulation in general. I just think it’s not too much to ask that government employees understand the thing they are regulating. And if a government agency cannot afford to keep such expertise on retainer, they need to relax or repeal those regulations.  This single factor is a huge barrier to innovation. If we can’t use a model unless it can be explained to an ignorant non-practitioner, that severely restricts what we can do. Another problem is that regulators sometimes demand a very specific statistical test when in practice something might be more of a judgment call. Data modeling is a process that requires a great deal of human judgment. What variables should I include? How should I group things? What kind of curve should I fit? 
Regulators often demand an unreasonable sort of rigor in the model-building process where all of this judgment is stripped away and replaced with an unalterable decision tree. Such a process will often lead to models with nonsensical results that any reasonable person can spot and fix, but the regulators interpret any insertion of human judgment as an attempt to be devious.
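
Here is the sketch promised in item 4: a minimal simulation with made-up data (the coefficients, the correlation between age and years licensed, and the sample size are all invented) showing how a dropped predictor's signal migrates to whatever correlates with it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical book of business: accident frequency truly depends on driver age,
# and "years licensed" is strongly correlated with age but adds nothing on its own.
age = rng.uniform(18, 80, n)
years_licensed = age - 16 + rng.normal(0, 2, n)
frequency = 0.30 - 0.002 * age + rng.normal(0, 0.05, n)

X_full = np.column_stack([np.ones(n), age, years_licensed])
X_no_age = np.column_stack([np.ones(n), years_licensed])

beta_full, *_ = np.linalg.lstsq(X_full, frequency, rcond=None)
beta_no_age, *_ = np.linalg.lstsq(X_no_age, frequency, rcond=None)

# With age in the model, years_licensed gets roughly zero weight; drop age and
# years_licensed picks up the lost signal, so the decision to ignore age is
# undone anyway -- just less accurately.
print(np.round(beta_full, 4))    # ~[0.30, -0.002, 0.0]
print(np.round(beta_no_age, 4))  # ~[0.27, -0.002]
```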


This list is by no means exhaustive. Obviously it could be longer, or shorter, but these were the peeves that occurred to me on one evening. 

* A questionnaire on generalized linear models (glms) in one state asks about tests for “homoscedasticity”, meaning that the variance does not vary with the expected mean. (When that assumption is violated, the residuals spread out more on one side of a residuals plot rather than having a roughly constant standard deviation across the range.) This is a topic in traditional linear models, but the power of a glm is that you can relax the “constant variance” assumption. (You can make your error term Gamma, Poisson, Tweedie, etc., rather than the traditional Gaussian that implies a constant expected variance.) Apologies if these technical details are confusing to the uninitiated, but this is really basic stuff as far as glms go. And every company is using these now.
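
For anyone who wants to see the distinction in code, here is a minimal sketch using statsmodels with invented claim-severity data (the coefficients, sample size, and gamma shape are all assumptions of mine):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 10_000

# Invented claim-severity data: the mean depends on a rating variable, and the
# variance grows with the square of the mean, as gamma-distributed severities do.
x = rng.uniform(0, 1, n)
mu = np.exp(6.0 + 1.5 * x)               # true mean severity
shape = 2.0
severity = rng.gamma(shape, mu / shape)  # Var = mu**2 / shape: not constant

X = sm.add_constant(x)

# A gamma GLM with a log link builds in Var ~ mu**2, so "did you test for
# homoscedasticity?" is the wrong question -- constant variance was never assumed.
gamma_fit = sm.GLM(severity, X,
                   family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(gamma_fit.params)   # recovers roughly [6.0, 1.5]
```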

Libertarians and Social Justice: A Sticking Point

Libertarians and the social justice left quite often come to the same policy conclusions. So why are they so often at each other’s throats?*

To take an example, most libertarians and most people who identify with the social justice left believe in gay marriage. If we were transported back to an era where interracial marriage was an issue, you’d see the same sort of agreement on policy. I think the problem is that the two tribes arrive at the pro-gay-marriage position through different kinds of arguments. Libertarians believe in equality under the law and an untrammeled right to free association. “Gay marriage” isn’t a specific item; it’s a subset of a larger right of adults to freely associate with one another. The social justice left arrives at its pro-gay-marriage position through a different sort of argument, one that certainly doesn’t embrace a general right of free association. I'd probably fail to articulate this line of reasoning exactly, but it surely has something to do with certain classes of victims being historically oppressed and deserving special consideration to counterbalance this history. Nevertheless, we have agreement on a large collection of policies, so there *should* be a viable coalition here.

I think this is the rub. Libertarians want the social justice folks to explain what principle underlies "the right to gay marriage," because such a principle would entail many other rights, some of which the left is hostile to. Social justice folks bristle at this, because the struggle for gay marriage is an important fight by an oppressed minority. They might react with, "How dare you bring up something so trite as property rights? How dare you compare the right of a couple to marry to the right of a business owner to discriminate against unwanted patrons?" Both tribes are fishing for some sign that the other can be trusted. If only the social justice warrior would affirm a much more general right of free association and free transaction between consenting adults, the libertarian could trust him not to support a bunch of illiberal policies justified with back-fit ad hoc reasoning. If only the libertarian would signal his unwavering support for this oppressed minority group, the social justice warrior could trust him not to betray the cause when high-minded principles get in the way of the next fight.

To take another example, the left often couches its criminal justice policy positions in terms of disparate impacts on minorities. To a libertarian, an injustice is an injustice regardless of how disparate the impact is. If you tell me a million people are unduly or unlawfully harassed by police, with some fraction of those unjustly arrested and a few dozen beaten or shot, I shouldn't care any more or less when you tell me the races of the victims. To the social justice warrior the root cause is racism or some other kind of bigotry, so they see it as obtuse or evasive when someone fails to clearly proclaim that this is the problem. In this view, fix the bigotry and the bad policies will go away. To the libertarian, things like the war on drugs and stop-and-frisk are unjust in and of themselves. If you got rid of these underlying policies, the racial disparity in their application would obviously go away. In this view, it's obtuse to talk about the racial impact of unjust policing when the policy fix is so obvious, and it's demeaning to imply that a non-minority victim's suffering at the hands of police should count less.

I don't know what it would take to get a functioning political coalition of libertarians and social justice types. If you read my blog, it's fairly obvious I come down on the libertarian side of this split. I don't want to give a self-flattering answer like "The solution is for lefties to become libertarians." Nor do I want to offer a lame split-the-difference compromise. My main purpose in this post is to articulate the reason why an otherwise obvious alliance doesn't form. I don't think it's a fatal disagreement, though, because all political coalitions contain subgroups that distrust or hate each other while sometimes arriving at specific policy agreements. Perhaps my explanation above proves too much. In that case, what's really going on here?

*A mild disclaimer. I don't mean to imply that there aren't any social justice libertarians, nor that no social justice leftists have libertarian leanings. Clearly there's some overlap between the two groups. I'm trying to highlight an area of discord, so forgive me if my discussion above treated two overlapping classes as purer than they really are.

Wednesday, January 11, 2017

Good Comments

What is a “good” comment anyway?

I wrote a post a while ago listing all the bad commenting habits that I dislike the most. Around the same time, a blogger whom I know on Facebook said (writing on FB, not on his blog) that he’s been disappointed with the quality of comments on his blog. I think this is probably a common problem. A select few blogs seem to garner very good comments. Steven Landsburg’s blog The Big Questions has the very highest quality comments section of any blog I read. Scott Alexander’s blog Slate Star Codex also has a great comments section. I don’t go to Less Wrong very often, but it’s about as high-quality as Slate Star Codex. Econlog is also near the top of the list, which I believe is due to some thorough (sometimes overzealous) moderating. Marginal Revolution would benefit from more active moderation, but it does manage to attract a lot of good comments (if you can perform the task of filtering out the bad ones yourself). But even in these stand-out comment sections, the average comment is still pretty bad and probably not worth reading, let alone responding to. There are always people veering a little too far off topic, or behaving rudely to the other commenters, or making bad arguments, or carelessly misreading the original post.

So I got to thinking: You’re posting controversial ideas on the internet and inviting anonymous readers to comment? What could possibly go right? I thought it would be helpful to write a companion piece to my Bad Comments list. These are the ways that a comment thread can actually go right.

1)      Correcting a material fact in someone’s argument. Be careful about this one. The correction should be an unambiguous *correction*. If there are dueling studies or conflicting data sources or something, it’s not helpful to say, “You’re wrong; replace your facts with my facts and you’ll be right.” Also, tame the urge to think that facts somehow speak for themselves. All facts require some kind of interpretation in the form of an argument. But if someone is unambiguously mistaken on a point of fact, they might actually appreciate and thank you for the correction. It sounds crazy, but I’ve seen it happen.

2)      Filling in some important but neglected background. This is similar to 1), but far more ambiguous. If someone, say, quotes the wrong number for annual gun deaths in the US, it’s trivial to look that up and correct it. It’s slightly more of a judgment call to say, “…but really we’re talking about homicides and accidents, not suicides. Also, the recent trends are relevant here…” Sometimes people get their facts right but the context wrong, so if you can fill in the relevant context, that can be useful for the other denizens of the comments section. Judgment is obviously necessary here. Someone might take “filling in context” as carte blanche to expound upon an alternative worldview for several paragraphs. Don’t be that guy.

3)      Pointing out a logical flaw in someone’s argument. By this I mean literally, pedantically correcting a logical error and nothing more. Like, “Whoops, you’ve affirmed the consequent; your argument appears to be ‘If A then B. B. Therefore A.’” Don’t add any snark or berate the author. These things happen. You do them, too. Just make a quick correction and move on. You might even politely suggest that the conclusion is still true, just for reasons other than the argument offered. Whenever possible, *directly* address the argument given; many comments ignore the argument offered while supplying some orthogonal argument for why the conclusion can’t be right. (Here is a good example of what I'm talking about.)

4)      Identify the locus of your disagreement and fixate on that. I have seen far too many comment threads escalate into full-blown flame wars without anyone even specifying what they were claiming. The very good comment threads are the ones in which the commenters stay focused on a particular point of disagreement.

5)      Praise should be specific. No “F***-yeah!”s or “F***-ing A man!” I don’t think anyone wants a bunch of cheerleaders in their comments section. Excessive fist-pumping and high-fiving in the comments section can deter skeptical commenters, exactly the kind that the blogger needs to hear from. Say specifically what you liked and what you thought was a new or useful idea. Try to give the author good fodder for another post. If possible, suggest ways to make a strong post even stronger. If a post has you thinking "F*** yeah! Preach it!" then you should be actively looking for something wrong with it, because it's possible your emotions have gotten the better of you. 

6)      “You might want to read this…” If you have similar interests, it might be useful to suggest reading materials. Books, articles, movies, etc. Of course a quick article is more likely to actually be read than a book. Your recommendation might be appreciated. The standard is going to be much higher for suggesting literature critical or contrary to someone’s worldview. If I’m an anarcho-capitalist who reads a lot of economic arguments for anarcho-capitalism (David Friedman, Murray Rothbard, Pete Leeson), I may want to know if there’s an obscure author I have missed (Bruce Benson for example). On the other hand, such a person won’t voraciously read works written by socialists hoping to stumble on a rare nugget of truth. In the second case, if you can identify the best argued, best articulated short-ish work for a skeptic to read, your knowledge may be appreciated.

There are so many ways for comments to go wrong and so few ways for them to go right. Basically, if you’re going to leave an approving comment, do so without “piling on” or excessive fist-pumping. And if you’re leaving a critical comment, stay on topic and state your criticism clearly and succinctly. Don’t be a dick about it, don’t veer off topic, and don’t impugn the motives of the person you are “correcting.” I have wasted countless hours arguing with people who exhibit poor comment hygiene. It hasn’t all been a complete waste, and I did learn some important lessons from some of these exchanges. But many of them could have and should have been completely avoided.

Another thought occurred to me while writing this. Most of what happens on the internet doesn’t get captured in the comments section. Most of it happens in our minds without leaving a visible trace. If there is a well-written, thoughtful article, most of its readers will simply absorb its content without commentary, perhaps stowing it away in their heads for future reference, or perhaps forgetting about the post itself while still absorbing its lesson. If we’re talking about a blog post, most of the push-back will be from unthoughtful people who simply can’t swallow the conclusion. Maybe my blogger-friend was mistaken to be disappointed by his comments section. The loudest, most impulsive, least reflective individuals are the ones most likely to fire off a comment, so the comments section is a very biased sample of reactions to any given post. (Ever see a comment start with "long time lurker, first time commenter"? Just realize that most of your readers are forever lurkers, but the quality of their thoughts is probably as high as or higher than that of your average commenter.) If you could somehow sample the very private thoughts of your readers, it just might restore your faith in humanity. 

Friday, January 6, 2017

The Animatronic President Show Everybody Wants to See

I want to own a museum that does a robot presidents show. This already exists, but here’s my addition. Animatronic Lincoln gets up when it’s his turn to speak and says, “Hi, boys and girls! I’m Abraham Lincoln, and I've just won the Civil War!” All the other presidents hit the deck. Lincoln looks behind him over one shoulder, then the other, then shrugs. “Anyway, it looks like it’s clear sailing from here, for me AND the blacks. I think I'll celebrate by taking in a show with the Mrs.”

Your imagination can take it from there. Perhaps an animatronic John Wilkes Booth could make an appearance.

Quote from To Serve and Protect by Bruce Benson

The Justice Department recently announced that it would stop using "private" prisons. Scare quotes because nothing that the government pays for is truly private. It reminded me of the following passage:
Some people argue that only the government should have police powers and the power to punish, since such powers in private hands will tend to be abused. In other words, the government must be the monopoly producer of police services, prosecution, and punishment. The fact is, however, that “the government” never actually produces anything. Everything that the government allegedly produces is actually produced by private entities working under contract. They are not owned by the government. They contract to provide their services because they expect to be better off than they would be in an alternative job. The benefits of the bureaucratic job may take many forms, including any pleasure received from helping to produce what a bureaucrat perceives to be in the public interest, a good living to support a family and/or an attractive life style, job security, perhaps pleasure from being in a position of power and authority, and so on. An individual police officer, then, is a private citizen who has been given a tremendous amount of power and discretion, and he is in a position to abuse the power that he is given. After all, since he is part of an organization with virtual monopoly power over the right to coerce, there is relatively little to constrain his tendencies to abuse his position. Not surprisingly, many types of abuse (corruption, physical abuse, falsification of evidence, etc.) occur in great numbers…

In light of this discussion, the normative view that government must be the only organization with police and punishment powers, for the fear that private entities might abuse such powers, really does not make that much sense. The fact that the idea of government production is a fiction actually implies that it makes more sense to have competitive options in order to constrain the ability of individuals to abuse power… In the context of the present discussion, however, this implies that contracting out must occur at some level.

This is an excerpt from the 2nd chapter of Bruce Benson's "To Serve and Protect." This is an important point. "Privatization" is not a yes-or-no question; all government services are ultimately supplied by private individuals. The question remains: should you have the option to deny the police your business if they fail to serve you, or should you be unconditionally subjected to their rules (and be forced to pay for their services even though you don't want them)? Should problem officers be difficult to fire (as public employees invariably are), or should they be easy to fire (as private employees typically are)?

There is nothing special about government provision that allows us to specify the outcome and achieve that outcome with any certainty. This is a point that libertarians like myself understand; perhaps we're bad at communicating it. Public provision and private provision are both subject to uncertainty. Either has the potential to harm the consumer. The real question is which system can be expected to perform better.


Anyway, I was sort of scratching my head at all the celebration over the Justice Department's announcement. It will still be the case that the federal government decides that certain people should be detained in prisons, and it will still be the case that the federal government pays certain other people to detain those prisoners. I don't really buy the framing that this is a categorically different way of doing things. 

Your Ideological Opponents are Idiots, says SCIENCE!

Now we get to a finding that sounds more tendentious than it is; smarter people are more liberal. The statement will make conservatives see red, not just because it seems to impugn their intelligence but because they can legitimately complain that many social scientists (who are overwhelmingly liberal or leftist) use their research to take cheap shots at the right, studying conservatism as if it were a mental defect.

This is from “The Better Angels of Our Nature” by Steven Pinker. He clarifies in the next paragraph that he means *classical* liberalism, which is more like libertarianism than American-style liberalism. (Modern American “liberalism” is often quite illiberal.) I thought this passage was funny because I probably see a nasty article on my Facebook feed once a week or so claiming that “conservatives” (or perhaps some other group singled out for opprobrium) are less intelligent, or more impulsive, or (insert major character flaw here). We proved it! Using SCIENCE! There may be a nugget of truth or some real but small correlation underlying these stories. But I think it shows really bad faith to frame your political/cultural opponents as some sort of pathology that can be uncovered with a brain scan or treated with vitamins or something.


Seriously, stop doing this, people. If you are imputing wicked motives or mental disorders to everyone you disagree with, you probably aren’t thinking very clearly. The person across from you is probably making a fair point, perhaps imperfectly, perhaps inarticulately. But somewhere underlying everything there is a point, and you would profit from understanding it. 

Word Clouds as GOTCHA!s

Word clouds are fun. They’re the most obvious kind of “analysis” you can do with unstructured text data. But I don’t like how they are sometimes used as a “gotcha!” As in “Ha! You use this word way too much!” Or “Ha! By using these words a lot and not using those words as much, we can see that your priorities are all screwed up!” Sheer word-counting is easy enough to do, but it’s not very insightful. 
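To show just how shallow that “analysis” is, here is a minimal sketch in Python. The sample text is invented; the point is only that a word cloud is a frequency count with fonts and colors attached.

```python
import re
from collections import Counter

# Invented sample text -- a stand-in for whatever speech is being "analyzed."
text = "The quick brown fox jumps over the lazy dog, and the fox jumps again."

# Tokenize crudely and count. This frequency table *is* the word cloud,
# minus the pretty typography.
words = re.findall(r"[a-z']+", text.lower())
counts = Counter(words)

print(counts.most_common(5))
# e.g. [('the', 3), ('fox', 2), ('jumps', 2), ('quick', 1), ('brown', 1)]
```

Anyone can produce this in a minute, which is why reading a raw frequency list as a revelation about someone’s priorities is a stretch.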

Wednesday, January 4, 2017

What To Blog About?

I’m running out of interesting topics. A lot of my posts from last year were about the economics of drug prohibition and recent trends in drug poisoning deaths (or “drug poisoning” deaths if, like me, you don’t believe these are all classified correctly). I want to maintain this blog and keep it interesting, but I have a strong impulse to quash any idea if it doesn’t pan out into a highly original post. Thus the lower frequency of blogging recently. I have several options and they have various trade-offs.

  1. Write a lot more posts on whatever is on my mind that day, even if it’s something I’ve talked about at length before. Perhaps I have ten or so heavily overlapping posts on income inequality, but if some new study or politician’s speech happens to be in the news, I’ll reiterate those points just because it’s bothering me right now.
  2. Link to other stories and blogs with minimal commentary, just because I found something interesting.
  3. Accept that I have nothing new to say and say exactly that.


For 1) and 2), I usually just go to Facebook and fire off a quick share with minimal commentary, so I'm already getting the utility of this elsewhere. I also risk being extremely repetitive. But maybe that’s necessary if the same old fallacies keep coming up in my news feed; if they are being recycled with some regular frequency, then perhaps a periodic debunking is warranted. Some arguments become “tired” simply because nobody answers them. With 3) I risk long dry spells, which could mean falling completely out of the habit of blogging and stopping altogether.

I read a lot of books, often with the intention of reviewing some of them, but for some reason I rarely get around to writing the reviews. For one thing, I do a lot of my “reading” on audiobook, so I don’t have the text handy when I want to excerpt something (unless I can find the passage in Amazon’s preview or Google Books, or somewhere else online where someone happened to quote the exact same passage). I’ll try to share a little more of what I’ve been reading without feeling the need to do an in-depth book review.


Any topic requests from my readers? Don’t interpret my asking as a promise to blog about any topic, but if there’s something interesting that’s worth discussing I can at least be thinking about it.