Monday, October 30, 2017

A Modest Sandwich Tax: the Economics of Deterring Self-harm

You are the Czar.  You have the power to unilaterally make policy. You want people to stop eating sub sandwiches. The average cost of a sandwich is $7. You propose to impose a $1 tax on sandwiches, dust off your hands, and say, “Problem solved.” A trusted adviser points out that this is a relatively small price increase, and will probably only cause a small reduction in sandwich consumption. You scowl, cup your chin, and say, “Point taken.” Then you impose a $7 tax on each sandwich. Your trusted adviser points out that this will have a more substantial effect, but at the cost of doing more harm to the remaining sandwich users. Some will continue to use sandwiches just as before, but now they are being squeezed with the tax. It does no good to ramp up the tax. You will have fewer sandwich eaters, but each paying a much higher cost for their sandwich habit. “What,” your adviser asks, “is the problem you’re trying to solve anyway? Quantify it. Are these sandwich-abusers doing self-harm to the tune of $2 per sandwich? $5 per sandwich?” You think it’s big. Comparable to the dollar-price. Say it’s $7-per-sandwich, because of gluten and obesity-related issues. With self-harm being this large, surely that justifies your sandwich tax? Your adviser points out that, no, a larger value here implies that it will be even harder to deter sandwich users. If they are really paying $14 per sandwich already ($7 in cash plus $7 worth of self-harm, appropriately quantified and converted to dollars), then the $7 sandwich tax is only a 50% price increase. And once again, ramping up the tax harms the remaining users even if it “helps” those who quit. You at first think your adviser is playing some having-it-both-ways trick, but then you realize that there really is a fundamental trade-off. A smaller tax will deter fewer sandwich eaters but be less harmful to the remaining sandwich eaters. A bigger tax will deter more sandwich eaters but be more harmful to the remaining sandwich eaters. 
(A final thought occurs. Maybe the Czar, even in his infinite wisdom, is mistaken about the hazards of gluten and the causes of obesity?)
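The adviser's arithmetic is easy to check with a quick sketch. The dollar figures are the ones from the story; the constant-elasticity demand curve is my own illustrative assumption, not anything the Czar measured:

```python
# Effective price of a sandwich = cash price + self-harm cost, per the story.
cash_price = 7.0   # dollars
self_harm = 7.0    # self-harm, converted to dollars (the Czar's estimate)
tax = 7.0

before = cash_price + self_harm   # 14.0: what eaters already "pay" per sandwich
after = before + tax              # 21.0
ratio = after / before            # 1.5, i.e. only a 50% effective price increase
print(f"effective price increase: {ratio - 1:.0%}")

# Under constant-elasticity demand Q = A * P**(-e), a price ratio r
# scales quantity by r**(-e). (Illustrative functional form, not data.)
for e in (0.5, 1.0, 2.0):
    remaining = ratio ** (-e)
    print(f"elasticity {e}: {remaining:.0%} of eaters remain, "
          f"each paying ${tax:.0f} more per sandwich")
```

The larger the self-harm estimate, the closer `ratio` gets to 1 for any given tax, and the weaker the deterrent effect, which is exactly the adviser's point.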

You now observe a more serious problem. Some people seem to enjoy bonking themselves on the head with a mallet. Not like a fatal blow with a hard metal hammer, but a mild, dizzying shot with a padded weapon. It's part of some kind of game, the appeal of which you fail to understand. You insist on putting a stop to this at once with a severe penalty. Anyone who indulges in self-bonking will be punished with a stiff fine and a brief stay in jail. But your trusted adviser reminds you of your previous conversation about discouraging sandwich eating. “Convert the penalty to mallet-bonks. What would you say it amounts to?” You think about it and say that the penalty amounts to perhaps a single mild mallet-bonk. Your adviser points out that the penalty effectively doubles the price of self-bonking (assuming an implausible 100% chance of detection and conviction). You probably deter about half of them, but the remaining half suffer twice the harm. On average, you haven’t helped this group at all. (It could be much worse than this if the elasticity of demand for mallet-bonking is low, so that doubling the price reduces bonking by less than 50%. In that case, with more than 50% of previous users paying double the cost, your penalty has made this population worse off as a whole. If demand elasticity is high, that could make your penalty look better. But it’s kind of implausible to think that people doing serious self-harm have highly elastic demand.) Once again, you suspect some kind of having-it-both-ways casuistry on the part of your adviser. I shouldn't try to deter mild self-harm, and now I shouldn't try to deter serious self-harm? But after a while you convince yourself that the degree of self-harm is irrelevant to your adviser's point. If someone is indulging in much more severe forms of self-harm, they are effectively "willing to pay" much more for those habits. Deterring them will require a stiffer penalty. And once again the stiffer penalty doesn't "help." 
For every person that you deter from self-harm, there is another person whose harm you have doubled. And with inelastic demand it's actually worse than this. More like, for every person that you deter from self-harm, there is another person whose harm you have tripled. 
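The "deter one person, double another's harm" bookkeeping can be made explicit with a toy calculation. The population size and the one-bonk penalty are illustrative assumptions, not data from the post:

```python
# Total harm borne by the group of bonkers, before and after a penalty
# that effectively multiplies the price of each bonk.
def group_harm(population, harm_per_user, price_multiplier, fraction_deterred):
    """The deterred pay nothing (ignoring their lost enjoyment, as the
    post does); the rest each pay harm_per_user * price_multiplier."""
    remaining = population * (1 - fraction_deterred)
    return remaining * harm_per_user * price_multiplier

baseline = group_harm(1000, 1.0, 1.0, 0.0)   # 1000 units of harm, no penalty

# Unit-elastic case: price doubles, half quit, so the group as a whole
# is no better off than before.
print(group_harm(1000, 1.0, 2.0, 0.50))      # 1000.0

# Inelastic case: price doubles but only a quarter quit, so the group
# is worse off in total.
print(group_harm(1000, 1.0, 2.0, 0.25))      # 1500.0
```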

I explore this argument more generally in a previous post. There isn't a way to square this circle without adding in some weird, stilted assumption. Preventing vice-related self-harm by penalizing vices just doesn't work. Maybe people are far more sensitive to the legal penalty than the other costs (even when these things are of similar magnitude)? Maybe, but why would that be true? Maybe there are costs to third parties? ("Externalities" in the language of economics.) Sure, but there are bound to be externalities due to prohibition as well. Besides, most of the so-called externalities due to vices (particularly drugs and alcohol) are already internalized. A more elegant and direct approach is to target the externalities themselves. For example, issue stiffer penalties for property crimes and intoxicated driving in particular, not for intoxication in general. At any rate, we shouldn't fool ourselves into thinking we're helping people who indulge in dangerous hobbies by compelling (some of) them to quit. If any of this was too glib (and I think this final paragraph has given me away), this argument has serious implications for drug prohibition. A tax on harmful vices wouldn't be quite as bad as a legal penalty (police harassment/arrest/jailing), but in either case the people who indulge those vices are hurt more than they are helped. Perhaps education or mandatory licensing would at least help people figure out if their risk-taking is rational. But once people make up their minds to hurt themselves, it seems pointless and cruel to pile on. 

Minimum Wage Arguments

I recently came across this excellent piece on the minimum wage. Actually, it is more generally a discussion of the so-called bargaining power of big businesses:
Indeed, despite being cited by the CEA report, the previous Furman research, coauthored by Peter Orszag, stands in contrast to the basic monopsony theory. In the previous piece, they argue that lack of competition is giving some firms economic rents, and this is boosting pay at those firms by so much that it significantly contributes to inequality. So, are big firms bargaining down wages for their workers or are they generating massive rents that are boosting the pay of their workers? The reconciliation between these two theories is incomplete at best.

Second, if big firms are bargaining down wages then why do labor economists consistently find a large firm wage premium? To take one example from many, one recent study on retailers found that after controlling for individual and store characteristics, firms with at least 1,000 employees pay 9% to 11% more than those employing 10 or fewer.

Third, if firms’ bargaining power over their employees is growing, then why are they increasingly contracting out for work? Lawrence Katz and Alan Krueger argue that from 2005 to 2015, the share of workers hired out through contract companies grew from 0.6% to 3.1%. A company with labor market power wouldn’t want to contract out work to another company. They’d want to hire workers directly to take advantage of that power.

Fourth, the CEA report points to the minimum wage literature as evidence of monopsony power. Leaving aside the debate over whether the minimum wage reduces employment (I say yes, the report says no) the literature clearly shows that the minimum wage increases prices. As Daniel Aaronson and Eric French have pointed out, the monopsony model implies that the minimum wage should increase employment and output, thereby decreasing prices. That prices rise is inconsistent with the monopsony model.
It reminded me of something I hate about the minimum wage debate: it is so one-dimensional. Even the "sophisticated" version of it, made by economists who favor the minimum wage, goes something like, "Large firms have monopsony (single-buyer) power and therefore can bid down wages below what is economically efficient." If this is correct, then a minimum wage might be justified. (Still, only under certain conditions.) So they create some theoretical model of employers having monopsony power, or they empirically measure something that sounds like market power (industry share, company size, capital share of GDP) and insist that it suggests monopsony power. Combine this with the mixed empirical record of minimum wages on low-skilled unemployment, and you have a plausible "market power" story. This argument seems obsessed with the question of whether increasing the minimum wage does or does not have a measurable effect on unemployment, as if that were the only relevant point. Opponents of the minimum wage say, "You are putting people out of work." Proponents of the minimum wage say, "No, look at these studies. (Ahem, ignore those other studies over there for the moment.) This pared-down list of studies shows no effect on employment. Therefore there's no reason not to raise it." But it's not enough to point out a few pieces of evidence that are consistent with the "monopsony" story. That story makes a lot of predictions, and the Adam Ozimek piece above points out that many of those predictions fail. For much, much more in the same vein, see Don Boudreaux (who has been absolutely dogged in this fight) quoting an excellent comment by an anonymous economist. Basically, the point is, "Market power? Monopsony, you say? Great. Go explain all this other stuff, too."

I want to take this "Prove to me that there's a disemployment effect" line and turn it on its head. I've made this argument before in a previous post. It's not on us minimum wage skeptics to "prove" that minimum wages have all the harmful effects predicted by economic theory. The onus is on the minimum wage proponents to demonstrate empirically that raising the minimum wage has benefits. That's what you are supposed to do for any kind of treatment, be it medicine or policy: prove that it works. Prove that it's safe. See the table from my previous post; I pulled it from Minimum Wages by David Neumark and William L. Wascher. Sure, you get a mixed empirical record on the unemployment question. (Anyone who says there is "no evidence" of an unemployment effect is distorting the empirical record pretty badly. There is evidence of sizable effects, evidence of small effects, evidence of no effect, and evidence of small positive effects. Some take this mixed record to mean the unemployment effect doesn't exist at all, which arbitrarily (selectively?) edits out most of the literature.) But it's not like you get empirical validation of the benefits suggested by minimum wage proponents. You don't see a reduction in poverty, or a more equitable income distribution (whatever that might mean). It's not sufficient for minimum wage proponents to argue that the negative side-effects don't materialize. If they are so dedicated to empirical validation, they should be proving that the benefits actually do materialize.

Wednesday, October 25, 2017

Price Discrimination In Insurance

In previous posts I have defended the practice of “price discrimination,” charging different prices for the same service to different customers based on their willingness to pay. Without it many mutually beneficial transactions, perfectly satisfactory to the customer and profitable to the producer, would not happen. Some industries would run at much higher costs, innovation would happen more slowly, and conceivably some business models or even industries wouldn’t exist at all. It makes perfect sense for businesses to offer a high-ball “sticker price,” then try to attract the marginal customer with various sneaky discounts for the same product.

Have you heard about our senior discount?

The software suite is $150. Oh, you’re a poor college student, you say? I mean it’s only $25.

That’ll be $425 for the frames plus the lenses. Of course that’s with the anti-reflective coating. Without the coating, it’s only $250. Oops, they went ahead and manufactured them with the coating anyway. We’ll give you that for free, then.

Examples abound.

The insurance industry is finally getting into this game, within the past decade or so. Insurance companies know a lot of details about their customers. They have big databases of every single customer including dozens or even hundreds of attributes. They make frequent changes to their rates, and they can observe when people leave their company for another insurer. They can use this kind of data to create predictive models that estimate price elasticity. In other words, “What is the probability that Bob will renew his policy if I increase his rates 5%? 10%? 15%?” By doing this on a large portfolio of insurance customers, they can figure out what the effect of a rate change will be on customer retention. In fact, they can use this kind of information to decide how big of a rate change to take. Or (much more likely) for a given rate change, how should this rate be spread around to their various customers? If I know I need $1 million in additional premium, this kind of elasticity measurement can help me minimize the disruption by giving price-insensitive customers the biggest rate increases and price-sensitive customers the smallest increases. It sounds almost sinister, until you realize that every other industry does this in some way. Still, within the industry many consider this a big no-no. It’s called “price optimization,” and it’s sort of a political hot-button right now.
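The "will Bob renew at +5%? +10%? +15%?" question is typically answered with a retention model. Here is a minimal sketch; the logistic functional form, the base renewal logit, and the customer sensitivities are all my own illustrative assumptions, not any insurer's actual model:

```python
import math

def renewal_prob(rate_increase, sensitivity, base_logit=2.0):
    """Probability a customer renews; it falls logistically as the rate
    increase grows, with a per-customer sensitivity parameter."""
    return 1.0 / (1.0 + math.exp(-(base_logit - sensitivity * rate_increase * 100)))

# (name, current premium, price sensitivity): invented customers, not data.
customers = [("Bob", 1200.0, 0.05), ("Ann", 900.0, 0.30)]

for name, premium, s in customers:
    for inc in (0.05, 0.10, 0.15):
        p = renewal_prob(inc, s)
        print(f"{name} (${premium:.0f}), +{inc:.0%}: renews with prob {p:.2f}")
```

Run over an entire book of business, estimates like these tell you where a needed premium increase can be taken with the least expected loss of customers: lean on the Bobs (low sensitivity), go easy on the Anns.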

The barriers to implementing price optimization in the insurance industry are both cultural and political. Actuaries, the people who set insurance rates, all study a common syllabus. We all get our certificates from one of two organizations: the Society of Actuaries and the Casualty Actuarial Society. (That’s in the U.S.; other countries have their own accrediting bodies.) We practice our profession based on a set of “actuarial standards of practice,” one of which defines an “actuarially sound rate.” These standards of practice and the pricing equations that we study all tend to assume that the premium charged for an insurance policy has to be based on actual cost. Suppose two people have different risk attributes (age, credit, prior claims, etc.), but my predictive model says they both have an expected claims cost of $650. I have to charge them both the same rate, at least according to various actuarial equations and standards of practice. But actually, maybe they have the same expected claims costs, but one is likely to be a customer for a single year and then leave while another is likely to renew for the next ten years. The longer-retaining business actually saves me on expenses. I have to underwrite (inspect) both policies when I first write them, and this incurs an up-front business cost. But in one case I amortize those costs over one year and in the other I do so over ten years. The per-year expense is smaller for the policy with longer expected retention, so I can justify a discount based on expense differences.
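The expense arithmetic in that example is simple enough to spell out. The $100 up-front underwriting cost below is a made-up figure; the $650 expected claims cost is the one from the example:

```python
expected_claims = 650.0     # same for both customers, per the example
underwriting_cost = 100.0   # one-time up-front expense (hypothetical figure)

def cost_based_premium(claims, upfront, expected_years):
    """Annual premium: expected claims plus the up-front underwriting
    expense amortized over the policy's expected years on the books."""
    return claims + upfront / expected_years

one_year = cost_based_premium(expected_claims, underwriting_cost, 1)   # 750.0
ten_year = cost_based_premium(expected_claims, underwriting_cost, 10)  # 660.0
print(f"1-year customer: ${one_year:.0f}/yr; 10-year customer: ${ten_year:.0f}/yr")
```

The longer-retaining customer's lower premium here is purely cost-based; no elasticity estimate is involved anywhere.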

Suppose now I have two people with the same expected claims costs and the same expected expenses. What I can't now do is say, “This man's premium is $1,000, but that man's is $1,100 because he is less price sensitive.” Or maybe I can. It's kind of a grey area. People disagree about whether the actuarial standards of practice explicitly forbid this kind of price differentiation. States vary as to whether they allow it or not. Some have specifically passed statutes banning price optimization, so case closed. In other states, there is no specific statute outlawing price optimization. But there are various catch-all statutes that incorporate the definition of an actuarially sound rate. A rate has to cover the expected cost of a risk transfer, it cannot be inadequate, excessive, or unfairly discriminatory, etc. (More here.) A state with such a statute can object to price optimization because it runs afoul of the “unfairly discriminatory” clause (which really means some regulator just doesn't like it), or because the rate isn't explicitly based on the cost of that business. So anyone playing in this area potentially faces the threat of professional and legal sanctions. Objection letters and filing forms often ask perfunctory questions like "Are you practicing price optimization?" or "Do your models predict price elasticity? Do those estimates impact your rating factors?"

Some very confused regulators mistake cost-based pricing for price optimization. In my example above, I explained how someone who retains longer saves us money. That justifies a cost-based discount that has nothing whatsoever to do with that customer's price elasticity. I expect you to retain 1 year longer, thus saving me (say) 5% on expenses and justifying a premium reduction. These are called "lifetime value" discounts, based on expected retention differentials. Price optimization is different, and it often yields results that run in the opposite direction. Price optimization says, "I'm predicting an additional year of expected retention if I give you a 5% rate increase instead of the cost-based 10% increase. So I'll give you the 5%." (Except this is optimized across an entire book of business, not done one-by-one for individual customers.) In the first case you're not trying to affect behavior. You're simply offering a discount based on the calculated cost-savings from a higher retention. In the second case, you're effectively offering a discount to someone who is unlikely to retain unless you give them the discount. This is true price discrimination: tweaking the price in order to gain the marginal customer. These are very different concepts: lifetime value versus price optimization. But I had a recent back-and-forth with a regulator who repeatedly accused us of the latter, despite our numerous clarifications. (He finally gave up and approved our filing.)

Here is how I envision price optimization being used in actual practice. Insurers run predictive models all the time in order to change their rating plans. The rate differentials based on age, credit, gender, mileage, etc. are always changing, and new variables get added all the time. Adding "annual mileage driven" to your rate plan is going to change things like the male/female differential or the married/unmarried differential, because presumably these things correlate with mileage. So you run a new predictive model and everyone's rate changes a bit. Except some people's rates change a lot. Whenever we re-run these models, there is a bell-shaped curve of indicated rate changes. Some people are out on the tails getting big increases and big decreases. This is where price optimization comes in. Perhaps we don't immediately implement our new rating factors and cause all of that disruption to our book of business. Instead, we run a "price elasticity" model and do some complicated math to figure out how to spread these changes to the rating plans over several years. Instead of ramping my "male" factor from 1.05 to 1.20 like my loss model says to do, I'll raise it first to 1.14, then to 1.20. Or maybe I'll make this adjustment over three or more years. A sophisticated enough process can optimize what this transition looks like. That's the value of price optimization.
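The male-factor ramp can be sketched as a capped transition. The 8% annual cap below is an arbitrary illustrative choice; a real price-optimization engine would choose the path using an elasticity model rather than a fixed cap:

```python
def phase_in(current, target, max_annual_change=0.08):
    """Yearly path from the current rating factor up to the loss-model
    target, capping the year-over-year relative change."""
    path = [current]
    while path[-1] < target:
        path.append(min(target, path[-1] * (1 + max_annual_change)))
    return path

# Three steps: the starting factor, one capped raise, then the target.
print(phase_in(1.05, 1.20))
```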

What I believe will not happen is the following: an insurer identifies a segment of the population that is particularly price insensitive, then permanently charges them a higher rate. That overcharged segment of the population will eventually leave that insurer for one offering a true cost-based premium. Insurers know this and want to retain that business at a profitable price. This price-insensitive segment may see a slower movement toward the cost-based premium, and this may mean temporarily higher-than-cost-indicated premiums. But it will probably not be permanently overcharged. I think this is what concerned regulators and consumer "watchdog" groups have in mind when they disapprove of price optimization, and it is unlikely to happen in an industry as competitive as personal-lines insurance.

Price discrimination is economically efficient. It's a way to maximize consumer and producer surplus. It means charging more to people who don't mind paying, and less to people who do mind. It doesn't imply overall higher prices, as naive observers tend to assume. The "discounts" and "surcharges" tend to cancel overall. I've seen textbook models that describe price discrimination as "the producer captures much more of the consumer's surplus." This is a faulty model. I think it's more accurate to think of price discrimination as a set of discounts and surcharges that net out to zero, but that increase the overall surplus by making otherwise unpalatable transactions possible.
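The software-suite example from earlier makes this concrete. In the sketch below (the willingness-to-pay values and the marginal cost are invented), a uniform price loses the student's transaction entirely, while the student discount enlarges total surplus:

```python
cost = 5.0                                          # marginal cost (hypothetical)
values = {"professional": 150.0, "student": 30.0}   # willingness to pay

def surplus(prices):
    """(producer surplus, consumer surplus) given per-buyer prices;
    a buyer purchases only when the price is at or below their value."""
    producer = consumer = 0.0
    for buyer, value in values.items():
        price = prices[buyer]
        if value >= price:
            producer += price - cost
            consumer += value - price
    return producer, consumer

uniform = surplus({"professional": 150.0, "student": 150.0})
discriminatory = surplus({"professional": 150.0, "student": 25.0})
print("uniform $150:", uniform)             # (145.0, 0.0): student priced out
print("with discount:", discriminatory)     # (165.0, 5.0): both transactions happen
```

Total surplus rises from 145 to 170, and nobody's price went up: that is the "discounts make otherwise unpalatable transactions possible" point in miniature.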

Tuesday, October 24, 2017

It's Really Hard to Improve Things

I really love this piece by David Friedman.
Suppose you are designing a race car; further suppose that you are very good at designing race cars, and so get everything right. You face a variety of tradeoffs. A larger engine will increase the car's power to accelerate, it will allow it to better overcome wind resistance—but it will also weigh more and require a larger gas tank, which will increase the car's mass, reducing the gain in acceleration and possibly making the car more likely to burst its tires or skid out on a turn. Similarly with the size and shape of tires, width of the wheel base, and a variety of other features. 
Your car is designed, built, and it and its close imitators are winning races. A critic points out that you obviously have it wrong; the engine should have been bigger. To prove his point, he builds a car that is just like yours save that the engine is half again as large. Testing it on the straightaway, he demonstrates that it indeed has better acceleration than your car. He enters it in a race against your car—and loses.
The piece is not about a race car. It is more generally about how hard it is to improve things. Something you find in the world has probably undergone some sort of optimization process, which would tend to punish deviations from the global optimum. Sure, you can make some marginal improvements here and there if you think very carefully about them. But you are unlikely to improve things by tearing out everything by the roots and starting over. You have to be extremely careful even about asserting you've made a marginal improvement. Are you just measuring one thing? Or are you measuring (and optimizing) across all the dimensions of interest? Perhaps there are some you haven't thought of.

I recently finished reading Uncontrolled by Jim Manzi for the second time. (Good Econtalk interview here.) It's a rewarding book. It's on the same topic as David Friedman's post: trying to make things better is really hard. You often don't have the information necessary to make improvements. Will it increase your company's profitability to change your name from "Fast Mart" to "Quick Mart"? Maybe you own a thousand stores, half called "Fast Mart" and half "Quick Mart." Can you figure this out by comparing the average profitability of both groups? Of course not. Maybe you called your inner-city stores "Fast Mart" and your rural stores "Quick Mart" so they aren't really comparable. What if you run a regression, controlling for all relevant features that might drive profitability? This might be slightly more relevant than a pure comparison of averages, but you still don't know if there are hidden variables or things you didn't think of or perhaps can't even measure. You have to do an experiment. You have to randomly change the names of a few stores, enough that you get a true statistical signal of improved profitability that can't be attributable to noise. (Manzi actually uses the Quick vs Fast Mart example in his book.)

Experimentation is the only way to truly establish causality. And it is generally a low-yield process. Many modern companies are doing dozens, hundreds, even thousands of experiments. It's called A/B testing: one randomly selected group of consumers gets treatment A, the other gets B. Is there a difference? If so, you can be fairly confident that the thing you were testing caused the difference. It can be something simple, like changing the color of a webpage, or making a follow-up customer service call. This is all in order to eke out tiny marginal improvements in one small metric of success. Do enough of these marginal improvements on a routine basis and you might just barely stay ahead of your competitors. But rarely are you going to make one grand decision that will double your profits. (A few rare exceptions come to mind. I think the iPhone really was a product of grand designs, not a bunch of tweaks at the margin. These game-changers are rare beasts, at any rate.)
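The statistics behind a basic A/B test are the textbook two-proportion z-test. A sketch with invented conversion counts:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates,
    using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 10,000 visitors per arm; B converts 5.5% vs A's 4.7% (made-up numbers).
z = two_proportion_z(470, 10_000, 550, 10_000)
print(f"z = {z:.2f}")   # |z| > 1.96 means significant at the usual 5% level
```

Note the sample sizes: detecting a gap this small takes thousands of observations per arm, which is why A/B testing is the low-yield grind of tiny improvements described above.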

Now take government. Government does not do this kind of careful tweaking at the margins. In fact, government programs almost never respond to feedback at all. They are seldom rolled out in a careful experimental "test shot" manner, such that their effectiveness can be assessed and the program adjusted or ended. Even after blatant, severe policy failures, the initial supporters almost never admit to mistakes. The politicians who implemented those policy failures almost never call for repeal. And yet the sweeping changes implemented by government are huge in comparison to the marginal tweaks made by private businesses. They tend to apply not just to the customers of a single company, but to the entire nation (or state or city) all at once. I find this unbelievably irresponsible. Government should start incredibly small and make incremental adjustments, always being prepared to declare failure and abort the experiment. Instead we get, "Let's overhaul an entire industry from the ground up!" Or, "Let's ban this entire class of substances!" Or, "Let's spend a trillion dollars to 'stimulate the economy,' based on widely disputed economic models!" We should expect from the start that this will go very wrong. At least governments face budget constraints. The generosity of taxpayers is limited, thank goodness. (I know, "generosity" is the wrong word here.) And after all, you can't spend all of the government's revenue on every single social problem. Still, programs grow to absurd proportions. They persist long after their failure has been demonstrated beyond a reasonable doubt.

I can think of a few exceptions. The failure of alcohol prohibition in the 20s was so great and so obvious to the casual observer that even most supporters did an about-face. (Then again, drug prohibition persisted for a century despite having all the same horrific consequences of alcohol prohibition.) Many people who endured Nixon's peacetime wage and price controls could see the damage those policies caused in real time. Then again many failed to learn that lesson, and I suspect this generation might "forget" those historical lessons and have to re-learn them from scratch. Government failures sometimes get corrected, but it's rare and it takes far too long.

Think for a moment about making even small tweaks to the lives of individuals. Say you have the power to change things any time you think you know better. You look at someone's shopping cart and swap one item for another. You look at someone's daily commute and say, "Don't take that route, take this one." You look at how someone spends their free time and say, "Stop playing those video games and read this book instead." What are the odds that you can actually improve their life? You'd have to have tons of information about that person's preferences. Unlikely, considering the best information about their preferences is "the stuff they are already doing!" You'd also require tons of information about the possible alternatives to what that person is actually choosing. And you'd have to know how one decision interacts with others. Maybe the thing you swapped out of their shopping cart was an ingredient to a casserole, and that's not even obvious from looking at the shopping cart because the shopper already has everything else at home. Maybe your proposed commute route isn't as scenic. Maybe he drives by the store on his way home from work because doing so forces him to remember that he needs to buy groceries. He has figured out that if he takes any other route he'll forget. Your new route forces this person to spend more time on the road, because he now has to make special trips. Maybe the gamer has learned that he just hates reading books, but he is listening to audiobooks during the day and voraciously devouring free content on the internet. Maybe you are the philistine, both for dismissing his perfectly enriching hobby and for dismissing his other forms of information consumption.

I think this is hard even when you have a lot of information about the person whose life you are adjusting. Now think about how government works. A blunt-force instrument is applied to everyone at once. This is likely to go wrong. Be it a ban or a mandate, a tax or a subsidy, any single "tweak" applied to the entire population is likely to be ill-suited for a large proportion of those affected. When politicians see market institutions they don't like (say, payday lending) and seek to eliminate them, they are likely to get this wrong. When politicians see a market price they don't like and insist it must be much lower (pharmaceuticals) or much higher (wages of hourly workers), they are likely to get this wrong. There are reasons for that much-reviled industry, which seems to attract customers despite the objections of third parties. And there are reasons for prices to be that high and wages to be that low. These things come out of an optimization process that nobody quite understands. It's possible that a tweak here or there can cause a massive improvement, but it's unlikely. We should have a strong prior against "simple tweak causing massive improvement that for some reason just hasn't happened yet." A massive push is less likely still to be an improvement.

Arnold Kling captures the idea well in this podcast from a few years back. Here is my attempt to make Kling's point in my own words:
Imagine the CEO of a major company. Maybe it's an auto manufacturer, and the CEO says, "Now we're going to branch out and acquire some other businesses. We're going into shoe manufacturing, paper, computers, textiles..." Obviously this would be a big mess. It would be a hugely wasteful venture from the point of view of society. A lot of the stakeholders would lose out: shareholders would lose value, employees would lose their jobs, soon-to-be-disappointed customers might experiment with these new products of questionable quality from this business of ever-expanding scope. At least in this case, though, the discipline of the market stops it from growing out of control. Budgetary constraints and discipline from shareholders and customers who ultimately say "No" will put a stop to it. But take away those constraints and this wasteful monstrosity grows without bounds. That's my model of government. 
(Once again, NOT a quote. That's my own paraphrase of Kling's point.) This is a deep point about scope and scale. It may be desirable to grow in either dimension, but always carefully, always after doing your due diligence, and always with a release valve (or is it an escape hatch?) for when things go wrong.

___________________________________________________________

This is not a general argument for the status quo. Many "status quo" things are bad government policies that we've been stuck with. If we can identify government programs that failed to have any measurable effect on the targeted social problems, we should end those, some immediately, some by a scheduled phase-out. Uncontrolled makes the case for "status quo with gentle phase-outs for stuff that probably isn't working." I'm far more cavalier about saying, "Let's end government programs if they are morally outrageous, or if simple economic analysis implies they are of dubious value." I take the position that government should start close to zero and slowly adjust upward, rather than that it should start where it is and slowly adjust downward. But I'd gladly take the latter if it becomes an option.

Monday, October 23, 2017

Steven Pinker Explains Sexual Prudishness

The excerpt below is from Pinker's book The Blank Slate, which I highly recommend reading. It is an absolute breath of fresh air in today's political environment. (Read it and you will recognize that a lot of the things happening on today's college campuses are not new.)

I am no prude. But I think Pinker explains well why traditional societies have prudish attitudes about sex, and why even "modern enlightened" societies have mostly failed to completely shed these social norms. Pardon the name-drops in the following excerpts, but I don't think you have to get the reference in order to understand Pinker's point:
Even in a time when seemingly anything goes, most people do not partake in sex as casually as they partake in food or conversation. That includes today’s college campuses, which are reportedly hotbeds of the brief sexual encounters known as “hooking up.” The psychologist Elizabeth Paul sums up her research on the phenomenon: “Casual sex is not casual. Very few people are coming out unscathed.” The reasons are as deep as anything in biology. One of the hazards of sex is a baby, and a baby is not just any seven-pound object but, from an evolutionary point of view, our reason for being. Every time a woman has sex with a man she is taking a chance at sentencing herself to years of motherhood, with the additional gamble that the whims of her partner could make it single motherhood. She is committing a chunk of her finite reproductive output to the genes and intentions of that man, forgoing the opportunity to use it with some other man who may have better endowments of either or both. The man, for his part, may be either implicitly committing his sweat and toil to the incipient child or deceiving his partner about such intentions.
And that covers only the immediate participants. As Jong lamented elsewhere, there are never just two people in bed. They are always accompanied in their minds by parents, former lovers, and real and imagined rivals. In other words, third parties have an interest in the possible outcomes of a sexual liaison. The romantic rivals of the man or woman, who are being cuckolded or rendered celibate or bereft of their act of love, have reasons to want to be in their places. The interests of third parties help us understand why sex is almost universally conducted in private. Symons points out that because a man’s reproductive success is strictly limited by his access to women, in the minds of men sex is always a rare commodity. People may have sex in private for the same reason that people during a famine eat in private: to avoid inciting dangerous envy. 
As if the bed weren’t crowded enough, every child of a man and a woman is also the grandchild of two other men and two other women. Parents take an interest in their children’s reproduction because in the long run it is their reproduction, too... 
It goes on from there. Pinker makes several mentions of the ideal of completely casual sex with no emotional baggage or bad consequences, the "zipless fuck" as he repeatedly calls it. He appeals to evolutionary psychology to explain why it's improbable.

I remembered the presentation being slightly different. I thought he had said something like, "Mary and John have sex. Did everybody have a good time?" Then he starts to regale the reader with all the issues of single motherhood, jealous rivals, cuckoldry, etc. (Maybe he gives this alternative presentation in another book? Or did I imagine it?)

I can't find the passage at the moment, but Thomas Sowell also attempts to explain why prudishness about sex might be a practical strategy. He states quite frankly that in a society that is barely above the subsistence level, which describes the condition of pretty much everyone who ever lived up to 200 years ago, a baby that hasn't been planned for would be a disaster. The family knows that they will become the responsible guardians of their daughters' children if those daughters have sex with feckless men. You might thus begin to understand why fathers, mothers, brothers, sisters, aunts, and uncles take an interest in the sexuality of related young women. It helps explain why we might have evolved a natural protectiveness of our daughters and other relatives, as well as why we have strong cultural norms doing the same. It also explains why we aren't quite as obsessed with the sexuality of related young men (and Pinker very clearly draws out this distinction).

To explain is not to forgive. None of this is trying to justify outdated instincts or traditions that brutally suppress the independence of young women. (Indeed, in a different section of Blank Slate, Pinker uses the example of female genital mutilation to demonstrate the awfulness of "cultural relativism" taken to the extreme.) I think he is simply cautioning that these evolved instincts and cultural norms are sometimes there for a reason. We should not simply carry on as if these tendencies will disappear overnight, just because we think we can prove they are irrational. If we plan to adjust our cultural norms or override our instincts, we should tread carefully. Chesterton's fence!

________________________________________________________

At Slate Star Codex, Scott Alexander sometimes posts about polyamory. My impression is that it's fairly common in the rationalist community, and many groups make it work without all the jealous rivalry mentioned above. I can believe that a group of extremely high IQ, socially aware individuals dedicated to rationalism can make this work. I'm less confident that it will "catch on" for the wider population. But I am prepared to be proven wrong.

Sunday, October 22, 2017

Layers

"Hey, you know all those blog posts you wrote about meta-blogging? You should write those up into one neat summary. But what to name it?"

Insurance Regulation Part 2: Some Odds and Ends

In a recent post I outlined some of the problems with insurance regulation. I'll try to briefly mention some things that I either forgot to say or failed to emphasize without trying too hard to wrap it up in a cohesive narrative. Here are some scattered thoughts.

People really do turn back into human beings once you get them on the phone and talk to them like they're people. Curt, impolite e-mails and official letters turn into, "Oh, hello! How are you today!" A few boorish oafs (or oafish boors?) do not undergo this reversion to human form, but most do. I think my other post overstated the rudeness and understated this very positive point. People like to think they are nice and usually try to treat others with dignity. Professionals like to think of themselves as having an earned sense of integrity. I'm thinking of Russ Roberts' description of Adam Smith: People like to be respected, but also to have an internal sense that they are actually respectable. We like not just to be loved, but to actually be lovely. In a phone conference with one regulator with whom I'd had previous dealings in a different role at my company, he remembered me and said, "The last time I spoke to you, you were about to run out of the office because your baby was due any day." One regulator rapped with me on the phone for about 15 minutes after we had closed out our filing. Her professional obligation was done, but she wanted to chat, one colleague to another. Another e-mailed me after I received my designation to congratulate me, long after we'd had any kind of ongoing business relationship. The world isn't just "incentives" all the way down. People are funny. Most of them are kind of pleasant.

There is a serious "rule-by-checklist" mentality in insurance regulation. So long as someone can check something off their list as "handled" or "answered", they don't really care whether it's 1) completely irrelevant (so long as I get my check!) or 2) technically "answered" but still a problem. I think this is a feature of government, and bureaucracy more generally. A private enterprise can keep changing the rules to avoid rule-gamers and adjust to new trends. A government agency doesn't really face any consequence for having an out-of-date checklist. Governments are also bound by statutory authority; they can't just go banning shit they don't like...usually. Private enterprise, being private, can set the rules within the walls of their own buildings and adjust them as quickly as necessary. If their checklists are out of date, they lose money and lose out to competitors. Government regulation means complying with an out-of-date checklist. Market regulation means trying to ensure your internal checklist is up-to-date.

I don't think my previous post truly captured the banal tedium of the back-and-forth objection-and-response process. Regulators will object to a rate filing. The insurer might respond in full detail, or they may give just enough to barely answer the question and hope that's sufficient. Is the regulator asking this question because they are sophisticated and anticipate a specific "wrong" answer? Or is the regulator clueless and confused? Should we get them on the phone to clarify? Or should we simply fire off a borderline adequate response letter and hope they'll accept? Sometimes we'll be waiting weeks or months for approval, and instead yet another objection letter will show up. Sometimes this happens at inopportune times. Maybe we built in plenty of time for the objection/approval process, but the regulator still dragged their feet. So the objection comes when the actuary who needs to respond to it will be on a week-long vacation, or out studying for an actuarial exam, or (as alluded to above) out caring for a newborn baby and exhausted mother. One state will sit on the filing for two months, then send a 20- or 30-question objection letter. The little icon on Microsoft Outlook that shows you have an unread e-mail? At one point it drove me into a near-fit of anxiety every time I saw it because I thought it was an official correspondence from (say) the State of Florida, arriving just in time to ruin my week. (It took another year to unlearn this anxiety after I had moved out of that role.) It's a little unfair to say they "sit on" our filings. I'm sure they are very busy and don't have the resources to thoroughly review all filings in a timely manner. But knowing this, they should show some professional courtesy and grant us a little lenience. Meanwhile, through this delay, our IT folks are waiting to implement the change, and we have to keep telling them to hold off because we don't have approval. 
If we have a tight schedule for our IT crew, a delay on one state's rate change can have knock-on effects in which all subsequent rate changes are delayed, until they have time to catch up.

I meant to make an "in their defense..." point in my previous post. Maybe all this regulatory oversight looks stupid if you look at the actual actions taken. But maybe it's actually keeping us honest? The marginal regulatory action looks pretty silly and unnecessary, but the existence of this threat is keeping us from cheating our policyholders. In his book Breaking Rank, former police chief Norm Stamper describes a cop cursing out a "sleazy" defense attorney, who always stands up for the bad guys and sometimes gets them off when they're really guilty. The defense attorney responds that, if it weren't for him constantly busting the prosecution's balls, the prosecutors and police would get lazy. They'd slack in their investigations, deny suspects their constitutional rights more often (given that there wouldn't be anyone pushing back), and put more innocent people in prison. So maybe all this regulation looks silly to me but is acting like a back-stop against the bad stuff we would try in an unconstrained world. I don't quite buy this justification, but it had occurred to me so I wanted to share it. The insurance market is highly competitive. There are hundreds, maybe thousands, of personal lines insurers. Any screw-up we make is an opportunity for a thousand other players to steal our business. But, I don't know, maybe we'd all get lazy and slack, in the exact same way, making the exact same infractions, if not for regulation? I'll just shrug and say "Maybe."

There is an actuarial exam on the history of insurance regulation (well, maybe 30-40% of the syllabus is on this topic). The syllabus for this exam contains several papers that in some way or another outline various regulatory failures. There used to be a paper on failed attempts to constrain rate plans. Michigan, for example, said that you could only have a few rating territories, and furthermore that the difference between rates could only be "this much." This was an attempt to keep premiums low in Detroit. It failed, because some companies could write in Detroit while others wrote in the rest of the state. In fact, the same company could write business under two different charters (separate companies under the same parent company), rendering the regulation irrelevant. There is also a long discussion of "risk-based capital", or RBC. This is a set of rules for discounting various kinds of assets and comparing them to the company's liabilities. Perhaps government bonds are worth 100% of their face value, but maybe junk bonds are only worth a fraction of their face value, to account for the fact that they aren't reliable. The punch-line: RBC had almost no power to predict insurer insolvency. I mention this because solvency regulation came up in the comments of the previous post. In theory, I admit, this seems like a legitimate case for government regulation. In practice, you need an almost omniscient government to actually spot the insolvent insurers.
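To make the RBC-style discounting concrete, here is a rough sketch. The asset classes and haircut percentages below are invented for illustration; real RBC formulas are far more elaborate. But the basic mechanic is just this: credit each asset class at a fraction of face value, then compare the total to liabilities.

```python
# Hypothetical haircut factors: what fraction of face value each asset
# class is credited at. (Illustrative numbers only, not the real RBC rules.)
HAIRCUT = {
    "government_bonds": 1.00,  # fully credited
    "corporate_bonds": 0.95,
    "equities": 0.85,
    "junk_bonds": 0.60,        # heavily discounted for unreliability
}

def adjusted_surplus(assets, liabilities):
    """Discounted assets minus liabilities; a negative result flags
    possible insolvency under this toy RBC-style test."""
    credited = sum(HAIRCUT[kind] * face for kind, face in assets.items())
    return credited - liabilities

# $50M of government bonds counts in full; $30M of junk bonds counts
# as only $18M, so this insurer shows an $8M cushion over $60M of
# liabilities despite holding $80M of assets at face value.
surplus = adjusted_surplus(
    {"government_bonds": 50.0, "junk_bonds": 30.0}, 60.0)
```

The punch-line in the text, of course, is that even a much more elaborate version of this test turned out to have almost no power to predict which insurers would actually go insolvent.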

If I think of more, I'll do another post.

Friday, October 20, 2017

Thomas Sowell on Checking Citations

This is from Thomas Sowell's excellent autobiography Man of Letters:
[M]y mentor in economics, the late Nobel Laureate George Stigler, once suggested that anyone who spent an afternoon in a library checking up on footnote citations was likely to find the experience disillusioning. Years later, when I had occasion to follow the trail of a footnote on a familiar proposition in labor economics, I found that the evidence for it collapsed like a house of cards. My wife, an attorney, says that she has similar experiences when following up on citations in court cases.
I've had similar experiences. Either the citation backing some claim is from an extremely weak paper, or the cited paper doesn't support the claim at all.

Man of Letters is great. It's literally a collection of letters he's written to people over the decades. His narrative autobiography, A Personal Odyssey, is also excellent.
_______________________________________________________________

I'll try to do more posts like this. I have a lot of highlighted pages in my Kindle books and even physical books. I might as well share the things I've found interesting.

Contradictions

The limited success of a few charter schools and private schools doesn't scale up very well. That's why we should stick with public schools.

Research shows that people behave irrationally in the marketplace, lack basic information, and are easily misled and fooled. Therefore we should give them much more power as voters. 

We're using up our oil too quickly. Also, the price of gasoline is too high.

You can't reward teachers with merit pay or fire them for poor performance, because nobody knows how to measure teacher performance. Also, you need this certificate to be a teacher in this state, because the certificate packages together all the prerequisites that make someone a good teacher.

Drug users are irrational about the risks of drug use. That's why we need to ban drugs. With legal penalties in place, people will be rationally deterred by the legal risks associated with drug use.

I have a "COEXIST" sticker on my car, right next to my "I BUY LOCAL" sticker.

Others?

Tuesday, October 17, 2017

Really Bad Allies on Drug Legalization

A while ago I wrote a post about how progressives and other leftists make pretty flaky allies on drug reform issues. 

Jacob Sullum has a post today at Reason illustrating the same problem:
Today Tom Marino, the Pennsylvania congressman whom Donald Trump nominated to head the Office of National Drug Control Policy, withdrew his name because of a bill he was publicly bragging about just a year and a half ago. That bill, the Ensuring Patient Access and Effective Drug Enforcement Act of 2016, was uncontroversial when it was enacted. Not a single member of Congress opposed it. Neither did the Justice Department, the Drug Enforcement Administration (DEA), or President Obama, who signed it into law on April 19, 2016. Yet Marino's sponsorship of the bill killed his nomination because of the way the law was framed in reports by 60 Minutes and The Washington Post.
Here was a bipartisan bill without any congressional opposition. It limited the DEA’s power to keep people away from their pain medication. Thank goodness for that. But the Washington Post and 60 Minutes decided to spin this story about a sinister industry-sponsored bill that limited the DEA’s power to prevent the opioid crisis. “Simpleminded narrative” indeed. If “opioid epidemic” means heroin users accidentally overdosing on fentanyl, that is the DEA’s fault to begin with. Giving them more power to harass pharmaceutical companies wouldn’t help anything. In fact it would almost surely make the problem worse.  If people could buy pharmaceutical-grade heroin from a licensed dealer, this rash of overdoses would not have happened in the first place. By cracking down on dealers and disrupting the market, the DEA is making the dosage of street heroin unpredictable.


The Tom Marino thing is really a non-story, but here’s what I think is actually happening. These guys are willing to side with the DEA, against pain patients and their doctors, in order to sink Trump’s nominee and embarrass his administration. And they’re willing to tell this “evil pharmaceutical companies are bamboozling naïve patients into taking medicine they don’t need that’s actually bad for them” narrative. Shame on them. The leftist worldview assumes that mere “bigness” grants large companies power over their customers. This makes meaningful drug reform almost impossible, because leftists will try to hold companies (drug manufacturers) responsible for the misdeeds of their customers. I think most leftists are at least nominally in favor of "ending the drug war" and instituting harm-reduction policies. None of this will work if they start berating the first company that sells pure heroin because some fools use it recklessly, become hopelessly addicted, or take way too much and overdose. A stable, legal drug market is necessary if we want to implement meaningful harm reduction. If legal suppliers immediately find themselves on the wrong end of firm-wrecking wrongful-death suits and theatrical grillings in front of Congress, that's not going to work. This "blame the supplier" ideology obliterates the responsibility of the individual. It destroys the very concept of self-ownership, the notion that we have sovereignty over our very bodies. It also indulges a particularly toxic brand of economic populism.

I'd gladly be part of a political coalition for drug policy reform, even if it included people who I profoundly disagree with. But if it means signing on with people who are poised to sabotage any actual progress in order to score some cheap political points (and I think dumping on the Marino bill counts as a shining example), count me out. 

Sunday, October 15, 2017

The Regulatory State in Practice: The Insurance Industry

People would have far less faith in the regulatory state if they saw how it works day-to-day. In this post I'll share some of my experiences with the regime I am familiar with: personal lines insurance regulation.

Sometimes I'll give my standard libertarian argument for limited government, and somebody will make a knee-jerk, unserious comment about how "Of course, we need some regulation. Otherwise unfettered greed will rule." I don't think so. Whether regulation in practice fetters greed or exacerbates it is really an empirical question. It depends on how good your institutions are, how observant and diligent the voting public is about disciplining the regulatory state, whether it's possible to align the incentives of the regulators with the interests of the public, the relative costs of free versus regulated markets, and lots of other things. I think in almost all cases the best regulation is market discipline without any government augmentation. But in this post I want to narrowly focus on the regulation of personal lines insurance and suggest that maybe some of these lessons generalize.

I am an actuary. Part of my job is to defend my employer's rate filings to regulators, who are always looking for reasons to reject them. First, a little bit about how this works. Personal lines insurance (home/renters and auto policies) is regulated at the state level by each of the 50 states, rather than being regulated at the federal level. Each state has a Department of Insurance, or "DOI". (A mean and immature joke is to pronounce that acronym out loud.) Each insurance company has a rate structure that is explicitly written down such that any two people who are identical on paper get exactly the same price. Prices can vary by rating territory (usually groupings of zip codes and/or counties), age, gender, marital status, credit history (surprisingly predictive of auto and home-related accidents!), and prior claim history. But the insurer has to specify exactly how this works in a rate filing, and has to use exactly those rates until it makes another filing amending that structure. Typically it's something like: $500 base rate for the rating territory you live in, times a factor of 2.0 for your age, gender, and marital status, times a factor of 0.5 for your good credit history, times 2.0 for having multiple prior accidents, so your rate is $500 * 2.0 * 0.5 * 2.0 = $1,000. (This is just an example; it would be an extremely simple rating structure that no insurer could actually get away with in today's marketplace.) I can't just say to this customer, "Been shopping around, eh? Can't find anyone else who will write you a policy at under $2,000, huh? That'll be...$2,000!" I have to charge this customer exactly what my rating algorithm calculates or I am in violation of state law. Any insurer found deviating from their filed rates would be severely fined. There might be some ambiguities about what rate to charge. 
Maybe the Postal Service redefines zip codes mid-year, and the customer's zip code doesn't map to any rating territory, so I have to place them in the most reasonable one. Or maybe their marital or credit status changes and my rate plan failed to specify how quickly I will reclassify them, such that a divorced person gets the "married" rate or a person with improving credit is temporarily being dinged for their poor past credit (or someone with deteriorating credit is temporarily benefiting from their good past credit, which is far more likely in my experience). But these ambiguities are a small part of the game. For the most part, the rate is spelled out clearly and unambiguously.
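The multiplicative example above can be written out as a tiny program. All names and numbers here are hypothetical (echoing the deliberately simple example in the text), but it shows the key property of a filed rate plan: the inputs fully determine the premium, with no room for discretion.

```python
# Hypothetical filed rate tables. In a real filing, every one of these
# values would be spelled out in the rate manual submitted to the DOI.
BASE_RATE = {"territory_01": 500.0, "territory_02": 650.0}
CLASS_FACTOR = {("25-65", "married"): 1.0, ("under_25", "single"): 2.0}
CREDIT_FACTOR = {"good": 0.5, "fair": 0.9, "poor": 1.2}
PRIOR_ACCIDENT_FACTOR = {0: 1.0, 1: 1.4, 2: 2.0}  # 2 means "2 or more"

def filed_rate(territory, age_band, marital, credit, prior_accidents):
    """Compute the premium exactly as the filed rate plan dictates.
    Charging anything else would violate state law."""
    return (BASE_RATE[territory]
            * CLASS_FACTOR[(age_band, marital)]
            * CREDIT_FACTOR[credit]
            * PRIOR_ACCIDENT_FACTOR[min(prior_accidents, 2)])

# The example from the text: $500 * 2.0 * 0.5 * 2.0 = $1,000.
premium = filed_rate("territory_01", "under_25", "single", "good", 2)
```

There is no branch anywhere for "what the customer is willing to pay"; that is exactly the point.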

Typically, a company does a rate filing for every state at least once a year. This means an actuary has to write up a long report full of data (claims paid, premiums received, expenses incurred, investment income received, rate differentials by territory or classification) and submit it to the state DOI. Then a regulator at the DOI looks it over and either 1) approves it or 2) sends the insurer an "objection letter" stating the many things they don't like about the filing. There is a huge difference between state DOIs. Some are extremely lenient and will rubber-stamp almost any filing, as long as it's reasonable. Unless you're increasing rates by 100%, or implementing an explicitly racist class plan, these states will approve your filing very quickly. Bless them. (Typical overall rate changes are in the 5-10% range, usually just keeping up with inflation. And rating based on race is explicitly against the law in every state, and probably a violation of federal law, too.) Other states are extremely picky. Sometimes they are nettlesome for no particular reason. Sometimes the regulator does not have any statutory authority for their objection, but rather they are objecting to something that they just don't like. Most states have some catch-all statute regarding insurance regulation that reiterates the definition of an actuarially sound rate: A rate is reasonable and not excessive, inadequate, or *unfairly* discriminatory if it is an actuarially sound estimate of the expected value of all future costs associated with an individual risk transfer. Emphasis mine. Do you see a problem with this? "Unfair" is extremely subjective. A statute reiterating this principle basically gives the regulator carte blanche to object to anything they don't like. One state (a very, very northern state) will cite statutes in their objections, but when we look them up we always find that they refer to this boilerplate language about actuarially sound rate-making. 
It's almost never a reference to a law explicitly banning something in our proposed rating structure.

If regulatory overreach is one annoying problem, regulator incompetence is another. Often the regulator is not an actuary or is not otherwise technically savvy. Or sometimes they are actuaries who lack the specific technical expertise to understand the rate filing. Insurers are increasingly sophisticated in their pricing and using increasingly complex methods for segmenting risks. The industry standard for a long time has been the generalized linear model, or "glm". If you've ever found the "line of best fit" through a bunch of points in a math class, the glm is just a more sophisticated version of this, with an arbitrary number of dimensions (not just two) and with different kinds of penalties for "missing" points on the scatter-plot. A glm is not all that complicated. Using one gives you a rating plan with multiplicative factors, as in my example above. The model tells you: "Multiply by 1.05 for male, 1.2 for unmarried, 0.90 for fair credit history, 1.00 for no prior incidents..." Simple as this is, I would say that most regulators are pretty clueless even when it comes to glms. But insurers are increasingly using things far more complex than these. Gradient boosted models (gbms) chain together thousands of small decision trees into insanely complex ensembles. Neural nets are extremely complex systems of variable weights and triggering-thresholds. Increasingly these even more complicated models are being used to design (or at least to inform) our rating plans, and yet many regulators are still perplexed by the relatively simple glms.

We try to do our best when it comes to justifying our glm results, but honestly most regulators wouldn't know what they were looking at if we gave them a filing exhibit spelling out everything in perfect detail. Sometimes they ask telling questions that betray their lack of understanding. I literally had an objection letter once that asked, "What is 'multivariate analysis'?" Perhaps it's a non-standard term, but anyone even remotely familiar with recent trends in the industry would know this is a reference to glms and related methods. It is in contrast to "univariate analysis", in which the mean for each group is calculated and the relative averages are used to set the rate differentials. For example, "Males cost 1.2 times as much as females to insure, so apply a 1.2 factor to males and a 1.0 factor to females." The "univariate" approach is wrong, because males could have other risk factors driving the difference. Maybe the average male customer for our company is younger, has worse credit, etc. A glm automatically accounts for these correlations between different rating variables. That is why we use them. None of this is terribly obscure, either. The reasons for using glms are described in detail and the methodology is fully fleshed out in several of the actuarial exams (grueling industry exams that people in my tribe have to take to earn our designation). Another typical question is something like, "How do you avoid double-counting if two rating variables overlap?" or "How do you adjust for correlations between rating variables?" The answer is that I don't have to, because I'm using a glm. A colleague once asked me how I answered such questions, and I said something like the previous sentence. We busted up laughing, because my blunt answer (which I would never really give to a DOI) points out how thoroughly the questioner is missing the point.
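Here is a toy illustration of the univariate-versus-multivariate point, with invented numbers. Rather than a full glm, it uses the classic iterative "minimum bias" procedure, an older actuarial technique and a precursor of the glm, which recovers the same multiplicative factors on clean data like this. The data is built so that males are disproportionately young: the naive one-way relativity then badly overstates the gender effect, while the multivariate fit untangles the two variables.

```python
# Losses are generated from a known multiplicative model:
#   pure premium = 100 * age_factor * gender_factor
# with young=2.0/old=1.0 and M=1.1/F=1.0, and males skewed young.
BASE = 100.0
TRUE_AGE = {"young": 2.0, "old": 1.0}
TRUE_GENDER = {"M": 1.1, "F": 1.0}
EXPOSURES = {("young", "M"): 100, ("young", "F"): 20,
             ("old", "M"): 20, ("old", "F"): 100}
# cell -> (exposures, total losses)
cells = {(a, g): (e, e * BASE * TRUE_AGE[a] * TRUE_GENDER[g])
         for (a, g), e in EXPOSURES.items()}

def univariate_gender_relativity():
    """Naive one-way estimate: male pure premium / female pure premium."""
    def pure_premium(g):
        loss = sum(l for (_, gg), (e, l) in cells.items() if gg == g)
        expo = sum(e for (_, gg), (e, l) in cells.items() if gg == g)
        return loss / expo
    return pure_premium("M") / pure_premium("F")

def minimum_bias_factors(n_iter=50):
    """Alternately rebalance each factor against the other's marginal
    loss totals until the multiplicative fit reproduces both marginals."""
    age = {a: 1.0 for a in TRUE_AGE}
    gender = {g: 1.0 for g in TRUE_GENDER}
    base = (sum(l for _, l in cells.values())
            / sum(e for e, _ in cells.values()))
    for _ in range(n_iter):
        for g in gender:
            num = sum(l for (a, gg), (e, l) in cells.items() if gg == g)
            den = sum(e * base * age[a]
                      for (a, gg), (e, _) in cells.items() if gg == g)
            gender[g] = num / den
        for a in age:
            num = sum(l for (aa, _), (e, l) in cells.items() if aa == a)
            den = sum(e * base * gender[g]
                      for (aa, g), (e, _) in cells.items() if aa == a)
            age[a] = num / den
    # Normalize so the base levels ("old", "F") carry factor 1.0.
    base *= age["old"] * gender["F"]
    age = {a: f / age["old"] for a, f in age.items()}
    gender = {g: f / gender["F"] for g, f in gender.items()}
    return base, age, gender
```

On this data the one-way relativity comes out around 1.73, not because males are 73% worse drivers but because they are mostly young; the multivariate fit recovers the true 1.1 gender factor and 2.0 age factor. A glm does the same disentangling automatically, which is why "how do you adjust for correlations between rating variables?" misses the point.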

Another typical question is something like "Please provide the data used in this analysis." Once again, this betrays a complete lack of understanding. The underlying data in a glm is a gigantic table containing millions of records, probably in the tens of gigabytes for a decent-sized insurance company. The regulator doesn't actually want this, and probably doesn't have the technological capacity to even accept a file transfer of this size, and almost certainly could not perform an independent analysis if we sent it to them. At any rate, it would completely compromise our competitive position and (more importantly) our policyholders' privacy/security if we were to send around such a comprehensive database of our customers and their claims payments. (DOIs aren't always so diligent about security. I have seen pages from competitor filings marked with big red letters saying "CONFIDENTIAL", as in "The insurer marked this as confidential but the state DOI did not honor their wishes. They just published it with everything else, because they couldn't be bothered to separate out the 'public' from the 'confidential' files.") My best guess is that the person asking for "supporting data" is still in the univariate mind-frame. They think they are asking for a few summarized tables showing, say, claim payments by gender (or age or credit), number of policies in each category (termed "exposures" in the industry), and a loss relativity, thus supporting the rating factor for each variable. Unfortunately there is no way to fairly "summarize" the data underlying a glm. The entire database goes in, and the rating factors come out. It's a sophisticated calculation that requires all the data at once.

Sometimes there are "filing forms", which are lists of questions that we have to answer in our filing and which are the same each time we file. At least the DOI is telling us ahead of time what it wants, rather than asking for several rounds of clarification after the fact. In theory, this can be a time saver and allow us to preempt questions and get the filing approved more quickly. In practice, these are a waste of time and can open the insurer up to further rounds of questioning because the DOI doesn't understand the answers to the questions it asks. ("Give me a statistics lecture! Mmm hmm. Mmm hmm. And what is this 'multivariate analysis' you speak of?") These filing forms frequently betray a lack of understanding. One that I helped fill out recently asks about a "test for homoscedasticity." Homoscedasticity means that the points are evenly scattered around the best-fit line: they aren't closer to the line for small values and further from it for large values (or vice versa). The question betrays ignorance about glms, because in a glm you explicitly relax this assumption. A traditional linear model insists on normally distributed residuals with a constant variance; a glm allows one to choose a Gamma or Poisson or some other kind of error structure, which allows the variance to be a function of the mean (the y-value of the best-fit line). If that's all very confusing, don't worry about it. What's happening here (I think) is that someone copied and pasted a few lines of text from a linear modeling textbook without understanding what they were copying. Many filing forms ask about the R-squared or adjusted R-squared, and ask if the residuals are normally distributed (essentially reiterating the "homoscedasticity" question without realizing they've asked the same thing twice!). Once again, they are failing to understand the very basics of a glm, a standard insurance industry tool. These questions apply only to traditional linear modeling; they don't apply to the glm world.
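A quick simulation of the variance-function point. The shape parameter and means are made up; the sketch just shows that under a Gamma error structure the spread of the response grows with its mean, so "testing for homoscedasticity" is beside the point:

```python
import random
random.seed(0)

# A glm's variance function ties the spread of the response to its mean:
#   Normal (classic linear model): Var(Y) = sigma^2        (constant)
#   Poisson:                       Var(Y) = mu
#   Gamma:                         Var(Y) = phi * mu^2
# Heteroscedasticity is built into the Gamma and Poisson families on
# purpose, so a homoscedasticity test would "fail" by design.

def sample_sd(draws):
    m = sum(draws) / len(draws)
    return (sum((x - m) ** 2 for x in draws) / (len(draws) - 1)) ** 0.5

shape = 4.0  # Gamma shape; SD of a draw is mu / sqrt(shape)
for mu in (100.0, 1000.0, 10000.0):
    # random.gammavariate(alpha, beta) has mean alpha * beta
    draws = [random.gammavariate(shape, mu / shape) for _ in range(20000)]
    # the sample SD scales with the mean (roughly mu/2 here), unlike the
    # constant-variance assumption of a traditional linear model
    print(mu, round(sample_sd(draws)))
```

Running this, the standard deviation grows in lockstep with the mean, which is precisely what the filing-form question would flag as a "problem" in a model where it is the intended behavior.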

Don't mistake me for saying that regulators should develop a sophisticated understanding of these models so they can really grill insurers about how they are being used. Some moderately sophisticated regulators do ask reasonable questions about methodology. ("Did you control for geography? Did you offset with your limit and deductible factors?") The problem here is that there are a thousand "right" ways to do something. One modeler might think it's absolutely necessary to "offset" your model with your coverage limit factors (which are more appropriately calculated outside of the glm; this is the 50/100/25 or 100/300/50 that you see on the insurance policy in your glove box). Another might think it's okay not to offset, so long as you have the various limits in your model as a control variable. Another might think it's okay not to even bother with this control variable, because every time she's ever done this in the past, she got the same factors with and without controlling for limit. It would be a mistake for a regulator to assemble a list of "best practices" from the actuarial literature and start grilling every insurance company about whether they're complying with those standards or not. (And "Why not!?") I've talked to very senior glm builders, gurus for the profession, who have very different ways of building these models. It's a mistake to think there's a "right" way of doing things. It would be wrong to waste time and resources demanding that a company show the results if the model were built some other way. At best, the technically competent regulator should see their role as a guiding hand, perhaps gently suggesting that an unsophisticated insurer might get a better result if they built their model some other way. But they shouldn't be grandstanding on their checklist of best practices and holding up someone's rate filing.

Regulators vary in their level of rudeness. Some are extremely boorish. I guess they figure you aren't really a "customer." You have to deal with them and accede to their demands. I guess they figure that if courtesy takes any effort at all, it's not worth it. Fortunately, most of these people turn back into human beings once you get them on the phone and they have to talk to you. (Most.) But even in the case of a "polite" regulator, this person is often asking for lots of unnecessary busy-work. This person wields the power of the state, and can use it to hold up your filing. The resulting busy-work can result in hundreds of man-hours of labor and tens or hundreds of thousands of dollars in lost revenue due to unnecessary delays.

Sometimes incentives are poorly aligned. Many states use outside consulting agencies to review all rate filings. Many of these agencies are paid by the hour, or rewarded for each "infraction" they find. So they have an incentive to create busy-work to generate billable hours and to find "infractions" no matter how trivial. A company I worked for once got fined after a "market conduct exam" because our rating manual said we would surcharge customers who paid late, but we never did surcharge them. I think it was just a matter of us wanting to have something to threaten late-paying customers with, but not actually wanting to annoy them every time they paid late. So we never put in place the process to actually surcharge them, or we had a process but never pulled the trigger on it. It's the kind of reasonable latitude that companies grant their customers all the time, but these regulators saw an opportunity to fine us and they pounced.

Every state has an insurance commissioner, who generally oversees the state's DOI. Some are elected and some are appointed. Elected commissioners might face different political incentives than appointed ones. Appointed commissioners are usually older insurance professionals who have some interest in public service. They might be more technically savvy. They typically understand that prices have to go up to keep up with inflation, that price differentiation is necessary to a functioning insurance market, that locking in low rates will make insurance less available, etc. These people may understand things about the realities of insurance pricing that the voting public doesn't. Elected commissioners, on the other hand, might campaign explicitly on a platform of "I will not approve any rate increases." A populist tailwind may allow these commissioners to behave incredibly irresponsibly and compromise the insurance market in their state. They end up not approving reasonable rate increases, or placing unreasonable caps on rate increases, or holding up rate filings for months before finally relenting when things aren't going well.

With all this regulation, what benefit does the insurance customer actually see? Surely they get a rate that's, say, 10% lower, right? No. That would be an absolutely intolerable rate inadequacy, and no insurer would stay in that market for long. Insurance premiums are actually higher because of regulation. We have to hire teams of people to stay informed and up to date on regulations and various law changes. We occasionally have to physically fly representatives to rate hearings in other states. We have staff dedicated to preempting and responding to regulatory actions. All of this is ultimately paid for by the insurance customer. There is no one else to pay it! The regulatory lag I mentioned above may not actually cost the insurer any revenue. More likely, the insurer assumes this lag in its business process. They either start the rate filing process earlier, or they take a slightly higher rate increase to account for the lag. (If my rate filing will take three months of regulatory approval time, for example, I will build three months' worth of inflation into the calculation indicating how much rate to take.) There is also labor on the regulator side. Someone has to pay for the staff of the state's department of insurance, to keep the lights on and to keep the building heated and cooled. This may be paid for with insurance taxes, or it may come from general state revenue. Either way it comes out of the pockets of insurance customers. And what do they get for all this? At best, maybe some customers get a 10% lower bill, but at the cost of someone else paying 10% more. Regulation doesn't result in overall lower insurance costs. It just means that some customers pay slightly more and some others slightly less. If a state DOI managed to truly hold down overall prices in their state, insurers would start to exit that state's insurance market.
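The lag adjustment in the parenthetical above is simple compounding. A sketch with made-up trend and lag numbers:

```python
# Building regulatory lag into a rate indication (illustrative numbers).
# If I need a 6% rate increase effective today, but approval will take
# three months, I trend the indication forward by three months of
# inflation so the rate is adequate when it actually takes effect.

annual_trend = 0.05   # assumed annual loss inflation ("trend")
lag_months = 3        # assumed regulatory approval lag
needed_now = 0.06     # indicated rate increase with no lag

lag_trend = (1 + annual_trend) ** (lag_months / 12) - 1
filed_increase = (1 + needed_now) * (1 + lag_trend) - 1
print(round(filed_increase, 4))  # a bit above 7%
```

In other words, the customer ends up paying for the lag one way or the other: the filed increase is padded by exactly the inflation expected to accrue while the filing sits in the queue.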

For an example of insurers exiting the market completely, see the Florida market for homeowner's insurance. Most of the cost of Florida homeowners insurance is due to infrequent but catastrophic hurricanes and other tropical storms. Historical losses will not be truly indicative of future expected losses, so insurers need to use simulations to estimate their actual exposure to hurricane risk. Computer simulations of thousands of storms are run, and the resulting damage to existing homes is estimated based on these simulated storms. The Florida Office of Insurance Regulation is extremely picky about what kind of hurricane model you can use. The regulation of these models is so onerous as to be punitive. Florida's regulation of hurricane models is an example of regulators being relatively sophisticated but still not adding any value to the insurance market. (Well, adding negative value, in that they've driven insurers out of the state.)

I try to view this all charitably. Maybe even though every action taken by regulators looks like a waste of time and resources, market discipline would totally collapse without them? The marginal action of a regulator looks silly, but maybe the overall effect of regulation is a positive one? It could be, but I find this hard to swallow. There is fierce competition in the market for personal lines insurance. You can get dozens, even hundreds, of quotes if you only have the time to shop around. There are thousands of insurers. It is a very thick marketplace. Some insurers will advertise their financial strength, others will give you a lower price because they lack the reputation of major industry players. Some will sell based on strong "customer service", while others will have no-frills service with correspondingly low expenses and lower premiums. Some will never deny a reasonable claim (thus costing more), and some will fight every marginal claim and even some reasonable ones (thus costing less). I don't think regulation has much of a role to play in such a thick market. Customers know they are taking a chance when they buy from a no-name insurance company with cheap premiums. They also know they can find a better price if they shop around a little. Most customers don't bother. They may complain about their insurance rate going up, but they can't be bothered with the minor annoyance of getting quotes from a few competitors. Oh, some certainly do. And insurers are paranoid about policyholder attrition. Insurers are often gun-shy about taking the rate increases they need, because even a necessary rate increase would threaten customer retention. They implicitly feel the discipline of the market when deciding how to set the price. They pore over competitor rates, customer retention statistics, and new customer acquisition numbers. The regulator adds no value to this process.

I don't think any of this is necessarily unique to insurance. I would imagine other industries have similar problems regarding regulatory incompetence and regulatory overreach (or perhaps forbearance). Fundamentally, government just doesn't have much to offer us in terms of market regulation.

Friday, October 6, 2017

Estimates of the Uninsured: Worse than Useless

Every time there is any movement to change health policy at the federal level, I hear estimates that “X million people will lose their insurance under the Republican plan” or that “Y million people gained insurance under Obamacare.” I think these are useless statistics. It’s not like being uninsured implies zero access to health care. People with no coverage and no assets get tons of free treatment all the time. If you’re homeless with no health insurance policy and no money but you go to the ER suffering a heart attack, you will get an angioplasty for free. Conversely, people in other developed nations with “universal healthcare” often have long waits to see a doctor. Often they want a treatment but are told “no.”  Also, as I’ve pointed out before, coverage status just doesn’t appear to correlate well with actual health outcomes. It’s not like those millions of people who got coverage under Obamacare suddenly got healthier. (Are there any empirical estimates of the effects of the ACA showing large, positive, unambiguous health effects? If so, please share.) Likewise it’s not likely that they’ll suddenly get sicker once they lose their so-called coverage. (Several examples of "uninsured" Americans consuming more healthcare than their Canadian neighbors here. If you know of a more systematic comparison of this type, please share.)

I’d like to see something more meaningful than a count (really an estimate) of how many Americans “gain” or “lose” coverage under some health policy proposal. I’d rather see an estimate of wait-times, perhaps broken down by covered versus not-covered. Or an estimate of the likelihood that someone will be treated, or receive some particular treatment. “X million Americans will see their wait-times for an office visit drop by Z-percent.” Or “X million Americans will get Y-percent more MRIs and Z-percent more mammograms.” Ideally this could be turned into a mortality rate estimate, and the estimate could be measured against the actual observed mortality change after the policy passes. The effect of health policy on health outcomes is, after all, an empirical question. We should ultimately have some objective means of deciding whether the policy succeeded or not.

I’m a bit tired of hearing claims that some Republican tweak to the ACA is going to plunge millions of Americans into Dickensian poverty and illness. Not that I’m defending the Republicans or any particular proposal they’ve put forth. (If I were to put forth my own proposal, it would be far more radical and go a lot further than anything the GOP has proposed.) Rather I just don’t think that health policy has that strong an effect on actual health outcomes. 

Wednesday, October 4, 2017

A Simple Value-Neutral Model of Rising Income Inequality

Suppose that the range of options has expanded in both directions. There are more ways to make a lot of money, and there are more ways to live comfortably without earning much or without earning anything at all. Next, suppose that people vary in their preferences. Some prefer more income with less leisure, and some prefer more leisure with less income. Think about what happens to naively-measured “income inequality” in this world.

I’m nearly certain both conditions in the above paragraph are true. Incomes (conditional on working) have risen, and it’s easier and much more common these days to be a “live-in-your-parents’-basement-playing-video-games” man-child. I don’t think that the corporate lawyer and the under-employed man-child were cast into their roles by a cosmic roll of the dice. People choose their professions in large part based on their preference for the leisure/income trade-off.

If annual income is the metric on which we’re to measure “inequality” (and it’s a phenomenally bad one), then we should expect it to increase as the world gets richer and more prosperous. If we picked a more relevant measure of economic well-being (like consumption, while perhaps monetizing leisure to put it on the same level as other forms of consumption), we’d see that the world is much more equal. 

I don't have a ton of data to bring to bear on this simple model. I have read that when you measure the activity of unemployed men of prime working age, they are spending a lot of time playing video games (citation needed). Anecdotally, I know a lot of people who could have earned more but deliberately chose not to. They picked a b.s. (lower-case) major in college, or they picked a decent career path but weren't "gunner" enough about it, or they finished their undergraduate degree but decided at the last minute not to go on to law school. As the title says, my explanation is value-neutral. I'm not judging these people for not working harder and I'm not going to insist that they all made mistakes (though I suspect that some of them didn't act in their own long-term self-interest).

Now think for a moment who is likely to attribute their success mostly to chance versus mostly to effort. Think about who will be more apt to notice and remember obstacles to their success. Who is more likely to rationalize bad decisions? I'm guessing that lower-income, lower-status folks are more likely to perceive (imagine?) barriers to their success. 

Really Bad Arguments Against Repealing Drug Prohibition

This will not be a comprehensive argument in favor of drug legalization, just a list of really bad whoppers I have heard and my responses to them.

“There will be a huge surge in drug use.”

This is the most obvious objection, and it’s wrong for a number of reasons. In historical cases where the legal status of a drug has been changed, you just don’t see that large a demand response. In the United States most recreational drugs have been illegal for a very long time, so it's hard to say what demand was "before" and "after." But use rates have failed to respond to massive shifts in drug enforcement efforts. Also, massive changes in use rates of any particular drug have fluctuated wildly despite there not being any change in enforcement effort. In other words, neither the legal status nor the intensity of enforcement appears to affect usage rates by much. (The empirical evidence for this is fully fleshed out in Jeffrey Miron's Drug War Crimes and also in Matthew Robinson's Lies, Damned Lies, and Drug War Statistics. I'll stop there, because I don't want to list every book on my "drug policy" shelf.)

I think the people who say this are implicitly assuming that the only thing holding people back from drug use is the legal status of the drug, which is a very absurd assumption once you say it out loud. The main things keeping people away from dangerous drugs are the inherent risks of addiction, social dysfunction, drug-related health problems, and overdose. People who are willing to endure these risks are not much affected by adding legal risks on top of them. The people who want to use these substances are already using them. It is absurd to think that people are undeterred by the pharmacological risks of drug use but then respond strongly to the legal risks of drug use. (Remove the words "pharmacological" and "legal" from that sentence to see the absurdity. To make drug prohibition sound like a good idea, someone has to actually square this circle.) There isn’t an enormous pent-up demand that will surge forth if the dam of drug prohibition bursts.

“Bad guys will just find something else to do.”

I first heard this one at a debate on drug legalization at my undergraduate university, and I’ve heard it a few times since. This is the kind of thing that people can only say if they have not incorporated any economics into their worldview. Proponents of drug legalization often argue that much of the violence in society is due to black market crime. (Again, see Drug War Crimes, which has an entire chapter devoted to this topic.) Drug dealers killing each other over territory, killing witnesses, killing or beating subordinates, drug users retaliating against a dealer who ripped them off, etc. There really is quite a lot of this kind of violence. It makes up a significant fraction of total murders and assaults. This becomes very clear if you look at countries like Mexico or Colombia, where the violence is noticeable in everyday life. It exists in the United States, too, even if to a lesser degree.

When you make something illegal, you don’t actually stop people from producing and selling it. All you do is ensure that the most violent individuals will be in charge of production and distribution. Simply put, there are more bad guys in the world because drug prohibition has made it more lucrative to be a bad-guy. The proponents of this argument are making some kind of daffy assumption that there is a fixed number of wrong-doers, regardless of the relative costs or rewards to being a wrong-doer. Most of these people are “law-and-order” types who love heavy criminal penalties, so it is truly stunning to hear them argue that the bad guys don’t actually respond to incentives.
To anyone who is committed to this viewpoint, we legalizers happily accept your surrender. If, by your own admission, bad guys will do bad regardless of the rewards or penalties they face, legalization is a no-brainer.

I suspect that this argument is simply an ad hoc attempt to deny one of the major benefits of drug legalization, given that it’s (usually) contrary to the speaker’s actual worldview. It’s the kind of argument you get when people try to “rack up bullet points” rather than actually think about what they are saying.

“Drug prices won’t fall much, so you’ll still have all the economic crimes by drug users trying to finance their habit.”

I heard this one recently, and it’s new to me. It’s another ad hoc attempt to dismiss an argument in favor of drug legalization, but in fact someone who takes this position seriously is actually making an incredibly strong case for legalizing drugs. The whole purpose of drug prohibition is to make drugs so expensive (in monetary and other costs) that people stop using them. If the drug warriors are ready to admit failure on this front, once again I’d happily accept their surrender. I don’t understand how someone could still favor drug prohibition after insisting that prohibition has failed to achieve its one true objective. Nevertheless, I have heard this claim more than once, and by people who put drug "offenders" in prison. Legalizers like me sometimes make the argument that if drug prices are allowed to fall to their true market value, there will be far less property crime from addicts trying to support a habit. These people can find real jobs and live lives with normal schedules, rather than constantly seeking their next fix and stealing or "hustling" to finance it. I view the "drugs won't get cheaper" argument as a pathetic attempt to deny this benefit. 

In actual fact, drug prohibition has increased the market price of drugs. The black-market markup has been exaggerated by some writers; it’s not in the “factor of 100” range that you sometimes hear. In “The Effect of Drug Prohibition on Drug Prices: Evidence from the Markets for Cocaine and Heroin”, Jeffrey Miron concludes that the black market price of cocaine is 2-4 times the legal price and heroin is 6-19 times the legal price. Not exactly a “factor of 100” (an extreme claim that Jeffrey Miron is attempting to tone down) but still a significant financial relief for the severe addicts who spend most of their resources feeding an expensive habit.

“Drug laws are a good way to arrest real criminals when those crimes are hard to prove.”

This one is shocking to the conscience. It is pretty disturbing to hear law-and-order types suggest that drug laws allow an end-run around the constitution, and that this is a feature rather than a bug. I’m sure they have a point. If you “know” someone is a criminal, it’s probably easier to pat them down and find a baggy of drugs than to actually discover evidence of a real crime. That being said, I’m always disturbed by the confidence that law enforcement types have in their own estimates of who is or isn’t guilty. 

I dearly hope that proponents of this argument aren’t actually saying that we should make something arbitrarily illegal just so the police and prosecutors can arrest and imprison whoever they want to. I suspect this is just a throw-away, “Oh, by the way…” kind of argument. Perhaps it doesn’t, on its own, support the policy of drug prohibition, but is in some sense a mitigating factor to an otherwise bad policy. I don’t approve of this viewpoint at all. In fact, I think that too many resources are diverted from policing real crimes to policing drug crimes, and that’s part of the reason for social decay in some neighborhoods. If not for drug prohibition, there wouldn’t be so many missing young men spending time in prison, there wouldn’t be as many shattered families, and there wouldn’t be so much distrust of the police. Under those circumstances, maybe the communities could actually forge some kind of relationship with the police, and real crimes would actually get solved because of the resulting cooperation.

That's it for now. I hate to do these "fish-in-a-barrel" responses to really stupid things that I've heard. I like Scott Alexander's concept of steel-manning an argument, as in "making the argument under scrutiny as strong as possible, even if the person delivering it wasn't very articulate or reasonable." But I've heard these silly claims so I might as well respond to them and say why they're wrong. I plan to eventually do a long round-up post that unifies arguments in favor of drug legalization made in several earlier posts. 

Sunday, October 1, 2017

Welcome New Readers!

A recent post of mine got picked up by Scott Alexander in a link roundup. I was astonished to see the amount of traffic that came to my blog via that one link. I shudder to think what an entire Slate Star Codex post dedicated to the topic might have done. I rarely get comments, but I got a few on the post. And I could tell that people were skimming my older posts, and even commenting on a few. Lurkers are of course perfectly welcome, but I appreciate any feedback I can get. I want to welcome the new readers I've picked up in the past week or so.

I'm sure curious readers have perused my previous posts. If you're reading this on a computer or tablet, you should see my most-read posts on the right-hand side. I have a large number of posts arguing against drug prohibition starting around February 2016. I have a couple of posts about what thoughtful comments do and don't look like, here and here. I have a few scattered posts about so-called "inequality", how health insurance should work, and "moral outrage" as a debating tactic (one that I am finding increasingly obnoxious).

A few things I noticed.

Most people don't read all that carefully. That post, which attempts to debunk the standard narrative of the opioid epidemic, had at least a dozen links to prior posts by me which contained supporting information. Fewer than 10% of readers clicked on any of those. I'd hope that a larger share of readers would think, "Huh, is that really true? Why does he think that's true? Oh, there's a link arguing that this is true." Of course, many of those links were to places other than my own blog, and maybe people were scrupulously checking the various government documents and other articles I linked to. I promise that I'm doing my best and will never deliberately bend the truth, but I also sincerely hope nobody ever simply takes my word for anything I claim on this blog.

Some of the comments I got were great. And some were terrible. I made a couple of edits on my post after reading those comments (some here and some at SSC). One was to correct an error (one that I thought was not material, even to the very narrow argument in that particular paragraph). One was to clarify something that was not an error. (I called meditation "basically a placebo treatment", which should not be construed to mean I think meditation isn't effective for pain management. Just that I have an expansive definition of "placebo." After all, imagine doing an experiment where one group gets "real" meditation and the other gets "placebo" meditation as the treatment...) One thing I didn't care for was how easily people will conclude that you're deliberately lying. One comment, if I'm reading it correctly, implied that I was "lying about" a statistic cited in the Vox paper. Another implied that one of my claims was "dishonest." Is this how the rationalist community points out mistakes, and even disagreements that can't really be called "mistakes"? Mostly not, but it was a little bit grating to get this treatment over immaterial details. To say that somebody is "lying" implies something about their motives, which usually the accuser doesn't know. Anyway, the good comments outweighed the bad ones, and even the bad ones forced me to think harder about my arguments. (Bad commenters sometimes improve your understanding in the same way as a small child who keeps asking "Why?" to each successive answer.)

There were some excellent comments at Slate Star Codex about how people are actually using opioids. Consider it a small, random sample, but it's still illuminating. Considering the examples given (a broken arm, skin scraped to the bone, oral surgery), I'm very glad these people got powerful painkillers. I really hope that Vox does not have the effect on health policy that it wants to have, which would probably deny a few of these acute pain sufferers the relief they seek.

Free Medicine Doesn't Make People Healthier

This is from Free For All? Lessons from the RAND Health Insurance Experiment by Joseph Newhouse. It's not exactly a page-turner. It's more of an eat-your-vegetables kind of book. I've been thumbing through it recently. I am familiar with the conclusions (which I'll share below) because of the classic article Cut Medicine In Half by Robin Hanson. That piece was the lead essay in a Cato Unbound forum. I had thought that maybe Hanson drew some weird contrarian conclusions from the study. Indeed three other health policy wonks disagreed with him (err...without actually disagreeing with him; you'll have to see what they say and how they fail to meaningfully respond to Hanson).  Not contrarian at all, actually. Hanson was pretty much drawing the most straightforward possible conclusion from the RAND study. This slays some political sacred cows, but people should face the information with their eyes wide open. They shouldn't be engaging in casuistry to avoid the obvious. It's fine to speculate that "The effect of free medicine is clinically important, but it's hard to see in small datasets because of 'statistical significance' issues." But people who take such positions should admit that they are speculating beyond a straightforward interpretation of the best data we have on this question.

Here's the relevant part (starting on page 201; emphasis mine):
For the average person there were no substantial benefits from free care (Table 6.6). There are beneficial effects for blood pressure and corrected vision only; ignoring the issue of multiple comparisons, we can reject at the 5 percent level the hypothesis that these two effects arose by chance, but we do not believe the caveat about multiple comparisons to be important in this case. We investigate below the mechanisms by which these differences might have arisen; the results from these further analyses strongly suggest that the results did not occur by chance.
For most health status measures the difference between the means for those enrolled in the free plan and those enrolled in the cost-sharing plan did not differ at conventional levels. Many of these conditions are rather rare, however, raising the possibility that free care might have had an undetected beneficial effect on several of them. To determine whether this was the case we conducted an omnibus test, the results of which make it unlikely that free care had any beneficial effect on several conditions as a group that we failed to detect when we considered the conditions one at a time. 
If the various conditions are independent and if free care were, for example, one standard error better than cost sharing for each measure, then of the 23 psychologic measures in Table 6.6 we would expect to see four measures significantly better on the free plan (at the 5 percent level using a two-tailed test), and none significantly worse. Among the insignificant comparisons, 15 would favor free care and only 4 would favor cost sharing. In fact three measures are significantly better on the free plan and none is significantly worse, but 13 of the 23 measures rather than the predicted 4 favor the cost-sharing plan. Hence it is very unlikely that free care causes one standard error of difference in each measure. If the independence assumption is violated, the violation is probably in the direction of positive dependence, in which case accounting for such dependencies would only strengthen our conclusion. Moreover, one standard error of difference is not a very large difference -- about half of the 95 percent confidence interval shown in the fourth column of Table 6 (equal, for example, to one milligram per deciliter of cholesterol). 
The same qualitative conclusions hold for persons at elevated risk (table 6.7). In this group, those on the free plan had nominally significantly higher hemoglobin but worse hearing in the left ear. Again outcomes on 13 of 23 measures favored cost sharing.

Starting at the top of page 204:
Hypertension and vision. Further examination shows that the improvements for hypertension and far vision are concentrated among those low-income enrollees at elevated risk (Table 6.8). Indeed, there was virtually no difference in diastolic blood pressure readings across the plans for those at elevated risk who were in the upper 40 percent of the income distribution. 
Because the low-income elevated risk group is small (usually between 5 and 10 percent of the original sample depending on the health status measure), the outcome differences for that group between the free and cost-sharing groups have relatively large standard errors. These results might be taken to mean that we missed beneficial effects for the low-income, elevated risk group for certain measures. But although this might be the case for a small number of measures, it is unlikely to be generally true. If we apply the same omnibus test just described to the low- and high-income groups shown in Table 6.8, we would expect that if there were a true one standard error favorable difference for the free plan for each measure, 2 of the 13 comparisons in Table 6.8 would be significantly positive and 2 would be negative, but none would be significantly negative. Of the 9 that would be insignificantly positive at the 5 percent level, 6 would have values of significance between 5 and 20 percent. The data in Table 6.8 show that for the low-income group, none (rather than 2) of the 13 comparisons is significantly positive at the 5 percent level; 4 (rather than 6) are significant at the 20 percent level; and 4 (rather than 2) are negative, one (acne) significantly so. For the high-income group, 7 of the 13 results favor the free-care plan, and the results are even "less significant" than one would expect at random (that is, one would have expected 2 or 3 differences "significant" at the 20 percent level among 13 comparisons, even if there were no true differences, whereas only one comparison was significant at this level).
Sorry, you'll need to get the book to see the actual charts. (I typed this while looking at my copy of the book and double-checked it. I sincerely apologize if I mistyped something, but on a double-check what I type matches what's in my book.) I like this concept of an "omnibus test." Note that the question isn't exactly "What dimensions of health improve when we give people free medicine," but rather a much more modest "Does free medicine improve health at all?" I like this exercise of saying, "What would I expect to see if free medicine had a significant effect on health?", comparing that to the observation, and concluding "What we predicted did not match what we observed." Keep in mind that the people with free care consumed something like 30-40% more medicine, apparently to no effect.
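The book's "expect to see four measures significantly better" prediction can be reproduced with a little normal-distribution arithmetic. This is my own back-of-the-envelope calculation of the same quantity, not a figure from the book:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Logic of the omnibus test: suppose free care truly shifted each of the
# 23 psychologic measures by one standard error in its favor. At the
# two-tailed 5% level (critical z of about 1.96), the chance a single
# measure comes up significantly better is the power of that test:
z_crit = 1.96
shift = 1.0  # hypothesized true effect, in standard errors
power = (1 - norm_cdf(z_crit - shift)) + norm_cdf(-z_crit - shift)
expected_significant = 23 * power
print(round(expected_significant, 1))  # roughly 4, as the book predicts
```

Since only three measures actually came up significantly better, and 13 of 23 point estimates favored cost sharing instead of the predicted handful, the hypothesized one-standard-error benefit is a poor fit to the data, which is exactly the book's conclusion.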

There is much more in the book, all in a similar vein. Giving people free medicine, even at-risk, low-income people, doesn't seem to make them any healthier. If someone wants to take issue because the sample size is too small, I will join them in asking for the RAND study to be redone with a much larger sample size. I won't stand for someone insisting that no data whatsoever, however carefully collected, can ever have policy implications that they don't approve of. That seems to be most of what I get from the popular media. Whenever there is a proposal to change health policy, there is a lot of shrill doom-saying by the proponents of socialized medicine. They speak as if any reductions made to the medical welfare state represent a lethal threat to people in poverty. I get the sense that they don't even realize they're making empirical claims. Well, we have the RAND study, and more recently the Oregon Medicaid Experiment. We have two randomized controlled experiments demonstrating that free medicine just doesn't seem to have health benefits, and we have tons of observational studies coming to the same conclusion.