Tuesday, October 24, 2017

It's Really Hard to Improve Things

I really love this piece by David Friedman.
Suppose you are designing a race car; further suppose that you are very good at designing race cars, and so get everything right. You face a variety of tradeoffs. A larger engine will increase the car's power to accelerate, it will allow it to better overcome wind resistance—but it will also weigh more and require a larger gas tank, which will increase the car's mass, reducing the gain in acceleration and possibly making the car more likely to burst its tires or skid out on a turn. Similarly with the size and shape of tires, width of the wheel base, and a variety of other features. 
Your car is designed, built, and it and its close imitators are winning races. A critic points out that you obviously have it wrong; the engine should have been bigger. To prove his point, he builds a car that is just like yours save that the engine is half again as large. Testing it on the straightaway, he demonstrates that it indeed has better acceleration than your car. He enters it in a race against your car—and loses.
The piece is not about a race car. It is more generally about how hard it is to improve things. Something you find in the world has probably undergone some sort of optimization process, which would tend to punish deviations from the global optimum. Sure, you can make some marginal improvements here and there if you think very carefully about them. But you are unlikely to improve things by tearing out everything by the roots and starting over. You have to be extremely careful even about asserting you've made a marginal improvement. Are you just measuring one thing? Or are you measuring (and optimizing) across all the dimensions of interest? Perhaps there are some you haven't thought of.

I recently finished reading Uncontrolled by Jim Manzi for the second time. (Good EconTalk interview here.) It's a rewarding book, and it's on the same topic as David Friedman's post: trying to make things better is really hard. You often don't have the information necessary to make improvements. Will it increase your company's profitability to change your name from "Fast Mart" to "Quick Mart"? Maybe you own a thousand stores, half called "Fast Mart" and half "Quick Mart." Can you figure this out by comparing the average profitability of the two groups? Of course not. Maybe you called your inner-city stores "Fast Mart" and your rural stores "Quick Mart," so they aren't really comparable. What if you run a regression, controlling for all the relevant features that might drive profitability? This might be slightly more reliable than a pure comparison of averages, but you still don't know if there are hidden variables, things you didn't think of, or things you perhaps can't even measure. You have to do an experiment. You have to randomly change the names of a few stores, enough that you get a true statistical signal of improved profitability that can't be attributed to noise. (Manzi actually uses the Quick vs. Fast Mart example in his book.)
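To make the store-renaming point concrete, here's a minimal simulation sketch. The store counts, profit numbers, and confounding structure are all invented for illustration. When the name is confounded with location, a naive comparison of group averages shows a large spurious "name effect"; randomly renaming half the stores balances location across the groups and recovers the true effect, which here is zero by construction.

```python
import random
import statistics

random.seed(0)

# Hypothetical setup: 1,000 stores. "Fast Mart" stores sit in cities,
# "Quick Mart" stores in rural areas, and location -- not the name --
# is what drives profitability.
stores = []
for i in range(1000):
    urban = i < 500
    name = "Fast Mart" if urban else "Quick Mart"
    profit = (120 if urban else 80) + random.gauss(0, 10)
    stores.append((name, urban, profit))

# Naive comparison of averages: the name looks hugely important.
fast = [p for name, urban, p in stores if name == "Fast Mart"]
quick = [p for name, urban, p in stores if name == "Quick Mart"]
naive_diff = statistics.mean(fast) - statistics.mean(quick)  # ~40, pure confounding

# Randomized experiment: rename a random half of the stores. Renaming
# doesn't change profit (no true effect), and randomization balances
# urban and rural stores across the two groups.
renamed = set(random.sample(range(1000), 500))
treated = [p for i, (_, _, p) in enumerate(stores) if i in renamed]
control = [p for i, (_, _, p) in enumerate(stores) if i not in renamed]
experiment_diff = statistics.mean(treated) - statistics.mean(control)  # near zero

print(round(naive_diff, 1), round(experiment_diff, 1))
```

The naive comparison attributes the entire urban/rural profit gap to the store name; randomization is what breaks that link.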

Experimentation is the only way to truly establish causality. And it is generally a low-yield process. Many modern companies run dozens, hundreds, even thousands of experiments. It's called A/B testing: one randomly selected group of consumers gets treatment A, the other gets B. Is there a difference? If so, you can be fairly confident that the thing you were testing caused the difference. It can be something simple, like changing the color of a webpage or making a follow-up customer service call. All of this is done to eke out tiny marginal improvements in one small metric of success. Do enough of these marginal improvements on a routine basis and you might just barely stay ahead of your competitors. But rarely are you going to make one grand decision that will double your profits. (A few rare exceptions come to mind. I think the iPhone really was a product of grand design, not a bunch of tweaks at the margin. These game-changers are rare beasts, at any rate.)
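For readers curious about the arithmetic behind a simple A/B test, here is a sketch using a standard two-proportion z-test; the function and the visitor/conversion numbers are my own invented example, not from the post. It shows why the process is low-yield: even a 15% relative lift in conversion rate can fail to clear the conventional p < 0.05 bar with a few thousand visitors per variant.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    conv_a/conv_b are conversion counts, n_a/n_b are sample sizes.
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))
    return z, p_value

# Hypothetical experiment: 5,000 visitors per variant.
# A converts at 4.0% (200), B at 4.6% (230) -- a 15% relative lift.
z, p = two_proportion_z_test(200, 5000, 230, 5000)
print(round(z, 2), round(p, 3))  # p stays well above 0.05: not significant
```

Detecting genuinely tiny effects reliably takes far larger samples still, which is why companies run so many experiments for so few wins.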

Now take government. Government does not do this kind of careful tweaking at the margins. In fact, government programs barely respond to feedback at all. They are seldom rolled out in a careful experimental "test shot" manner, such that their effectiveness can be assessed and the program adjusted or ended. Even after blatant, severe policy failures, the initial supporters almost never admit to mistakes. The politicians who implemented those failed policies almost never call for repeal. And yet the sweeping changes implemented by government are huge in comparison to the marginal tweaks made by private businesses. They tend to apply not just to the customers of a single company, but to the entire nation (or state or city) all at once. I find this unbelievably irresponsible. Government should start incredibly small and make incremental adjustments, always being prepared to declare failure and abort the experiment. Instead we get, "Let's overhaul an entire industry from the ground up!" Or, "Let's ban this entire class of substances!" Or, "Let's spend a trillion dollars to 'stimulate the economy,' based on widely disputed economic models!" We should expect from the start that this will go very wrong. At least governments face budget constraints. The generosity of taxpayers is limited, thank goodness. (I know, "generosity" is the wrong word here.) And you can't, after all, spend all of the government's revenue on every single social problem. Still, programs grow to absurd proportions. They persist long after their failure has been demonstrated beyond a reasonable doubt.

I can think of a few exceptions. The failure of alcohol prohibition in the 1920s was so great and so obvious to the casual observer that even most supporters did an about-face. (Then again, drug prohibition has persisted for a century despite having all the same horrific consequences as alcohol prohibition.) Many people who endured Nixon's peacetime wage and price controls could see the damage those policies caused in real time. Then again, many failed to learn that lesson, and I suspect this generation might "forget" those historical lessons and have to re-learn them from scratch. Government failures sometimes get corrected, but it's rare and it takes far too long.

Think for a moment about making even small tweaks to the lives of individuals. Say you have the power to change things any time you think you know better. You look at someone's shopping cart and swap one item for another. You look at someone's daily commute and say, "Don't take that route, take this one." You look at how someone spends their free time and say, "Stop playing those video games and read this book instead." What are the odds that you can actually improve their life? You'd have to have tons of information about that person's preferences. Unlikely, considering the best information about their preferences is "the stuff they are already doing!" You'd also need tons of information about the possible alternatives to what that person is actually choosing. And you'd have to know how one decision interacts with others. Maybe the thing you swapped out of the shopping cart was an ingredient for a casserole, and that's not even obvious from looking at the cart because the shopper already has everything else at home. Maybe your proposed commute route isn't as scenic. Maybe the commuter drives by the store on his way home from work because doing so forces him to remember that he needs to buy groceries; he has figured out that if he takes any other route, he'll forget. Your new route forces him to spend more time on the road, because he now has to make special trips. Maybe the gamer has learned that he just hates reading books, but he listens to audiobooks during the day and voraciously devours free content on the internet. Maybe you are the philistine, both for dismissing his perfectly enriching hobby and for dismissing his other forms of information consumption.

I think this is hard even when you have a lot of information about the person whose life you are adjusting. Now think about how government works. A blunt-force instrument is applied to everyone at once. This is likely to go wrong. Be it a ban or a mandate, a tax or a subsidy, any single "tweak" applied to the entire population is likely to be ill-suited for a large proportion of those affected. When politicians see market institutions they don't like (say, payday lending) and seek to eliminate them, they are likely to get this wrong. When politicians see a market price they don't like and insist it must be much lower (pharmaceuticals) or much higher (wages of hourly workers), they are likely to get this wrong. There are reasons for that much-reviled industry, which seems to attract customers despite the objections of third parties. And there are reasons for prices to be that high and wages to be that low. These things come out of an optimization process that nobody quite understands. It's possible that a tweak here or there can cause a massive improvement, but it's unlikely. We should have a strong prior against "simple tweak causing massive improvement that for some reason just hasn't happened yet." A massive push is less likely still to be an improvement.

Arnold Kling captures the idea well in this podcast from a few years back. Here is my attempt to make Kling's point in my own words:
Imagine the CEO of a major company. Say it's an auto manufacturer, and the CEO announces, "Now we're going to branch out and acquire some other businesses. We're going into shoe manufacturing, paper, computers, textiles..." Obviously this would be a big mess. It would be a hugely wasteful venture from the point of view of society. A lot of stakeholders would lose out: shareholders would lose value, employees would lose their jobs, and soon-to-be-disappointed customers would perhaps experiment with the questionable new products of this business of ever-expanding scope. At least in this case, though, the discipline of the market stops the company from growing out of control. Budgetary constraints, and discipline from shareholders and customers who ultimately say "No," will put a stop to it. But take away those constraints and this wasteful monstrosity grows without bounds. That's my model of government.
(Once again, NOT a quote. That's my own paraphrase of Kling's point.) This is a deep point about scope and scale. It may be desirable to grow in either dimension, but always carefully, always after doing your due diligence, and always with a release valve (or is it an escape hatch?) for when things go wrong.

___________________________________________________________

This is not a general argument for the status quo. Many "status quo" things are bad government policies that we've been stuck with. If we can identify government programs that failed to have any measurable effect on the targeted social problems, we should end those, some immediately, some by a scheduled phase-out. Uncontrolled makes the case for "status quo with gentle phase-outs for stuff that probably isn't working." I'm far more cavalier about saying, "Let's end government programs if they are morally outrageous, or if simple economic analysis implies they are of dubious value." I take the position that government should start close to zero and slowly adjust upward, rather than that it should start where it is and slowly adjust downward. But I'd gladly take the latter if it becomes an option.
