Sunday, October 1, 2017

Free Medicine Doesn't Make People Healthier

This is from Free For All? Lessons from the RAND Health Insurance Experiment by Joseph Newhouse. It's not exactly a page-turner; it's more of an eat-your-vegetables kind of book. I've been thumbing through it recently. I was already familiar with the conclusions (which I'll share below) because of the classic article Cut Medicine In Half by Robin Hanson, the lead essay in a Cato Unbound forum. I had thought that maybe Hanson drew some weird contrarian conclusions from the study. Indeed, three other health policy wonks disagreed with him (err...without actually disagreeing with him; you'll have to read their replies to see how they fail to meaningfully respond to Hanson). But his conclusion wasn't contrarian at all. Hanson was pretty much drawing the most straightforward possible conclusion from the RAND study. This slays some political sacred cows, but people should face the information with their eyes wide open rather than engaging in casuistry to avoid the obvious. It's fine to speculate that the effect of free medicine is clinically important but hard to see in small datasets because of "statistical significance" issues. But people who take that position should admit they are speculating beyond a straightforward interpretation of the best data we have on this question.

 Here's the relevant part (starting on page 201; emphasis mine):
For the average person there were no substantial benefits from free care (Table 6.6). There are beneficial effects for blood pressure and corrected vision only; ignoring the issue of multiple comparisons, we can reject at the 5 percent level the hypothesis that these two effects arose by chance, but we do not believe the caveat about multiple comparisons to be important in this case. We investigate below the mechanisms by which these differences might have arisen; the results from these further analyses strongly suggest that the results did not occur by chance.
For most health status measures the difference between the means for those enrolled in the free plan and those enrolled in the cost-sharing plan did not differ at conventional levels. Many of these conditions are rather rare, however, raising the possibility that free care might have had an undetected beneficial effect on several of them. To determine whether this was the case we conducted an omnibus test, the results of which make it unlikely that free care had any beneficial effect on several conditions as a group that we failed to detect when we considered the conditions one at a time. 
If the various conditions are independent and if free care were, for example, one standard error better than cost sharing for each measure, then of the 23 psychologic measures in Table 6.6 we would expect to see four measures significantly better on the free plan (at the 5 percent level using a two-tailed test), and none significantly worse. Among the insignificant comparisons, 15 would favor free care and only 4 would favor cost sharing. In fact three measures are significantly better on the free plan and none is significantly worse, but 13 of the 23 measures rather than the predicted 4 favor the cost-sharing plan. Hence it is very unlikely that free care causes one standard error of difference in each measure. If the independence assumption is violated, the violation is probably in the direction of positive dependence, in which case accounting for such dependencies would only strengthen our conclusion. Moreover, one standard error of difference is not a very large difference -- about half of the 95 percent confidence interval shown in the fourth column of Table 6 (equal, for example, to one milligram per deciliter of cholesterol). 
The same qualitative conclusions hold for persons at elevated risk (table 6.7). In this group, those on the free plan had nominally significantly higher hemoglobin but worse hearing in the left ear. Again outcomes on 13 of 23 measures favored cost sharing.
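To see where those expected counts come from, here is a quick back-of-the-envelope sketch in Python. This is my own illustration, not Newhouse's actual calculation: I'm assuming each measure's free-minus-cost-sharing difference, divided by its standard error, is roughly normal with mean equal to the true effect (one standard error in favor of free care), and that the 23 measures are independent.

```python
# Expected breakdown of n independent two-tailed comparisons if free care
# were truly `shift` standard errors better on every measure.
from scipy.stats import norm

def expected_counts(n_measures, shift=1.0, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)                              # ~1.96
    p_sig_better   = 1 - norm.cdf(z - shift)                 # significant, favors free care
    p_sig_worse    = norm.cdf(-z - shift)                    # significant, favors cost sharing
    p_insig_better = norm.cdf(z - shift) - norm.cdf(-shift)  # insignificant, favors free care
    p_insig_worse  = norm.cdf(-shift) - norm.cdf(-z - shift) # insignificant, favors cost sharing
    return {
        "significantly better on free plan":    n_measures * p_sig_better,
        "significantly worse on free plan":     n_measures * p_sig_worse,
        "insignificant, favoring free care":    n_measures * p_insig_better,
        "insignificant, favoring cost sharing": n_measures * p_insig_worse,
    }

for label, count in expected_counts(23).items():
    print(f"{label}: {count:.1f}")
# significantly better on free plan: 3.9       (book: "four measures significantly better")
# significantly worse on free plan: 0.0        (book: "none significantly worse")
# insignificant, favoring free care: 15.5      (book: "15 would favor free care")
# insignificant, favoring cost sharing: 3.6    (book: "only 4 would favor cost sharing")
```

Running the same function with 13 comparisons (expected_counts(13)) gives roughly 2 significantly positive, essentially none significantly negative, and about 2 negative point estimates, which is close to the expectations Newhouse cites for Table 6.8 in the passage quoted further down.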

Starting at the top of page 204:
Hypertension and vision. Further examination shows that the improvements for hypertension and far vision are concentrated among those low-income enrollees at elevated risk (Table 6.8). Indeed, there was virtually no difference in diastolic blood pressure readings across the plans for those at elevated risk who were in the upper 40 percent of the income distribution. 
Because the low-income elevated risk group is small (usually between 5 and 10 percent of the original sample depending on the health status measure), the outcome differences for that group between the free and cost-sharing groups have relatively large standard errors. These results might be taken to mean that we missed beneficial effects for the low-income, elevated risk group for certain measures. But although this might be the case for a small number of measures, it is unlikely to be generally true. If we apply the same omnibus test just described to the low- and high-income groups shown in Table 6.8, we would expect that if there were a true one standard error favorable difference for the free plan for each measure, 2 of the 13 comparisons in Table 6.8 would be significantly positive and 2 would be negative, but none would be significantly negative. Of the 9 that would be insignificantly positive at the 5 percent level, 6 would have values of significance between 5 and 20 percent. The data in Table 6.8 show that for the low-income group, none (rather than 2) of the 13 comparisons is significantly positive at the 5 percent level; 4 (rather than 6) are significant at the 20 percent level; and 4 (rather than 2) are negative, one (acne) significantly so. For the high-income group, 7 of the 13 results favor the free-care plan, and the results are even "less significant" than one would expect at random (that is, one would have expected 2 or 3 differences "significant" at the 20 percent level among 13 comparisons, even if there were no true differences, whereas only one comparison was significant at this level).
Sorry, you'll need to get the book to see the actual tables. (I typed these passages while looking at my copy of the book and double-checked them; I sincerely apologize if I mistyped something, but as far as I can tell they match what's printed.) I like this concept of an "omnibus test." Note that the question isn't exactly "Which dimensions of health improve when we give people free medicine?" but rather the much more modest "Does free medicine improve health at all?" I like this exercise of saying, "What would I expect to see if free medicine had a significant effect on health?", comparing that to the observation, and concluding that what we predicted did not match what we observed. Keep in mind that the people with free care consumed something like 30-40% more medicine, apparently to no effect.
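Here is the same kind of exercise applied to the headline observation that 13 of the 23 measures favored cost sharing. Again, this is my own sketch rather than anything in the book, and it treats the measures as independent (Newhouse notes that positive dependence would only strengthen the conclusion):

```python
# How surprising is "13 of 23 point estimates favor cost sharing"?
from scipy.stats import binom, norm

n_measures, favored_cost_sharing = 23, 13

# If free care were truly one standard error better on every measure, the
# chance that any single point estimate still comes out favoring cost sharing:
p_under_benefit = norm.cdf(-1.0)   # ~0.16

# Probability of seeing 13 or more of the 23 estimates favor cost sharing:
print(binom.sf(favored_cost_sharing - 1, n_measures, p_under_benefit))  # ~1e-05
print(binom.sf(favored_cost_sharing - 1, n_measures, 0.5))              # ~0.34 if free care does nothing
```

The predicted split under a genuine one-standard-error benefit is wildly inconsistent with what was actually observed, while the no-effect hypothesis fits it comfortably.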

There is much more in the book, all in a similar vein. Giving people free medicine, even at-risk, low-income people, doesn't seem to make them any healthier. If someone wants to take issue because the sample size was too small, I will join them in asking for the RAND study to be redone with a much larger sample. What I won't stand for is someone insisting that no data whatsoever, however carefully collected, can ever have policy implications they don't approve of. That seems to be most of what I get from the popular media. Whenever there is a proposal to change health policy, there is a lot of shrill doom-saying by the proponents of socialized medicine. They speak as if any reduction in the medical welfare state represents a lethal threat to people in poverty. I get the sense that they don't even realize they're making empirical claims. Well, we have the RAND study and, more recently, the Oregon Medicaid Experiment. We have two randomized controlled experiments in which free medicine just doesn't seem to produce health benefits, and we have tons of observational studies coming to the same conclusion.
