Saturday, April 29, 2017

Term Life Plus Health Insurance Package: An Insurance Product That Should Exist

Here is my idea for an insurance product that needs to exist, but is precluded by a bad regulatory regime.

Suppose you have a combined term health and life insurance package (possibly with disability coverage thrown into the mix). The term runs in decades, say 20 or 30 years, and is very hard for either party to cancel. Your insurer gets locked into a certain rate based on your current health status, but the customer also can’t simply jump ship every time s/he finds a cheaper rate. This way, your insurer knows it is on the hook for your health expenses for the long term. You aren’t going to leave in a few years when you switch jobs, or move to another state, or simply decide to switch carriers because you can save a few percentage points on premiums. If a 30-year hard-to-cancel contract sounds stiflingly authoritarian to you, consider that 30-year mortgages and 30-year term-life policies are already common (though the latter can be canceled by the insured pretty much at will). As I explain here and here, such policies are perfectly feasible and fundamentally affordable. The “life” portion of the plan covers your family in the event of your untimely death, and the “health” portion covers catastrophic medical expenses. But given this setup, your insurer may voluntarily pay for things beyond what it is minimally contractually obliged to cover, which I’ll explain in a minute.

This setup gets all the incentives right. Your insurer won’t skimp on medical expenses unless it has a good reason to. If a routine blood screening, annual well-check, or referral to a specialist represents a net savings in health costs, the insurer will pay for it. From the insurer’s perspective, it holds a portfolio of future assets and liabilities. If more people stay alive and healthy, they keep paying their premiums. If more people get sick or die, the insurer loses more of the future premium payments and incurs larger future costs (payouts for medical expenses and death benefits). Insurers will look very closely at which kinds of preventive care and treatment are effective. Any preventive care with a positive expected return (representing a net savings) will likely be approved. They are still on the hook for the catastrophic expenses if something goes horribly wrong, mind you. They can’t simply deny your claim for cancer treatment just because it represents a negative investment return for them; they are contractually obligated to pay for these kinds of catastrophic expenses. But they will supplement these contractually mandatory claims with low-cost preventive treatments that they aren’t obligated to pay for but that it’s in their best interest to provide.

You can think of there being three “tiers” of preventive treatment. There is (1) preventive care that pays for itself by preventing future medical expenses. This kind of care is like an “investment good.” Even if you don’t particularly care how healthy you will be in 20 years, you’ll shell out because the return on investment is good. Then there is (2) preventive care that has a positive effect on your future health but doesn’t pay for itself. (It’s a consumption good, but not quite an investment good.) So it’s lower value than (1) but still maybe worth buying. Your insurer (under my scheme) won’t necessarily shell out for these treatments, but it may send you a list of things you should be doing to maintain a healthy lifestyle. It would prefer you to go out and buy (2) on your own, but it’s not worth actually footing the bill for you. Then there is (3) preventive care that has negligible health benefits, or even negative health consequences. I’ve heard the PSA test for prostate cancer listed as an example of (3). Supposedly the test yields many false positives for each true positive. But it’s hard to separate “false” from “true,” so a lot of these men get unnecessary treatments that lead to incontinence or impotence. Many die from complications of the surgery. And this is to treat a cancer that people can live with for a very long time without it causing any problems. I’m not an expert on this, so I won’t hang my entire case on the PSA example. But suffice it to say you could come up with an example of “preventive care” that’s useless. Imagine monthly cancer screenings for a healthy 20-year-old. Or daily, for that matter. There is a line somewhere that separates worthwhile preventive care from worthless. Your insurer will be trying to figure out, at the population level, what goes into categories (1), (2), and (3). It will strongly encourage (1), weakly nudge you toward (2), and ignore or perhaps even try to talk you out of (3). It’s often implied that insurers simply want to deny every claim to save money, but clearly that won’t be the case if some of those claims lead to lower future costs. With this setup, insurers will approve claims because it’s in their own interest to do so.
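To make the insurer’s decision rule concrete, here is a minimal sketch of the expected-value logic that would sort a treatment into tiers (1), (2), and (3). Every number in it is hypothetical, chosen purely for illustration:

```python
def tier(cost, p_averts, cost_averted, improves_health):
    """Classify a preventive treatment from the insurer's point of view."""
    expected_savings = p_averts * cost_averted  # expected future claims avoided
    if expected_savings > cost:
        return 1  # pays for itself: the insurer should foot the bill
    if improves_health:
        return 2  # good for the customer, but the insurer will only nudge
    return 3  # negligible (or negative) benefit: ignore or discourage

# A $50 screening with a 2% chance of averting a $10,000 future claim:
print(tier(cost=50, p_averts=0.02, cost_averted=10_000, improves_health=True))  # 1
# The same screening if it only averts a $1,000 claim:
print(tier(cost=50, p_averts=0.02, cost_averted=1_000, improves_health=True))   # 2
# Monthly cancer screenings for a healthy 20-year-old:
print(tier(cost=200, p_averts=0.0, cost_averted=0, improves_health=False))      # 3
```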

I think there is great potential if this ever gets going. You will have insurers with massive portfolios of 20+ million customers and tons of data on medical procedures and subsequent health outcomes. They will use state-of-the-art data mining to separate good medicine from bad. They will employ econometricians to tease out causation, separating truly causal relationships between treatment and outcome from merely correlational ones. We will learn things about medicine that we don’t currently know. Pharmaceutical firms will have an incentive not just to pass the FDA’s bar for clinical trials, but to prove to insurers that their drugs have a real health benefit. We’ll see a new age of medical experimentation. There will still be academic studies and clinical trials, but conceivably health/life insurers will have better datasets on larger samples. They’ll produce studies that aren’t quite as “controlled” as clinical trials, but that may be more valuable nonetheless because of sheer data volume.

Unfortunately, this kind of product is illegal. Health “insurance” is required by law to cover a lot of low-value, low-expense, routine medicine that the patient should really be paying for out of pocket. An even bigger problem is that health insurance isn’t appropriately structured. People drag their “pre-existing conditions” around with them. The way it should work is that when you get hit with a big, bad diagnosis, you get a big payout calculated to cover the cost of that diagnosis (future cancer treatments, diabetes medicine, etc.). Even if health insurance doesn’t go to 30-year policy terms but renews year to year like auto insurance, the policy that was active the year of your diagnosis should cover all related expenses, just as your auto bodily-injury coverage covers an accident that occurred the year you were covered (regardless of when a future surgery occurs or when the medical bills come due). We need to allow insurers to exclude pre-existing conditions, or otherwise charge the right premium for them or underwrite against them. This should be fine. It would not, as so many people assume, leave a bunch of really sick people without coverage; if health insurance simply paid out the way other forms of real insurance do, these people would be covered. (I explain this point in more detail in my two links above, repeated here.)


I hear clueless pundits all the time, rambling on about how we need to protect people with pre-existing conditions from ravenous insurers who will price-gouge them or exclude them entirely. I want to shake these people by the lapels and say, “It’s your fault we’re stuck in this situation, you fool!” These are invariably the same people who support health insurance as a mandatory employee benefit, support the tax exemption for employers who purchase such insurance, support tons of mandates for health insurance to cover everything, politicize and even gender-bait the coverage of certain gender-specific provisions, and insist loudly that we can’t allow premium increases or coverage exclusions for pre-existing conditions. It’s a bad combination of these items that has led to very high insurance premiums, a stifled private market, and the transferring of liabilities from insurer to insurer. 

Thursday, April 27, 2017

Welcome Ricochet Readers

I’m seeing an explosion of traffic to my blog after my friend Mike posted my recent piece about publication bias in climate science to Ricochet. (Thanks Mike!) Welcome aboard, guys. I hope you enjoy my posts. I appreciate the extra eyes, and I saw plenty of thoughtful comments on the Ricochet post.
 
Some comments were about the blog itself (paraphrasing one complaint: What, no "follow by e-mail"?). I'll try to add these little improvements when I think of them. Anyway, I'm happy for the extra eyes as long as I don't end up attracting the dreaded "Eye of Sauron" (my favorite term for "attracting the collective outrage of the internet").

Saturday, April 22, 2017

Publication Bias In Climate Science

Some recent research I've been doing has led to an interesting experience. I'm always frustrated by the way science is communicated to the public, and this was another example of something that disappointed me.

I was trying to figure out whether there is publication bias in climate science. More specifically, I was looking for a funnel plot for the "climate sensitivity," something that would quickly and graphically show whether there is a bias toward publishing more extreme sensitivity values.

Climate sensitivity is the response of the Earth's average temperature to the concentration of CO2 in the atmosphere. The relationship is logarithmic: a doubling of CO2 will cause an X-degree increase in average temperature, and increasing the temperature by another X degrees would require another doubling, and so on. Obviously there are diminishing returns here. It takes more and more CO2 to keep increasing the Earth's temperature.
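To make the logarithmic relationship concrete, here is a minimal sketch. (The sensitivity of 3.0 degrees per doubling is just a placeholder value, not a claim.)

```python
import math

def warming(c_new, c_old, sensitivity_per_doubling):
    """Temperature change implied by a logarithmic CO2 response."""
    return sensitivity_per_doubling * math.log2(c_new / c_old)

# Each doubling adds the same increment: going from 280 to 560 ppm warms
# the planet as much as going from 560 to 1120 ppm.
print(warming(560, 280, 3.0))   # 3.0
print(warming(1120, 560, 3.0))  # 3.0
```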

If we focus on just the contribution from CO2 and ignore feedbacks, the problem is perfectly tractable and has an answer that can be calculated with paper and pencil. In fact, Arrhenius did so in the 19th century. (He even raved about how beneficial an increase in the Earth's temperature would be, but obviously many modern scientists disagree with his optimism.) A doubling of atmospheric carbon gets you about a 1° Celsius increase in average temperature. The problem is that carbon is only part of the story. That temperature increase puts more water vapor in the atmosphere, and water vapor is itself a very powerful greenhouse gas. The water vapor thus amplifies the contribution from carbon, or so the story goes. This doesn't go on forever in an infinite feedback, but "converges" to some value. There are other feedbacks, too, but my understanding is that water vapor is the dominant amplifier.
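The "converges" claim is just a geometric series at work. If each degree of warming feeds back as an additional f degrees (with f < 1), the total per degree of direct warming is 1 + f + f² + … = 1/(1 − f). Here is a sketch; the feedback fraction of 2/3 is invented so that 1° of direct warming becomes the commonly cited 3°:

```python
def total_warming(direct, feedback_fraction, rounds=60):
    """Sum the feedback loop explicitly: warming begets more warming."""
    total, increment = 0.0, direct
    for _ in range(rounds):
        total += increment
        increment *= feedback_fraction  # each round feeds back a fraction
    return total

# 1 degree of direct CO2 warming with f = 2/3 converges to 3 degrees:
print(total_warming(1.0, 2 / 3))  # ~3.0
print(1.0 / (1 - 2 / 3))          # closed form: 3.0
```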

This is a live debate. Is the true climate sensitivity closer to 1° C per doubling of CO2, or 3° (a common answer), or 6° (an extreme scenario)? This is what I was looking for: a funnel plot of published estimates of the climate sensitivity, which would reveal any publication bias.

I found this paper, titled "Publication Bias in Measuring Climate Sensitivity" by Reckova and Irsova, which appeared to answer my question. (This link should open a pdf of the full paper.)

Figure 2 from their paper shows an idealized funnel plot:


If all the circles are actually represented in the relevant scientific literature, there is no publication bias. But if the white circles are missing, an obvious publication bias is present. The idea here is that lower-precision estimates (those with a higher standard error) produce a bigger spread of estimates. But journal editors, and perhaps the researchers themselves, are only interested in effects above a certain size. (Say, only positive effects are interesting and negative effects are thrown out. Or perhaps only climate sensitivities above 3° per doubling of CO2 will ever see the light of day, while analyses finding smaller values get shoved into a file drawer and never published.) In fact, here is what the plot looked like for 48 estimates from 16 studies:



It looks like there is publication bias. You can tell from the graph that 1) low-precision, low-sensitivity estimates (the lower-left part of the funnel) are missing and 2) the more precise estimates indicate a lower sensitivity. The paper actually builds a statistical model so that you don't have to rely on eyeballing it. The model gives an estimate of the "true" climate sensitivity, correcting for publication bias. From the paper: “After correction for publication bias, the best estimate assumes that the mean climate sensitivity equals 1.6 with a 95% confidence interval (1.246, 1.989).” And this is from a sample with a mean sensitivity of 3.27: “The estimates of climate sensitivity range from 0.7 to 10.4, with an average of 3.27.” So, at least within this sample of the climate literature, the climate sensitivity was being overstated by a factor of two: the corrected sensitivity is half the average of the published estimates (again, from an admittedly small sample).
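If you want to see both the bias and the correction in action, here is a toy end-to-end sketch. All numbers are invented, and I'm not claiming this is the paper's exact model; it simulates a file drawer that suppresses small estimates, then applies a precision-effect meta-regression (the FAT-PET approach that underlies corrections like this one):

```python
import random

import numpy as np

random.seed(7)
TRUE_SENSITIVITY = 1.6  # pretend this is the true effect

# Simulate the file drawer: draw unbiased but noisy estimates around the
# true value, then "publish" only the ones above an interest threshold.
published_est, published_se = [], []
for _ in range(500):
    se = random.uniform(0.2, 2.0)             # study precision varies
    estimate = random.gauss(TRUE_SENSITIVITY, se)
    if estimate > 1.0:                        # small effects go in the drawer
        published_est.append(estimate)
        published_se.append(se)

est = np.array(published_est)
se = np.array(published_se)
print(f"published mean: {est.mean():.2f}")    # inflated above 1.6

# Correct it: regress estimates on standard errors, weighted by precision.
# A nonzero slope signals funnel asymmetry (publication bias); the
# intercept is the implied effect of an infinitely precise study.
X = np.column_stack([np.ones_like(se), se])
w = 1.0 / se                                  # sqrt of the 1/se^2 weights
b, *_ = np.linalg.lstsq(X * w[:, None], est * w, rcond=None)
print(f"corrected intercept: {b[0]:.2f}")     # lands near the true 1.6
```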

I read this and concluded that there was probably a publication bias in the climate literature and it probably overstates the amount of warming that's coming. Then I found another paper titled "No evidence of publication bias in climate change science." You can read the entire thing here.

My first impression here was, "Oh, Jeez, we have dueling studies now." Someone writes a paper with a sound methodology casting doubt on the more extreme warming scenarios. It might even be read as impugning the integrity or disinterestedness of the scientists in this field. Of course someone is going to come up with a "better" study and try to refute it, to show that there isn't any publication bias and that the higher estimates for climate sensitivity are more plausible. But I actually read this second paper in its entirety and I don't think that's what's happening. We don't have dueling studies here. Despite the title, the article actually does find evidence of publication bias, and it largely bolsters the argument of the first paper. Don't take my word for it. Here are a few excerpts from the paper itself:
Before Climategate, reported effect sizes were significantly larger in article abstracts than in the main body of articles, suggesting a systematic bias in how authors are communicating results in scientific articles: Large, significant effects were emphasized where readers are most likely to see them (in abstracts), whereas small or non-significant effects were more often found in the technical results sections where we presume they are less likely to be seen by the majority of readers, especially non-scientists.
 Sounds kind of "biased" to me.
Journals with an impact factor greater than 9 published significantly larger effect sizes than journals with an impact factor of less than 9 (Fig. 3). Regardless of the impact factor, journals reported significantly larger effect sizes in abstracts than in the main body of articles; however, the difference between mean effects in abstracts versus body of articles was greater for journals with higher impact factors.
So more prestigious journals report bigger effect sizes. This is consistent with the other study linked to above, the one claiming there is publication bias.

From the Discussion section of the paper:
Our meta-analysis did not find evidence of small, statistically non-significant results being under-reported in our sample of climate change articles. This result opposes findings by Michaels (2008) and Reckova and Irsova (2015), which both found publication bias in the global climate change literature, albeit with a smaller sample size for their meta-analysis and in other sub-disciplines of climate change science.
I found the framing here to be obnoxious and incredibly misleading. The Michaels paper and the Reckova and Irsova paper (the latter linked above) both found significant publication bias in top journals, and the “No evidence of publication bias” paper found essentially the same thing. In fact, here is the very next part:
Michaels (2008) examined articles from Nature and Science exclusively, and therefore, his results were influenced strongly by the editorial position of these high impact factor journals with respect to reporting climate change issues. We believe that the results presented here have added value because we sampled a broader range of journals, including some with relatively low impact factor, which is probably a better representation of potential biases across the entire field of study. Moreover, several end users and stakeholders of science, including other scientists and public officials, base their research and opinions on a much broader suite of journals than Nature and Science.
So this new paper looking at a larger collection of publications and published estimates confirmed that top journals publish higher effect sizes. It’s almost like they said, “We did a more thorough search in the literature and we found all those missing points on the funnel plot in Reckova and Irsova.” See the effect size plot, which is figure 3 in the paper:


Notice that for the full collection of estimates (the left-most line, marked "N = 1042"), the average estimate is close to the 1.6 estimate from the other paper. Essentially, the first paper said, “We found a bias in top-level, high-visibility journals. We filled in the funnel plot using a statistical model and got a best estimate of 1.6.” And the second paper said, “We found a bias in top-level, high-visibility journals. We filled in the funnel plot by looking at more obscure journals and scouring the contents of the papers more thoroughly, and got a best estimate of 1.6.” The latter paper should have acknowledged that it was coming to a conclusion similar to that of the Reckova and Irsova paper. But if you just read the title and the abstract, you’d be misled into thinking this new “better” study refuted the old one. If you Google the name of the paper to find media reports on it, you will see that some commentators read only the title, or skimmed the contents so shallowly that they never read the papers it comments on.

 Here is more from the Discussion section:
We also discovered a temporal pattern to reporting biases, which appeared to be related to seminal events in the climate change community and may reflect a socio-economic driver in the publication record. First, there was a conspicuous rise in the number of climate change publications in the 2 years following IPCC 2007, which likely reflects the rise in popularity (among public and funding agencies) for this field of research and the increased appetite among journal editors to publish these articles. Concurrent with increased publication rates was an increase in reported effect sizes in abstracts. Perhaps a coincidence, the apparent popularity of climate change articles (i.e., number of published articles and reported effect sizes) plummeted shortly after Climategate, when the world media focused its scrutiny on this field of research, and perhaps, popularity in this field waned (Fig. 1). After Climategate, reported effect sizes also dropped, as did the difference in effects reported in abstracts versus main body of articles. The positive effect we see post IPCC 2007, and the negative effect post Climategate, may illustrate a combined effect of editors’ or referees’ publication choices and researchers’ propensity to submit articles or not.

Remember, this is from a paper titled “No evidence of publication bias in climate change science.” Incredibly misleading. This entire paragraph is about how social influences and specific events have affected what climate journals are willing to publish. 

“What is the true climate sensitivity?” really is the central question of the climate debate. The 3° C figure is frequently claimed by advocates of climate intervention (people pushing a carbon tax, de-industrialization, etc.), but the 1.6° C figure is more plausible if you believe there’s a publication bias at work. The actual concentration of carbon has gone from 280 parts per million in pre-industrial times to roughly 400 parts per million today, and the global average temperature has risen by about 0.8° C. (Maybe it’s actually more than 0.8° C; 2015 and 2016 were record years, and some commentators are extremely touchy about this point. Apologies if I’m missing something important here, but then again any conclusion that depends on two data points is probably not very robust.) If the sensitivity is low, then we can keep emitting carbon and it’s really no big deal. If water vapor significantly amplifies the effect of carbon, then we’ll get more warming per CO2 doubling. There is a related question of “How much warming would it take to be harmful?” To do any kind of cost-benefit analysis on carbon reduction, we’d need to know that, too. But clearly the sensitivity question is central to the climate change issue. If there’s any sort of publication bias, we need to figure out how to correct for it. People who cite individual papers (because they like that particular paper) or rely on raw averages of top journals need to be reminded of the bias and shamed into correcting for it, or at the very least made to acknowledge it.
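As a back-of-envelope check on those numbers: if you naively attribute all of the observed warming to CO2 and treat it as the full equilibrium response (strong assumptions; lags, aerosols, and every other forcing are ignored), the implied sensitivity is a one-liner:

```python
import math

# Naive implied sensitivity from the observed record: assumes the 0.8 C of
# warming is the complete equilibrium response to CO2 alone. Treat it as a
# back-of-envelope number only.
observed_warming = 0.8                              # degrees C
implied = observed_warming / math.log2(400 / 280)   # 280 ppm -> ~400 ppm
print(f"{implied:.1f} degrees C per doubling")      # ~1.6
```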


This is just the beginning of a new literature, I’m sure. There will be new papers that claim to have a “better” methodology, fancier statistics, and a bigger sample size. Or perhaps there will be various fancy methods to re-weight different observations based on…whatever. Or different statistical specifications might shift the best point estimate for the climate sensitivity. (I can imagine a paper justifying a skewed funnel plot because the error is heteroskedastic: “Our regression assumed a non-normal distribution, because for physical reasons the funnel plot is not expected to be symmetric…”) I’m hoping this isn’t the case, but I could easily imagine a world where there are enough knobs to tweak and levers to pull that we’ll just get dueling studies forever. There are enough "researcher degrees of freedom" that everybody can come to their preconceived conclusion while convincing themselves they are doing sound statistics. Nobody will be able to definitively settle this question of publication bias, but each new study will claim to answer the critics of the previous study and prove, once and for all, that publication bias does exist (oops, I mean doesn’t exist). My apologies, but sometimes I’m an epistemic nihilist.

_______________________________________________________________________

It seems weird to me that there are only a few publications on publication bias in the climate sciences. "Publication Bias in Measuring Climate Sensitivity" was published in September 2015, and "No evidence of publication bias in climate change science" was published in February 2017. I remember trying to search for the funnel plot in early 2015 and not finding it. Possibly the September 2015 paper was the first paper ever to publish such a plot for climate sensitivity. If there is a deeper, broader literature on this topic and it comes to a different conclusion, I apologize for an irrelevant post. (Sometimes the literature is out there, but you just don't know the proper cant with which to search it.) But it looks like these two papers are the cutting edge in this particular vein. If more studies come out, I'll try to keep up with them. 

Friday, April 14, 2017

Nobody Is “Forced” To Live Under Capitalism

I saw the phrase “…being forced to live under capitalism...” on Facebook recently. It was part of a meme from one of those click-baity left-wing pages, probably “Being Liberal” or something similar. Possibly a gullible friend had shared it. I immediately thought, Wow, what a whopping non sequitur of a concept.

If we take “capitalism” to mean free markets and free association between consenting adults, then no dear, you aren’t being “forced” in any meaningful sense. Rather, you live in a world where basic human freedoms are respected and you don’t care for the shape that it takes. You dislike some of the features of this world, but reshaping it to meet your approval would require actual force. You perhaps don’t approve of some of the choices and decisions that other adults make. But that’s the flip-side of freedom: other people get to exercise it, too. Freedom is a grand compromise. I cannot dictate the terms of your marriage contract, and you cannot dictate the terms of my labor contract. Your basic freedoms of association with other adults do not suddenly come to an end the moment money changes hands. Supposing you have a basic human right to privacy and freedom of association, you retain those rights when you transact commercially.

Of course the term “capitalism” is loaded. It has many definitions and carries a lot of baggage. Some use the term to mean “crony capitalism”, a system in which the government explicitly grants favors to certain businesses and industries at the expense of everyone else. This is nearly the opposite of what free-market supporters mean when they say the word. So anyone arguing about “capitalism” should specify which sense of the word they mean. If it does mean “crony capitalism”, then indeed I am forced to live under “capitalism” and I object to it, too. I’d rather do away with the special privileges granted to certain players (import quotas, subsidies, implicit insurance via government bailouts, etc.). But a business operating in a truly free market is not being "privileged" in any meaningful sense. Businesses operating under free-market capitalism can only make offers to their customers and potential employees; the customers and employees have the power to unilaterally decline the terms offered. You may dislike some of the terms being offered, but it is bizarre to describe this state of affairs as being "forced to live under capitalism."

Perhaps to some the term does indeed mean “free markets” but it somehow implies an obsession with material wealth or betrays sympathy with businesses and capital owners. Such insinuations about motives and sympathies are beside the point (in addition to being extremely rude). Suppose I offer an argument that minimum wages and other labor “protections” are bad for workers. They restrict worker options and force them into terms that they otherwise wouldn’t choose for themselves, and they fail to transfer income from capital owners to workers. So the argument goes, anyway. Suppose I make an extended, data-rich presentation of this argument replete with historical examples. Does it matter that deep down in my dark heart I secretly carry a torch for the capital owners? Or that I hold some sinister antipathy toward the working man? Do you have to worry that such insidious sympathies have biased my analysis? No, you can check my work. We can talk impersonally and disinterestedly about the merits of policy without implying a wicked motive or perverse sympathies. Now, if I simply asserted “Free markets are for the best. Trust me, I’m some kind of expert!” and that was my entire appeal, you’d be right to point out something questionable about my motives. But if I’ve offered an impersonal argument for my position, you can check it for yourself. Motive-questioning is a bad faith move.

I actually have no idea what the person who shared this was thinking. Maybe s/he just flippantly hit the "share" button without giving it any thought. Maybe the main point was some other part of the quote, and I'm fixating on an irrelevant, throw-away piece that stuck out like a sore thumb. But I am increasingly seeing denunciations of "capitalism" and support for full-on socialism on my Facebook feed and it disturbs me. It's like some people don't realize that the 20th century happened. 

Income Inequality Is a Nonsense Concept

I’m imagining someone comparing me to one of my peers and describing the difference in life outcomes as “income inequality.” This is essentially what is happening when someone discusses inequality as a statistical abstraction. It’s always in a tone of “See this! There are huge discrepancies, and it’s a big mystery why they exist.” I usually don’t share this, but my gut reaction is something like, “You went to the same school at the same time as me. Any divergence between you and me is a result of our different choices. I graduated high school, went to college, picked a STEM major, finished grad school with good grades, and completed a series of grueling industry exams. For whatever reasons, you did something else.”

I respect other adults and I don’t want to second-guess anyone’s decisions. I assume that if someone picks a bullshit major in college or picks an easy career path that doesn’t require much technical knowledge or specialization, they have a good reason. This person is simply picking a different mix of leisure and income than I picked. Or this person chose not to “sacrifice” the best party years of their late teens and early twenties hunkered down studying in pursuit of a real career. Someone with similar options and advantages made a different series of trade-offs.

The “income inequality” framing misses all of this. It implicitly blames the high-earners for the low earnings of everyone else. It strongly implies a zero-sum worldview where the wealth of the wealthy derives from the poverty of the poor. It assumes away all the choices that people make that actually determine their future career path (and thus their annual income). The inequality framing pretends that there is some fixed basket of stuff that gets divided up based on some arbitrary statistical distribution, and that we (“We, as a society…” as so many of these conversations start) can simply change the shape of that distribution by fiat.

I want to say, “Hey, man, I’m sorry your life didn’t turn out the way you wanted. Maybe we could have talked about this stuff back when you switched from a math major to a P.E. major. I didn’t realize I was on the hook for your bad decisions. Had I known at the time, I would have insisted on some changes.” That’s not to say I want to dictate the terms of anyone’s career trajectory. I really don’t. Nor is this to say I don’t want to be on the hook for someone else’s bad luck. I quite willingly put myself on the hook for the bad luck of thousands of other people, and I will effectively pay them a huge sum if they have a crippling injury, house fire, early death, or devastating car accident. I do this through various intermediaries: my health, homeowners, life, and auto insurance policies. And I’m fine with offering some sort of charitable aid to people who have uninsured misfortunes happen to them. What I’m not fine with is being put on the hook for the predictable bad consequences of poor decision making, and then being told that those consequences are my fault. 

All Races Have Two Mammae!

When I was in high school I played Shadowrun with a few friends. It’s a roleplaying game that takes place in a cyberpunk future. You could play as a human, elf, orc, troll, or dwarf. I remember my friends giggling over the Shadowrun rulebook’s descriptions of the different races. Each race had a description of game-relevant stats (+4 strength, +6 body, -2 charisma, etc.), along with various other attributes like average height and weight. One item listed for each race was “2 mammae,” mammae being an obscure term for mammary glands. That is, FASA Corporation (Shadowrun’s creator) saw fit to remind Shadowrun players that each race has two boobs. They did this even though it was a feature common to all the races. I’m imagining a committee meeting at FASA as the player’s guide was being written:
Committee Note-taker: Okay, next race. Elf. Average height 6’1”,  average weight 160 lbs, 32 teeth. Anything else, guys? (a hand shoots up, note-taker emits a long-suffering sigh) Yes, Jenkins? 
Jenkins: Two mammae. 
CN: Dammit, Jenkins! All races have two mammae. 
Jenkins: Not necessarily! 
CN: Look, if we do the “two mammae” thing, the players are going to think FASA is staffed by a bunch of incorrigible boob-fiends. 
Jenkins: I’m just saying, people will be wondering. Like, does a troll just have two human-like boobs, or two long rows of nips like a nursing sow? 
CN: Okay, show of hands on the “2 mammae” thing? (Jenkins’ hand goes up, nobody else’s does). Overruled. (Pulls a sheet of paper out of a manila envelope.) Next race, the… twelve-titted wood nymph? Dammit Jenkins!