Monday, April 26, 2021

A Good Piece on "Long Covid"

Here is an excellent piece on "long covid" by Adam Gaffney, who describes himself as a pulmonary and critical care physician. He is remarking on the phenomenon of long-haul outcomes of a prior covid infection. The whole thing is worth reading. 

I've put down my thoughts on "long covid" repeatedly on this blog. I've said that this seems like the phenomenon often seen in policy advocacy. There is a technique for inflating the importance of a problem. It goes something like this. An advocate broadly defines the problem to include even minor instances of it, such that one gets the largest, scariest possible total. Then s/he offers the most extreme cases as examples of the problem being quantified (rather than offering, say, a typical example, or a random, representative sampling). I think that's what's going on with claims that long covid is a big deal. Yes, there are individuals with tragic long-haul symptoms. Scarring of the lungs, damage to the heart, and so on. We should supplement data on "death counts" with this information and provide useful context about how well survivors fare. But these severe outcomes just aren't typical. Read the piece. "Long haul" could simply mean someone is feeling "brain fog" for weeks or months after a covid infection. (As far as I know, I haven't had covid. But I've certainly felt "brain fog" in the past 13 months. Could this have anything to do with my regular working life being rearranged?)

Another point of caution is that some of these "long haul" health outcomes might have nothing to do with the prior covid infection. It's really difficult to assign a cause to something, whether it's a society-wide problem like rising crime rates or a personal one like chronic health issues. I've made the analogy before to doing an MRI for back pain. Often the doctor will look at such an image and find some insult, like a bulging disk, and use that to "explain" why the pain is occurring. But doctors find a comparable number of such insults on scans of completely normal people. It's weird to pick out some detail that's in the background of everyone's life and say, "This is the cause of your problem," given that the same condition fails to cause a problem for most individuals. Presumably some of these problems have something to do with people's lives being upended, their careers and futures racked with uncertainty. It's going to be difficult to tease out whether long-term health outcomes are caused by covid itself or by the intense social isolation and traumatic shift in people's daily routines. (Anecdotally, I've been hearing about people taking up bad habits this past year. "The covid-15," anyone? I suspect we'll see some of this show up in official statistics.)

This seems really important:

First, a cause-and-effect relationship is typically unestablished in these articles.  The Times article contending that mild and resolved COVID-19 infections can lead to extreme psychosis months later left out some important context.  According to one international study, the incidence of psychotic disorders is around 27 per 100,000 persons per year, which would suggest that in the US, there are tens of thousands of new diagnoses of psychosis every year.  In 2020, a solid proportion of those new diagnoses will have occurred in individuals with a prior coronavirus infection.  Obviously, although the temporal link will no doubt feel powerfully suggestive to patients and their doctors, this does not establish causality.

Another reason to question the causal link between the virus and some “Long COVID” symptoms stems from the fact that some, and perhaps many labelled with long COVID appear to never have been infected with the SARS-CoV-2 virus.  For instance, in his August Atlantic article, Yong cites a survey of COVID “long-haulers” that found that some two-thirds of these individuals had negative coronavirus antibody tests, which are blood tests that reveal prior infection.  Meanwhile, the aforementioned study published on a pre-print server, organized by a group of Long COVID patients named Body Politic that recruited participants from online long COVID support groups, similarly found that some two-thirds of the long-hauler study participants who had undergone serological testing reported negative results.

[Read the piece. He goes on to acknowledge that serology can be negative for people who in fact were infected, but argues that it's implausible that all of these "long-haulers" who tested negative actually had covid.]
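
The back-of-the-envelope arithmetic in the first quoted paragraph is easy to sanity-check. Here is a minimal sketch, assuming a US population of roughly 330 million (both inputs are rough estimates, not precise figures):

```python
# Rough check of the quoted incidence arithmetic.
# Assumptions: ~27 new psychosis diagnoses per 100,000 persons per year
# (the study figure quoted above) and a US population of ~330 million.
incidence_per_100k = 27
us_population = 330_000_000

new_diagnoses = incidence_per_100k * us_population / 100_000
print(f"Expected new psychosis diagnoses per year: {new_diagnoses:,.0f}")
# -> 89,100. "Tens of thousands," just as the quoted passage says, so some
# of those diagnoses will inevitably follow a prior covid infection by
# sheer coincidence.
```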

This issue is an epistemic nightmare for me. For all I know, the "long covid" alarmists are absolutely correct. But I can't trust anything they're saying. The public health establishment has not been an honest broker of useful information (and that observation predates 2020). It's been marshaling whatever arguments and "evidence" it can find in favor of extreme caution and official government lockdowns. Should I treat "long covid" as a serious concern? Or should I treat it like so many strands of half-cooked spaghetti that have been flung in my face over the past year? Is this narrative being picked up and repeated because of its inherent plausibility? Or is there a media-government complex that creates a demand for this kind of terror-porn? As Bret Weinstein likes to say, our sense-making apparatus is broken. The tragedy is that we really do need useful and accurate information to navigate a public health emergency like the present one.

Wednesday, April 21, 2021

Inexcusable Ignorance About Covid-19

Here is a recent post by Tyler Cowen in which he quotes a comment, presumably with his endorsement given the context. Don Boudreaux, who like Cowen is an economics professor at George Mason University, has some disagreements with his colleague about how best to approach the pandemic. You can probably get the thrust of their disagreement from Boudreaux's recent post here. Specifically, Boudreaux is baffled (as am I) by commenters like Cowen who downplay the relevance of the age-mortality curve for covid-19. There is something like a three orders of magnitude difference in mortality for the youngest versus oldest cohorts. (Almost four orders of magnitude, according to this page from the CDC.) It would be shocking if this fact had no relevance whatsoever for advising which institutions to shut down, or advising individuals on what kinds of risks they should take. But Cowen plays this down like it's not even a thing.

Boudreaux is responding to a recent EconTalk episode in which Cowen was the guest. I listened to the same podcast and was likewise scratching my head at Cowen's comments. I wanted to respond specifically to some of the remarks made in the post linked above (first link):

It is simply not a tenable policy to oppose pandemic lockdowns on the premise that COVID-19 only negatively affects a certain portion of the population. First, the fact that COVID-19 disproportionately killed the elderly was not something that was readily apparent right out of the box, when the virus was spreading rapidly. Hindsight is 20-20.

The first sentence is a raw assertion, not really justified by anything that follows. It was indeed apparent immediately that this virus had a disproportionate effect on the elderly, and that it left children almost untouched. The Diamond Princess cruise ship gave the world the closest thing possible to a controlled experiment. Some very good information on the age-mortality curve came out of that episode. Here is a link I posted to my Facebook page on March 14, 2020, right around the time that schools were closing and everything was shutting down. From that piece:

Of the 416 children aged 0 to 9 who contracted COVID-19, precisely zero died. This is unusual for most infectious diseases, but not for coronaviruses; the SARS coronavirus outbreak also had minimal impact on children. For patients aged 10 to 39, the case-fatality rate is 0.2 percent. The case-fatality rate doubles for people in their 40s, then triples again for people in their 50s, and nearly triples yet again for people in their 60s. A person who contracts COVID-19 in their 70s has an 8 percent chance of dying, and a person in their 80s a nearly 15 percent chance of dying.

So, no, this isn't a case of "hindsight is 20/20". We knew very early on that children were basically not at risk, and that young people up to about 40 or so were at no more risk than from other seasonal viruses. At any rate, there's no excuse for someone not knowing that. This calls for a directed approach to risk mitigation, not society-wide lockdowns (voluntary or involuntary). Closing schools outright was a mistake, and it was knowable at the time that it was a mistake. (Certainly, children with at-risk adults in the home should have had the option of doing their school work remotely. I'll even say that anyone who didn't feel comfortable sending their kids to school for any reason, good or bad, should have had the same option. That's a very different proposition from saying everyone must do school remotely. Come to think of it, I've seen Tyler endorse the idea that schools are basically safe and should be reopened. How can one take such a position without acknowledging the age gradient?)
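
To make the steepness of that gradient concrete, here is a minimal sketch using the case-fatality rates quoted above. (These are rough, early, Diamond Princess-era estimates; the rates for the 50s and 60s cohorts are back-solved from the quoted multipliers.)

```python
# Early case-fatality rate (CFR) estimates by age, from the piece quoted above.
cfr_by_age = {
    "0-9":   0.0,      # zero deaths among 416 cases
    "10-39": 0.002,
    "40-49": 0.004,    # "doubles"
    "50-59": 0.013,    # "triples again" (approximate)
    "60-69": 0.036,    # "nearly triples yet again" (approximate)
    "70-79": 0.08,
    "80+":   0.15,
}

baseline = cfr_by_age["10-39"]
for age, cfr in cfr_by_age.items():
    print(f"{age:>6}: CFR {cfr:6.1%} ({cfr / baseline:5.1f}x the 10-39 rate)")
# The 80+ rate is ~75x the 10-39 rate even within this coarse table, and
# the CDC's finer age bands (young children vs. 85+) stretch the gap toward
# three or four orders of magnitude. An age-blind policy ignores all of
# this structure.
```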

Back to the comment that Tyler re-quoted:

Second, focusing solely on mortality is short-sighted given that approximately one-third of all people who get over COVID-19 suffer “long haul” symptoms that persist for months and may even be permanent in some. We cannot simply claim that the non-elderly have no reason to fear COVID-19.

I feel like I've talked this point to death. I wish they would be more precise about the harm of "long haul" symptoms. Do one third of survivors have permanent severe scarring of the lungs? Does having a persistent cough for a month that then goes away count someone as a "long hauler"? If it's the latter, it's really not so horrifying, probably not "become a total shut-in" worthy. People experience long-haul symptoms from seasonal flus and colds, too. Ever gotten a sinus infection or persistent cough after a bad cold? I have. It certainly sucks, but it's not "turn the world upside-down to avoid" level badness. I feel like people who are making this point are combining common but minor after-effects with severe but rare after-effects to get a scary-sounding total. I've spoken to a couple of friends, both about 50, who had covid and had some long-haul symptoms. One had a cough that took two months to clear up; the other said he'd been free of asthma medicine for ten years but now has to take it again. Those are pretty serious after-effects, and I would personally take precautions to avoid them. But I just don't see those harms as warranting the extreme measures we're taking.

The commenter next tries a war analogy:

So far, COVID-19 has killed more Americans than we lost in World War II, and it took the war five years to do what the virus did in one year. Even though the majority of the deaths were 65+, these are staggering numbers. Losing well over 100,000 people under the age of 65 in one year alone is nothing to sneeze at, and that’s with lock-downs and other harsh measures being taken. A “let them live their lives” approach would doubtlessly have escalated those numbers greatly.

I've always found this to be a pointless exercise: comparing the death total from some kind of social problem or public health crisis to the death toll from a major war. It's not even a remotely useful comparison. Cancer kills about 600k people a year. Should we have a society-wide mobilization of resources to fight cancer? Probably not, but the answer depends on how responsive the problem is to our proposed policy "fixes". It's such a confused comparison, and yet I see it all the time. Deaths from disease are to a large degree unavoidable and unresponsive to public policy. Deaths from war are, in some sense, a price that a society (or its government anyway) has decided to pay to avoid some greater evil or to stop a looming threat. (Of course wars are often terrible blunders, but WWII is probably the best historical candidate for "involving ourselves in war to prevent an even greater number of deaths.") Death totals from disease and death totals from war just aren't comparable, and there are no sensible policy implications that follow from noticing that this number is bigger than that number.

The line about "losing over 100,000 people under the age of 65" misses some important nuance. I don't understand where this cutoff of age 65 comes from. When virus "optimists" like myself mention the age gradient, virus "pessimists" start talking about numbers (or worse, individual cases) of people below age 65 dying or having serious complications from the virus. There is an age gradient, not a cutoff. People age 55 are at a greater risk than people age 45, who are at a greater risk than people age 35, and so on. People who talk about the age gradient as having policy implications (and how could it not?) are implicitly acknowledging the deaths at all ages. There is simply nothing special about the number 65. Perhaps more importantly, this glosses over the issue of comorbidities. Yes, there are younger people who die of covid-19. The vast majority of them have some kind of pre-existing condition that makes them vulnerable. If there are identifiable conditions that make us many times more likely to die from covid-19, that probably has policy implications with respect to "focused protection." (Again, how could it not?) 

I put this all down a couple of weeks ago when I was feeling annoyed with Tyler's flippant remarks, but then Don Boudreaux and Dan Klein ably responded. I hope to retire from this subject. Death counts from covid are falling and the vaccine rollout appears to be a huge success. Hopefully it will be a non-issue soon. But the matter of future policy implications looms large. "Who was more right?" is an important question, not merely for ego-stroking and bragging rights. Some of these issues really do need to be settled, because they will come up again the next time there is a major pandemic, or even the prospect of one that fails to materialize. 

Wednesday, April 7, 2021

Expert Opinion Is Not Science

I have been extremely disappointed by the way "science" and "expertise" have been brandished in the public discourse, and this past year has provided many atrocious examples of these concepts being abused. The implication is always that "science" is an infallible, objective approach to learning the truth. Not just any truth, but the immutable, undeniable, irresistible truth. One must merely "do the science" and such truth emerges fully formed as the output of the truth-assembly algorithm. Often this naive approach to thinking about truth comes from people who would feel out of place, even lost, in an actual science class. In other cases, it is indulged by people with the highest levels of education, possibly abusing their credentials to pretend their opinion carries more weight than it does. (I am connected through social media to many Ph.D.s whom I know personally from my grad school days. I'd say many of them appreciate the finer points of philosophy of science, but it's shameful how many of them are content to play the role of the sneering professor.) There are many important questions that need to be answered in order to combat the pandemic. How effective are face masks at preventing the spread of disease? What are the benefits and risks involved with available vaccines? Did the virus emerge from one of the labs in Wuhan that was studying coronaviruses in bats, or did it emerge elsewhere? It simply will not do to declare an answer to such questions and call it "science" or "fact." And it is completely inadequate to line up some experts to give the same answers to these questions and call that "science." (As if these experts are doing "the science" in the background, which is impenetrable to outside observers, and just delivering the punchline.) Answers to such questions can be known with greater or lesser confidence, and there exists a right answer in some cosmic sense, even if it's unknowable. But such answers are ultimately the opinions of human beings and should be understood as such. I think it's shameful the way that tech platforms are declaring themselves the arbiters of truth, taking down this or that video, selectively "fact checking" content, and tagging borderline content with warnings and qualifiers. The denizens of social media are no better; I see people at all levels of education on my FB and Twitter feeds speaking as though there is this shortcut to the truth. It's more about narrative control than it is about truth.

If people are treating science as if it's literally like a high school science class, where you memorize facts and regurgitate answers to various test questions, then this is a huge mistake. Facts are certainly needed. You need to memorize the various entities and steps involved in the Krebs cycle if you want to understand what's happening. To do chemistry, you need to memorize the number of bonds formed by common elements. And the theory of evolution by natural selection would be pretty uninteresting if you didn't know some examples of related species emerging from common ancestors, or if you weren't able to discuss the fitness advantage of a particular animal behavior or appendage. But the facts themselves are inert. They don't do anything except in light of some kind of theory. Saying "Proboscis monkeys have huge noses, isn't that cool!" isn't really doing science. Nor is it scientific to pronounce on the status of Pluto as a planet. That is the mere memorization of facts. (And the Pluto example is a mere labeling question, hardly even a "fact," certainly not an important truth of any kind.) An expert will know all of the relevant facts. They will be able to tell you the mass and orbital characteristics of Pluto. They might be able to tell you the mean and standard deviation of proboscis monkey noses, the sexual dimorphism, the differential success in mating of larger-nosed individuals compared to less well-endowed males. It's generally fine to take an expert's command of the facts at face value. But to "do science" one would have to integrate these facts with some kind of theory, such that they militate for or against some hypothesis. Expertise does not entitle one to say, "Proboscis monkey noses are large due to runaway sexual selection," and be uncritically believed by a listener. And a non-expert who merely memorizes such pronouncements and platitudes isn't scientifically literate. One must engage with the theory and be able to consider competing hypotheses. "What if the large noses are functional? Maybe bigger noses are better for sniffing out fresh fruit or fertile females, such that there is a fitness advantage even independent of the female monkeys' preference for big noses. Maybe the female preference is chasing a real advantage." An expert could then marshal various facts for or against this competing hypothesis. Maybe nose size has no correlation with olfactory sensitivity, and the smaller noses of female proboscis monkeys (which are still large for a primate) call this into question. (As in, "Wouldn't this mechanism make female noses grow just as large as male noses?" The Wikipedia article on proboscis monkeys suggests that larger noses mean louder mating calls, and that females "prefer louder vocalizations". Is it female preference alone, independent of any functional value? Or are louder calls simply more likely to be heard, thus bigger-nosed monkeys are simply more likely to be noticed amid background noise?)

This isn't an article on proboscis monkeys. The specifics given above could be wrong in some important way (amateur or professional primatologists, feel free to comment), but that wouldn't matter to the point I'm trying to make here. Experts can be the memorizers and guardians of facts, but facts are small, tightly circumscribed nuggets of truth. E.g., the fact of a monkey's typical nose size, or the fact of a particular animal behavior. Broader statements about how the world works are not "facts." Statements such as "This appendage was created by natural selection via runaway sexual selection" or "The adaptive purpose of this particular animal behavior is X" are not facts. These are hypotheses, narratives that attempt to integrate disparate facts into a tidy whole. Of course, experts are free to have opinions about such things. They may even be entitled to some degree of deference when someone skeptically challenges their conclusions about such matters. They do, after all, have the relevant facts at their fingertips, and their academic training has probably led them to think carefully about competing hypotheses and reach a conclusion. But it would be scientific malpractice for them to conduct a class in which the students simply memorized their conclusions and recited them back.

I feel like many commentators are attempting to skip the hard part, where the actual thinking takes place and the engagement with competing hypotheses is done. There is an over-reliance on the opinions of "experts" by lazy journalists and the even lazier consumers of journalism who can't be bothered to crack a textbook or do a literature review. While it may be reasonable to take some sort of expert consensus as a starting point, it is wholly inadequate to simply chant "Expert consensus!" at someone who raises a plausible competing hypothesis or counter-narrative. The reliance on experts stems from something like the following thought process:

The expert has all the relevant information inside his head. He has integrated this data into a coherent whole and has already done the work of ruling out competing narratives. Counter-narratives must thus be coming from ignorant laypeople, who don't have a command of the relevant facts. Or they come from cranks, who have the facts at their disposal but whose reasoning ability is deeply compromised.

Again, this is okay as a starting point, but it fails to incorporate any kind of error correction. It insists that the error-spotting-and-correcting process has already been done, and that the output is somehow infallible. I think there is an even stronger version of the above story, in which the process that leads experts to their opinions is shrouded in mystery and inexplicable to outsiders. It looks like the following:

Experts have much more than a simple command of mere facts and data. They have deep insights that cannot be shared with amateurs or explained to a lay audience. Their internal process for integrating facts and data into a coherent worldview is ineffable. A single sentence spoken by such an expert cannot be unpacked without years of academic training. Every word or phrase they speak must be defined and contextualized. The venture of analyzing their plain-spoken pronouncements is doomed from the start. One can no more "explain" expert opinion to a lay audience than one can explain how a human being recognizes another human's face. It would be akin to "explaining" how a long, complex piece of computer code generates its output, or how a neural net model with thousands of fitted parameters generates predictions (say, on which picture is a "cat" or "not a cat"). There is simply no shortcut that would make "expert opinion" legible to outsiders. It is a black box whose output must simply be taken on authority.

I don't know if anyone would literally sign up for this version of the "believe expert opinion" story, but some commentators are implicitly appealing to it. If someone presents a reasonable-sounding challenge to expert opinion, there should be some sort of coherent response available. "Why do climate proxies (tree ring thickness, concentrations of various isotopes, etc.) fail to predict temperature in the only era (1970s to present) where we have extremely high-quality data for both? Does this in any way compromise the attempts to reconstruct the paleo-climate using these proxies?" There can be a perfectly good answer to this question, and maybe its premise is not even true. But the answer should be something that's consistent with the scientific narrative that the experts are telling us. The answer shouldn't be, "Look, we're a bunch of really smart people who spend all of our time building very complicated climate models. We spend our waking hours poring over the output of these models, checking them against historical observations, refining the models, and gaining insight. The explanation would be too 'mathy', and your attention span would waver. Even assuming math comprehension weren't a barrier, there is simply no way to coherently communicate the insights built up over decades of practice. Just take our word for it." Given the sneering response I often see to challenging questions (and the climate proxies example is one such), I'm comfortable saying that some people have indeed backed this stronger version of expert opinion.

Where should I even start with this? If this story is true, then we're all basically fucked. There can be no interdisciplinary research in this world, because even adjacent disciplines could not speak to each other. Could we even have confidence that experts within the same discipline understand what each other are saying? How could we, if every word spoken has a subtext of "ineffable knowledge"? How could we know that two experts speaking to each other are speaking the same language? How could we be sure that they are unpacking the same words with the same tools? (I'm not the first person to suggest that "experts" speak a different language than us normies while using the same words. Here is Bryan Caplan expressing his horror at the question of academic speech being impenetrable and giving the example of Paul Ehrlich basically admitting this was the case. Is Ehrlich a one-off, or is he just the tip of a massive epistemic nightmare floating below the surface of public discourse?)

I have a better story. Often experts can't give good answers to challenges by educated laypeople. They are sometimes unable to convince practitioners in adjacent disciplines of their "consensus" narrative. This isn't a sign of deep, tacit knowledge. It's a sign of group-think. Excessive specialization and navel-gazing causes experts to have a view that is too narrow. Frankly, and contra the "tacit knowledge" story, it's not necessarily all that deep, either. 

Let me make just a few observations that are self-evident to most people. 

There is often enormous social pressure to conform. A crankish-sounding theory might never get its day in court, because its proponents are silent. This is definitely happening in climate science. Scientists who underplay the role of CO2 in the planet's recent warming (there are very few who doubt it entirely, contrary to the "denier" sneer) are professionally ostracized, their credentials called into question. I recommend the book The Hockey Stick Illusion by Andrew Montford for a powerful telling of this story. Some hacked e-mails from 2009 uncovered an explicit conspiracy to oust climate scientists who were insufficiently loyal to the alarmist narrative. The alarmists discussed removing certain scientists as reviewers from various scientific journals. Now, the alarmists could be very right about the scientific questions. But one should be skeptical of any claims that come out of an environment that is this hostile to non-conformists. See also this example of how one hypothesis about the extinction of dinosaurs became dominant. There is a great deal of bullying and social pressure, and it makes me doubt the asteroid hypothesis despite its intrinsic plausibility. The fact that other hypotheses were systematically excluded from consideration should automatically make one skeptical of the "consensus" view.

(There was a recent Dark Horse Podcast with Bret Weinstein and Heather Heying that touched on this question. Bret is himself something of a climate alarmist. He's gone so far as to suggest that carbon emissions present an existential threat to humanity, or anyway that they should be treated as such in a "tail risk" sense. In a recent Dark Horse Q&A, he states that there is enormous pressure to conform in the climate sciences. He knows this, because whenever he or Heather says something "incorrect" he gets brutally corrected by a torrent of e-mails. And he quite wisely suggests that this pressure to conform must be even stronger for someone operating within the climate sciences. He doesn't take this as an opportunity to doubt his own alarmism, but he is definitely on to something.)

Am I getting this backwards? Perhaps scientific certainty comes first. All right-thinking people agree to the correct theory in fairly short order. Social pressure and bullying are last-resort remedies for crankish hold-outs. This is the story the "trust the consensus" crowd wants to tell us, but it's not really plausible. Some of the statistical arguments in the paleo-climate wars are legible to me, for example. The responses to Steve McIntyre's critiques by establishment people amounted to childish name-calling and credential-brandishing. So it always goes.

Experts often disagree with each other! Have you ever heard of a "second opinion"? The notion that two doctors might disagree on the diagnosis and/or the optimal course of treatment? See the part just after paragraph 2 in this post, where I link to two studies on disagreements among doctors on cause of death determination. These are among the most highly educated people in society, practicing their craft and making pronouncements on the cause of a person's death. They often fail to reach the same conclusion given the same set of facts.

The previous section is slightly in tension with this one. Wasn't I just saying there is social pressure to conform? That we should expect to see hive-mind behavior, and now I'm saying look over here at all this expert disagreement? I think that comparing the opinions of doctors on individual cases gives us a useful insight into "expert consensus" regarding bigger questions. There is no social pressure to conform in the case of an individual cancer diagnosis or cause of death determination. The doctor is, for all s/he knows, making the first (and perhaps final) call on the question. Some bigger questions are just as complex as these individual determinations. Suppose we could mind-wipe all climate scientists or public health professionals, erasing all knowledge of colleagues' opinions but somehow leaving their practical scientific knowledge intact. If we asked such scientifically literate blank slates to reach a conclusion on the magnitude and causes of climate change or the effectiveness of masks in preventing the spread of pandemic diseases, I'm sure we'd get a broader range of opinion than what we're currently seeing. An implicit assumption of the "expert consensus is good" story is that these experts are all independently reaching the same conclusion. That would indeed be impressive, but the social pressure to conform means we can't assume independence of expert opinions. 

Besides, there are areas of academia where the experts disagree with each other on big, important questions. See the disagreement among economists on the effect of minimum wage on employment. This is still quite contentious, and nobody is credibly claiming to have "settled" the matter, given the loud and persistent disagreement from the other side. The disagreement between Richard Dawkins and Stephen Jay Gould comes to mind as well. In some of these cases I am knowledgeable enough to have an opinion, even to declare one side of the disagreement "clearly" wrong. But it would be a mistake to declare that "science" has rendered a definitive verdict. Science is the process of uncovering truth. Any verdict one could render, however sound, however confidently we hold it, however well it maps on to the cosmic truth, is ultimately just someone's opinion.

Political decisions are made first; "expert consensus" is then marshalled in favor of the desired policies. Often it is the policy tail that is wagging the science dog. Some kind of policy is crafted, and scientists are employed not as honest truth-seekers but as lawyers whose job is to zealously represent their client (in this case, the state). I've seen multiple examples this year of "consensus" turning on a dime to accommodate some new trend or some change in the received "wisdom". (See section 7 in the SSC post, on the change of the consensus position on mask wearing.) I found it shocking to read this paper, titled Disease Mitigation Measures in the Control of Pandemic Influenza, published in 2006. I came across it back in April of 2020 and noted at the time how much the consensus view of public health had shifted in less than a month. Be suspicious whenever the party line whipsaws so dramatically.

It's also important to keep in mind that you can't get an "ought" from an "is". Even assuming the experts are all speaking in one unified voice and they are telling us the immutable truth, they have no right (in their capacity as subject-matter experts) to make "should" statements. It's not their business to create policy. I wrote here about the inadequate moral philosophy underpinning the public health establishment. Instead of just giving us accurate information about risks and benefits, public health "experts" appear to be deciding what decisions we should want to make. They then feel entitled to bend the truth, to tell us whatever noble lie we need to hear so that we'll make the "right" decisions. For example, the FDA grossly exaggerates the risks of vaping and excessively regulates vaping products. This makes smokers less likely to make the switch, either because they believe the FDA's misinformation or because the products are made unavailable or unduly expensive due to regulation. The FDA would love for everyone to stop smoking and vaping altogether, but that's a value judgment, not an "expert opinion." It doesn't matter how well trained you are as a doctor or a biostatistician. Normative questions, questions regarding what people should do, are theological in nature. They are not scientific, and scientific expertise doesn't qualify someone to make such judgments.

I have seen a similar bending of the truth this past year. There has been an odd refusal to acknowledge almost any mitigating information regarding the coronavirus. There are several things that we have known since the beginning of the pandemic, either because they are obvious or because we had good data even early on. I pointed out several here: past infection confers immunity, young people's risks are orders of magnitude lower than those of seniors, death certificates can contain errors (possibly in a systematic way that makes population-level statistics unreliable), the virus isn't really spreading in schools. The response to anyone who pointed out these truths has been some combination of apoplectic outrage and either a denial of the claim itself or (and this one must take some mental gymnastics) an admission of the raw fact coupled with a denial that it has any policy implications whatsoever. The differential risk for young people, and the refusal to admit to the implications of this fact, is especially galling. The public health establishment somehow decided that young people shouldn't be socializing, even though the virus poses minuscule risks to them. Maybe they were thinking, "We can't tell young people to go about socializing like normal, because then they'll spread Covid to older individuals." Fine, say that. The advice to young people who are living with vulnerable individuals should be different from the advice to young people more generally. "You're not at risk, but if you catch the virus you can put someone else at risk," is a perfectly sensible, nuanced message. Perhaps the public health establishment was worried about long-haul effects of the virus. That's fine; they should say that, too. But they should admit that this is a guess. They should acknowledge that there are serious problems with treating speculative, unquantifiable risks as all-trumping factors in the decision calculus. (Sometimes called the "precautionary principle", this is really an example of Pascal's Mugging.) Such a guess about unseen risks isn't science, no matter how many subject matter experts agree that it's worth worrying about.

Human thinking just isn't that deep, even thinking done by experts. Above I present a model of expert opinion-making as a deeply structured black box, similar to a neural net model, where tons of relevant information go in, some kind of learning process happens, and an opinion comes out. I'll suggest here that this just doesn't describe how anybody actually thinks. Our brains can only hold a tiny amount of information in active memory at any time. There is no process analogous to a machine learning model that literally reads all the relevant information into memory at once. The mind is not like a regression algorithm (or neural net or gradient boosting machine) that passes over every available data-point; the brain's "best fit" algorithm (whatever it is) does not scrupulously ensure that the model of reality is informed by all the information it has seen. I don't want to overstate my point. Experts are certainly better than newbies at digesting disparate information and manufacturing a coherent synthesis. They likely learn to hold large concepts in their working memories, concepts that would overload someone not familiar with their field. They have done the hard work of collapsing disparate details and long derivations into tightly packaged concepts. When they do chunking, their "chunks" are larger and more profound. But human brains are imprecise and subject to all kinds of cognitive biases and blind spots. We should not model experts as "having thought it all through" in a mechanical sense. So we shouldn't grant them infinite deference as if their minds were impenetrable black boxes. We should expect them to present their reasoning and subject their analysis to public audit. If these presentations contain elementary errors of logic or math or factual mistakes, we should feel comfortable pointing them out, not simply content ourselves that the "experts must be right, just for the wrong reasons."

I want to stretch the machine learning analogy further. A facial recognition algorithm (or any other kind of classification algorithm, like a logistic regression or an xgboost model) doesn't just look at a picture and say, "That is Bob". A model trained to recognize photos of cats likewise doesn't say, "This is absolutely a cat." The model outputs some kind of probability. All the pixels of the photo go into the model, a lot of matrix multiplications happen, and out comes some number, say 0.85, representing the model's probability estimate that it's a cat. To get to "this is a cat", one has to set some kind of threshold, such as "everything above 80% is a 'cat', everything else is a 'not cat'." 
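
A minimal sketch of that last step (the probabilities and the 80% cutoff are just the illustrative numbers from above):

```python
# A classifier outputs a probability; "cat" vs. "not cat" only appears
# after we impose a threshold. The threshold is a choice, not a finding.
def classify(probability: float, threshold: float = 0.80) -> str:
    """Convert a model's probability estimate into a hard label."""
    return "cat" if probability >= threshold else "not cat"

# Hypothetical model outputs for three photos:
for p in (0.85, 0.52, 0.97):
    print(f"model probability {p:.2f} -> {classify(p)}")
```

The crisp-sounding verdict is manufactured by the threshold; the model itself only ever offers degrees of confidence.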

Does human reasoning do anything vaguely analogous to what machine learning does? Do expert judgments on important questions come with implicit probability estimates? Are statements regarding the effectiveness of mask-wearing or climate mitigation measures undergirded by probability point estimates? ("There is an 80% probability that what I just said is true.") Do these point estimates come packaged with confidence intervals? I'm going to say "not really." Unless they are trying really hard, people (even very smart people) think pretty shallowly. They at best come up with vague impressions that they think are true, and they only compare competing hypotheses when explicitly prompted to do so. When a numerical probability is absolutely demanded of them, people will disagree about what is meant by terms like "probably", "surely" and "almost certainly." (I recommend Philip Tetlock's books on expert predictions, Expert Political Judgment and Superforecasting. Even experts are pretty bad at this whole prediction business when nailed down to a precisely stated prediction including a numerical estimate. There is a long section in Superforecasting on the inconsistent meaning of terms like "probably" when used by intelligence analysts. Presumably they aren't the only kinds of experts plagued by this imprecision.)

I haven't yet touched on the requirement for experts to be multi-disciplinary. A public health official recommending a mask mandate isn't just calling on medical expertise. It's not sufficient to simply do a literature review, conclude that masks probably work, and then pronounce that a mask mandate is a good idea. Doing that well would require some knowledge of PPE production capacity, some economics to understand the costs of ramping up production in one sector (which logically implies reduction of capacity elsewhere in the economy), some social psychology to understand compliance issues, and so on. It requires the ability to do a big cost-benefit analysis that accounts for many factors, and no one person has the expertise to do this. The expert's internal "black box" simply isn't accounting for all the relevant information. That's not to say public health professionals shouldn't even be trying, or shouldn't have opinions on such matters. They should. But the public should be questioning and auditing their pronouncements, not treating them like high priests of an unquestionable religion. Prod an expert, and you'll find that their black boxes can be opened, and the contents are mostly legible.

(Everything in the above paragraph goes equally well for climate scientists who make policy recommendations. These experts deserve some degree of deference when they're answering questions about the climate sensitivity, rates of loss of ice, and so on. But they are not qualified to make or pronounce upon policy issues, not in their capacity as climate experts anyway. Even leaving aside the "can't get an ought from an is" issue, policymaking requires knowledge of economics, cost-benefit analyses, existing and emerging technology, etc. A climate modeler is likely to be narrowly focused on their one area of expertise. Their internal "black box" is almost certainly not incorporating a deep understanding of economics and emerging tech. So they are unlikely to be able to say what is the optimal carbon tax, or whether a carbon tax is better than a cap-and-trade, or whether cheap nuclear fusion power will come within the next thirty years and make the carbon question moot.)

Let's have a better public discourse. Let's allow heterodoxy into the public square. Let's tolerate reasonable questions by informed laypeople, even at the risk of giving a few cranks a platform. If the experts are unable to convince informed outsiders of their consensus views, let's stop credulously deferring to them. I'm very tired of seeing "science" brandished as if it's a shortcut to the truth, a "right answer" that can simply be bubbled in on a multiple choice exam. 

________________________________________

I wasn't sure where to put this into the flow above, but I wanted to offer an example from Jim Manzi's excellent book Uncontrolled. Manzi describes a situation where a head of state is soliciting expert advice. In one case, he asks a historian for an opinion on current geopolitical topics, drawing on the historian's knowledge of analogous events from the past. In another case, he asks a nuclear physicist about the prospects of a rival state developing a nuclear weapon. Manzi suggests that it would be unreasonable for the politician to substitute his own judgment for that of the physicist. But it would be irresponsible not to second-guess the historian. Both are examples of "expert opinion." But in one case, there is a "right answer". The rival state's nuclear-readiness depends on things like the quantity of fissile material available and the sophistication of their technology. It's basically an engineering question. But the historian's opinion is intrinsically wrapped up in the individual's ideology and internal biases. Which player in the current ordeal is analogous to the "bad guy" in the historical anecdote? Which historical episode is the right one for comparison? Is there an overlooked counter-example? Is the historian hobby-horsing with his favorite pet theories? When we ask pure scientists to opine on government policy or big not-strictly-factual questions beyond their immediate area of expertise, we are converting them from pure scientists to historians. They need to learn to take a step back and say, "I'm no longer speaking as a scientist, where I am truly an expert and deserve some deference. I'm straying into the humanities. I'm doing social psychology, historical analogizing, and moral philosophy, subjects in which I am in no way superior to any of my listeners."

I wanted to mention another example of "experts" insisting that an outsider merely misunderstood their field and wasn't competent to critique it. The recent "hoax" papers by James Lindsay, Peter Boghossian, and Helen Pluckrose were collectively an amazing piece of scholarship. Lindsay described the impetus for the papers in a recent podcast. I recommend listening to the entire thing. Postmodernists and critical theorists kept telling him that he "just didn't understand" their philosophy. So he, Boghossian, and Pluckrose set out to publish utter nonsense in their "academic" journals. They succeeded. And Lindsay is very clear that these articles weren't "hoaxes". They used exactly the tools and the academic language of the scholars he was criticizing. The trio's spoof articles were really indistinguishable from the other drivel published in those journals. It was proof that he and his crew did indeed understand the philosophy they were critiquing. They offered a Trojan horse, and the postmodern journals scooped it right up.

Monday, March 1, 2021

Against Medical Nihilism!

A couple of years ago I reviewed the book Medical Nihilism by Jacob Stegenga. Broadly speaking, Stegenga's narrative is that medical interventions are mostly ineffective. There are some obvious exceptions. Emergency medicine really does stabilize trauma victims and save their lives, or, less dramatically, fixes broken bones in place so they'll knit properly. Prenatal care, lifestyle interventions, sanitary water supply, and vaccines can be highly effective. But generally speaking, pharmaceutical interventions don't work all that well. Huge clinical trials using very large samples often find only trivial differences between the control group and the treatment group, casting doubt on whether the treatment truly has a significant effect on the disease. Even when a study finds a "statistically significant" difference, it can be an illusion or a fluke or a result of p-hacking. And "statistical" significance does not imply clinical significance; the effect of the medication on a disease can be so small that the cost-benefit analysis is overwhelmed by side effects. We like to think we live in an age of science and enlightenment. We'd love to believe that basic research has brought us to a mature understanding of how our bodies work, and that we can simply engineer new treatments based on our first-principles understanding of our own biology. The reality is that we can't engineer cancer cures or cholesterol medications from first principles. We discover empirically that, say, a blood pressure medication treats erectile dysfunction, or a cancer drug turns out to be effective at treating Alzheimer's (maybe). It reminds me of the saying, "Thermodynamics owes more to the steam engine than the steam engine to thermodynamics." In other words, practical discovery comes first and scientific theorizing follows, rather than the other way around. Our bumbling, plodding research into new pharmaceuticals occasionally yields an effective treatment by chance, after which we might be able to back-fit a biological mechanism. The narrative of our conquering nature with scientific theories is mostly wrong. I won't rehash his entire argument here; please read my prior post for that, or pick up Stegenga's excellent book.

I wanted to check in to say, "To hell with all that!" Stegenga's story is true enough for certain classes of medicine, but we have all just witnessed a massive counter-example. See this piece by David Henderson and Charles Hooper, particularly the timeline:

The Moderna lab in Massachusetts took all of one weekend to formulate the vaccine, which was ready on Monday, January 13. David Wallace-Wells of New York magazine writes, “It was completed before China had even acknowledged that the disease could be transmitted from human to human, more than a week before the first confirmed coronavirus case in the United States.” 

Here’s the timeline:

January 13, the mRNA-1273 vaccine is formulated

February 7, the first clinical batch is produced

February 24, Moderna ships the first batch to the NIH for a Phase 1 clinical trial

March 4, the FDA gives permission to conduct a Phase 1 clinical trial (safety only)

March 16, the first human subject is given a dose

March 23, Moderna begins scaling up for commercial production

March 27, another Phase 1 clinical trial begins

April 27, Moderna ask the FDA for permission to run a Phase 2 clinical trial (safety and efficacy in a limited number of test subjects)

May 1, Moderna and Lonza announce a plan to manufacture a billion doses a year

May 12, the FDA gives Moderna Fast Track designation for mRNA-1273

May 18, Moderna announces positive Phase 1 clinical results

May 29, the first test subjects are dosed in a Phase 2 clinical trial

July 14, Phase 1 results are published

July 27, a Phase 3 clinical trial begins (safety and efficacy in a large number of test subjects)

July 28, non-human primate study results are published

November 16, Phase 3 results show the vaccine is 94.5% effective at preventing infections

November 30, the FDA announces that it will convene an advisory committee meeting on December 17

December 2, the U.K. approves a similar vaccine from Pfizer and BioNTech

December 11, the FDA gives the Pfizer/BioNTech vaccine an emergency use authorization

In other words, scientists developed a 95% effective vaccine on their first try using basic first principles. The vaccine was designed over the course of a weekend. It doesn't seem to be a fluke, either. There are two vaccines authorized for emergency use in the US and several more promising ones under development. The two that have been authorized are both mRNA vaccines, something that didn't really exist before the pandemic, except as a scientific curiosity. If I'm getting this right, the vaccine is a shot of mRNA that gets into your cells and tells them to make proteins specific to the SARS-CoV-2 virus. Your immune system learns to spot these proteins and develops antibodies to them. So you get immunity without the need to fend off the live virus. I find this very cool. The mechanism of action is something one could imagine with only a high school level understanding of biology. The exact engineering feats required to turn that idea into a working vaccine are no doubt more complicated, but then again scientists at a lab got it right basically on the first shot. And this was in January, before anyone was talking about lockdowns or freaking out about the coronavirus. This happened before there was any massive mobilization of resources to come up with a vaccine.

In my previous post on Medical Nihilism, I invited the reader to think about the trajectory from the present day to a Star Trek future with unimaginable treatments for all known diseases. It matters a great deal to our descendants' well-being whether that future is a mere 100 years off or a full 200 years off, so we should do what we can to speed up the transition. I basically accepted Stegenga's thesis that most new medicines are ineffective, but I argued that we'd need to do a lot of slow, plodding experimentation and tolerate a lot of false promises to find truly useful treatments. That's likely true for many types of medical problems, but I should probably update my priors. Maybe SARS-CoV-2 was a uniquely easy puzzle to solve, but the speed of development, the multiple early successes, and the stunning effectiveness make me think there are more opportunities for "first principles medicine" lurking in the background. I'm going to double down on my observation that Stegenga got the policy implications exactly backwards. He suggested that the low effectiveness of medicines implied that the FDA should be stricter about the approval process. I think he didn't appreciate that this could be a self-fulfilling prophecy: strict approval guidelines mean we see less experimentation, slower progress, and fewer breakthrough medicines. New treatments are being held back by stifling regulations. Let's give drug developers the right to innovate and give patients sovereignty over their bodies so they can try these new medicines.

I don't want to over-apply this lesson. Maybe the example really is very specific to vaccines, or even specific to coronavirus vaccines. That's still huge. If we can have a vaccine ready to go within weeks of discovering a new virus, we have the tools to stop the next pandemic before it gets going. Even if that only happens once a century it's a big deal. (There have been flu seasons comparable to 2020 in terms of mortality, so "once a century" is likely an underestimate for the frequency of deadly pandemics.) Even if this truly is just a one-off, even if all it does is shorten a single global pandemic by a few months and save a hundred thousand lives, it still stands as an impressive (and high-impact) counter-example to a narrative that I mostly bought into before. 

______________________________

I remember Tyler Cowen quoting someone back in June or July regarding the slowness of vaccine development, in a blog post that I couldn't find now if I tried. The quote was something like: Isn't it funny that it takes months or years for the world's best scientific minds working in concert to develop a vaccine, but our body just does it in a week or two? This was back when people were saying a vaccine might be two years off, with some even saying we might never have one. Turns out we had one in mid-January. Some clever scientists know precisely what your immune system is trying to do, and they figured out how to trigger it without making you sick. 

Sunday, February 21, 2021

A Better Way To Do Drug Approval: Continuously Updating Reports

My (admittedly limited) understanding of the FDA's drug approval process is that it waits for a clinical trial to be complete, reads the drug company's study, then says "Yea" or "Nay." This is absurdly inefficient and no doubt leads to thousands of deaths and untold suffering. The bureaucratic delay of life-saving medicine is an atrocity, and the FDA's foot-dragging approval process for SARS-CoV-2 vaccines is a particularly stark example. A more reasonable approach to drug approval would be to have the company submit a continuously updating document that the FDA can monitor on a periodic basis. Better yet, the FDA can pre-specify an approval threshold, so the pharmaceutical company can anticipate if and when approval will be likely. Once a "statistically significant" drug effect has been demonstrated, the company can start ramping up production and recouping its investment by selling the new drug. No need to wait for the clinical trial to play out to completion, though of course such trials should be completed and the results published. More time means more data and (hopefully) more certainty about a drug's effectiveness. In the meantime, some patients can have the benefit of a new drug that improves quality of life, or extends it.

A while ago, I read the FDA briefing document for the Pfizer vaccine.  I'm fairly sure it was this document. My wife had printed off a copy, which I read and marked up with notes. As a front-line health worker, she'd be one of the first in line to get it. It was being discussed whether immediate family (my kids and I) would be near the front of the line, so we wanted to educate ourselves on what was known about the vaccine. (It turns out the kids and I will have to wait, like all the other normies. In my opinion, this is perfectly sensible as a triaging strategy. I'm happy to see older and more vulnerable individuals get the vaccine first.) 

I was struck by the repetitiveness of the document. It seems as though the same information is repeated in different sections, as if the authors were trying to conform to a template set by the FDA. (Can anyone confirm?)

The other thing that struck me was the figure below. And when I say it "struck" me, I mean it filled me with a cold rage, and it still does:

[Figure from the FDA briefing document: cumulative incidence of confirmed Covid-19 cases over the course of the trial, placebo group in red, vaccine group in blue.]
The red line is the cumulative case count for the unvaccinated control group, and the blue line is the vaccinated group. It was evident by week 3 that there was a difference between the two groups, and by week 4 or 5 it was definitive. And yet we had to wait for Pfizer to complete its proposed trial, then wait on the FDA to look it over and stamp it "approved." Now I know there is a great deal of paranoia about spurious statistical results, replication crisis and all. That really is a serious problem which casts doubt on a lot of published research, including research on the efficacy of new drugs. But I think people have been blinded by their zeal for academic rigor. There is this sense in the current zeitgeist (at least in academia) that a result isn't real, or is highly suspect, unless it's exactly what you set out to test for, using exactly the pre-specified methodology. No doubt, pre-committing to a methodology prevents "p-hacking," where you run hundreds of tests on your data until, by sheer chance, one of them gets the result you want. With all of that duly acknowledged, sometimes a result is so strong you don't need to doubt it just because it's not exactly what you set out to test. I think this is one such case. 

Here's my proposal. The FDA briefing doesn't have to be submitted fully formed. Pfizer doesn't need to wait until all the data is in, then take whatever time is needed to massage all the data into a narrative with various charts and graphs. They can build a document that runs and updates every day, pointing to a database that also updates daily (or weekly or whatever periodicity makes sense). It seems like there are all these choke-points in a process that should be continuous. In today's broken world, the FDA really can defend itself by saying "We couldn't have made a decision until December because the results weren't in yet," and Pfizer can defend itself by saying "We couldn't have moved any faster, because we are just reporting our results according to the FDA's regulatory structure." They need to come together and build a process that isn't held up by waiting for some kind of hand-off.  Pfizer should have been submitting a daily-updating report of its cumulative findings, and someone (or some dozens of people, given the importance of timeliness) should have been reviewing it on a regular basis. They would have discovered the diverging blue and red lines in the graph above much sooner. It would have been thoroughly obvious by, say, day 50 that the vaccine was effective, at which point it would have been unethical to delay approval any longer. The x-axis goes out to 119 days. We could have been at least two months ahead of where we are in terms of ramping up vaccine production and getting shots into arms. Instead, we're standing where we are today. Quite possibly, we could have beaten back the surge that started in November and saved hundreds of thousands of lives. Inexcusable bureaucratic foot-dragging has killed countless people, over a thousand a day since November (and peaking at above 3,000 daily). 

(A couple of examples of what I mean by "continuously updating report": in my job as an actuary, I use RMarkdown to build reports. Python users will be more familiar with Jupyter notebooks. I'm sure there are dozens of analogues that work with other programming languages and suites of statistical software. Basically, you write code (R, Python, or whatever you like) interspersed with narrative text. A human can revise the narrative as needed, but the numbers and tables themselves are regenerated by re-running the code, so they update as the underlying data tables get updated.)
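Here's a minimal sketch of what the source of such a report could look like. To be clear, the file name, column names, and layout are all invented for illustration; this is not Pfizer's actual reporting pipeline:

````
---
title: "Vaccine Trial: Cumulative Results (regenerated daily)"
date: "`r Sys.Date()`"
output: html_document
---

```{r setup, include=FALSE}
# Hypothetical daily export from the trial's case database
events <- read.csv("trial_events.csv")   # assumed columns: arm, onset_date
events$onset_date <- as.Date(events$onset_date)
```

As of `r Sys.Date()`, the trial has accrued `r nrow(events)` confirmed cases:
`r sum(events$arm == "vaccine")` in the vaccine arm and
`r sum(events$arm == "placebo")` in the placebo arm.

```{r cumulative-curves, echo=FALSE}
# Cumulative case curves, re-drawn automatically every time the report is knit
days <- seq(min(events$onset_date), max(events$onset_date), by = "day")
cum_count <- function(a) sapply(seq_along(days), function(i)
  sum(events$arm == a & events$onset_date <= days[i]))
plot(days, cum_count("placebo"), type = "l", col = "red",
     xlab = "Date", ylab = "Cumulative confirmed cases")
lines(days, cum_count("vaccine"), col = "blue")
```
````

Knit that every night against the refreshed database and the counts, the prose, and the chart all stay current, with no one massaging a document by hand.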

The FDA can pre-specify some kind of automated approval threshold. The statistical significance of the vaccine's effectiveness became clearer and clearer as the days dragged on and the red and blue lines diverged. Instead of insisting on a pre-registered clinical trial fully playing out and taking weeks to review the resulting study, the FDA could simply say, "The vaccine is approved as soon as it passes the following statistical test..." Pfizer dutifully updates its report on a daily basis and can start selling doses of the vaccine the day it passes the statistical test. In addition to speedier approval, this would have the advantage of allowing Pfizer to anticipate the approximate timing of approval and start ramping up production. Pfizer is unlikely to pre-commit millions (billions?) of dollars to production-line infrastructure if it is dependent on the whims of an arbitrary bureaucracy to give it the green light. With a pre-designated threshold, Pfizer can anticipate both the likelihood and the timing of approval.
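To make this concrete, here's one form such a trigger could take. The threshold is my own assumption, not anything the FDA has specified. With 1:1 randomization, a useless vaccine implies that each confirmed case is equally likely to come from either arm, so you can test the observed split directly:

```r
# A sketch of a pre-specified approval trigger. The alpha here is an
# assumed, deliberately stringent value chosen to tolerate daily looks
# at the data; it is not an actual FDA rule.
approval_triggered <- function(vaccine_cases, placebo_cases,
                               alpha = 1e-4) {
  total <- vaccine_cases + placebo_cases
  # Under the null of zero efficacy, cases split 50/50 between arms
  p <- binom.test(vaccine_cases, total, p = 0.5,
                  alternative = "less")$p.value
  p < alpha
}

# Roughly the split reported at the trial's final analysis:
approval_triggered(vaccine_cases = 8, placebo_cases = 162)  # TRUE, overwhelmingly
```

A production version would use a group-sequential design (alpha-spending boundaries like O'Brien-Fleming or Pocock) so that peeking at the data every day doesn't inflate the false-approval rate. The point stands either way: the rule can be written down in advance and evaluated mechanically.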

There are narrative parts of the document that can't simply be updated automatically. For example, there is a section on Bell's palsy occurring in several of the treated individuals (which was probably not caused by the vaccine). Someone would have to notice this phenomenon and write it into the text of the document. But that's easily doable; some sections can robotically update based on incoming data and others can be updated "by hand" as the facts on the ground become known. 

If the FDA is already doing something like this, feel free to tell me I'm way off base. Whatever they are doing, even if it does involve periodic review of ongoing trials, it's still absurdly slow. Even the prospect of saving hundreds of thousands of lives hasn't spurred them to speedier action. Tyler Cowen and Alex Tabarrok at MarginalRevolution have been great on this point. They have been absolutely hammering the FDA for its intransigence. There is a species of "public intellectual" that has been standing athwart this push to approve vaccines faster. These individuals are making very poor-quality arguments and not taking seriously the thousands of lives a day that are being lost. (I'm sure you'll find a good sampling of what I'm talking about if you just read the comments of a few of Tyler or Alex's blog posts.) I don't know what's in their heads. It must be something to the tune of "We need to preserve standards for pharmaceutical approvals, or else we risk a rash of approvals for harmful drugs." Or "It is, from a deontological standpoint, unethical to give medicine that's not thoroughly tested using the most rigorous standards of evidence." Okay, I hear all that, but...at the cost of thousands of lives a day? These people should tell us how many corpses they're willing to pile up at the altar of "rigorous approval standards." Is there even a limit? Do they think that the FDA's slow-moving approval process is, in an expected value sense, actually saving lives? No. There is something deeper going on here. The real issue is that this is a regulatory failure. It's a failure that has resulted from a dysfunctional bureaucracy running on auto-pilot. And some shallow intellectuals won't be caught dead on the side of those icky libertarians who have been saying so for decades. They lack the intellectual tools even to recognize the root cause of the problem, and they lack the moral vocabulary to denounce it as outrageous. 

_______________________________________________

Under a pre-registered approval threshold, could a drug cross the threshold, then "cross back"? Maybe some cancer drug gets approved according to the automated trigger, then the next day a couple of the treated patients die and it's back in "statistically insignificant" territory. Sure, but in most cases the company would still be able to anticipate which way things were going, whether an "approval" status is stable or still uncertain, and make a reasonable decision on how to proceed. And the FDA can provide clarity on how to handle such borderline cases. 
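For what it's worth, here's a toy simulation of how often that happens under a naive daily test; every parameter is an assumption chosen for illustration:

```r
# Toy simulation of the "cross back" worry. Accrue 200 cases one at a
# time; each lands in the treatment arm with probability 0.3 (a modest
# true effect). Test after every case at an assumed alpha of 0.01 and
# check whether the threshold, once crossed, is later un-crossed.
set.seed(1)
crossed_back <- replicate(1000, {
  arms <- rbinom(200, 1, 0.3)           # 1 = case in the treatment arm
  pvals <- sapply(seq_along(arms), function(i)
    binom.test(sum(arms[1:i]), i, p = 0.5, alternative = "less")$p.value)
  sig <- pvals < 0.01
  any(sig) && !all(sig[which(sig)[1]:length(sig)])
})
mean(crossed_back)  # share of simulated trials that cross, then cross back
```

The stricter the pre-specified threshold, the more stable a crossing is once it happens, which is exactly the kind of tradeoff the FDA could spell out in advance.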

Friday, January 1, 2021

Soho Forum Debate on the Great Barrington Declaration

 I wrote a post a couple of months ago outlining a path toward herd immunity. Two days later, the Great Barrington Declaration (GBD) was released, authored by three epidemiologists (from Harvard, Oxford, and Stanford, so presumably they have some credibility). It outlines basically the same argument that I made: young, healthy people are relatively robust to the virus and should be living their lives freely (there is something like a factor of 1000 difference in mortality for the youngest versus oldest Covid patients), while the older and more vulnerable among us should be sheltering. The freely mixing population will get a lot of cases of Covid and develop some kind of herd immunity, at which point the virus will dissipate and the elderly can eventually get back to normal lives. 

My feeling is that the opponents of the Great Barrington Declaration don't really have a case. As in, it's not even close. This recent Soho Forum debate between Martin Kulldorff and Andrew Noymer increased my confidence. Watch the entire thing. I was slightly surprised that the debate was a tie. The exact proposition was:

Coronavirus lockdowns should be lifted and replaced with a targeted strategy that protects the old and other high-risk groups.

Kulldorff, one of the authors of the GBD, argued in favor and Noymer against. Kulldorff was not as articulate as I'd have liked. His performance was slightly choppy, which might have something to do with his accent. But the substance of his argument is right on. Noymer's arguments were terribly disappointing. I was hoping Noymer would at least articulate a clear reason for all-inclusive lockdowns that include the non-vulnerable. Some kind of cost-benefit analysis or something. In previous posts I've laid out the three main reasons why I could be wrong. 1) The risk to young people, while statistically quite small, should worry us. Or 2) there is no reliable way to keep this teeming mass of young people separate from the vulnerable. Or 3) there are long-term consequences of a Covid infection that aren't revealed in the death figures. I was hoping to get a more thorough treatment of these possible arguments. Maybe a philosophical defense of 1), which I regard as innumerate or irrational. Perhaps a formal treatment of 2), which I also find unreasonable. (How many elderly people would even say, "Yes, I want my adult children and grandchildren to go to such lengths for my sake"? Would you, if you were a vulnerable person in your waning years?) Maybe a thorough fleshing out of 3), based on known hangover effects of prior infection. (As I've said before, appeals to "unknown" long-term after-effects, which aren't strict extrapolations from known after-effects, are a form of Pascal's Mugging.) 

None of this was on offer. Noymer suggested replacing the term "lockdown" with "public health orders." So that's his solution: replace an ugly term for an ugly policy with a revolting euphemism. I'm always a fan of more precise language, but this seemed like a cynical deflection to me. Noymer also repeatedly cited a statistic from his home community (Orange County, California, I believe) that attempted to quantify the risk to young people, implying that it's larger than the GBD people presume. Maybe I missed his point, but I was left wondering, "Why not use nationwide or international figures?" Was he cherry-picking an example of a community with an especially high death rate for young people? 

Perhaps most bizarrely, Noymer repeatedly emphasized that you don't know for certain whether you're in the low-risk group or not. Which suggests he doesn't know how to think seriously about risk. What you don't do is note that there is a non-zero risk and then catastrophize that you could be a casualty. What you should do is quantify the risk as best you can for your demographic, and treat that risk as you would any numerically similar risk. (As in: Is the risk numerically large enough to worry about at all? Are particular efforts to mitigate it worth it in a cost-benefit sense? Am I using a cost-benefit calculus that is calibrated similarly to the other hazards I face in my life?) Of course there could be some unseen variable working against you: a genetic predisposition that magnifies your risk tenfold, the sheer bad luck of getting a very high viral load, a weakened immune system due to stress (possibly due to severe social isolation). You don't throw up your hands and say, "Gee, I don't actually know if I'm in the 'probability of death = 1' group or the 'probability of death = 0' group, so I'd better assume the former." You treat unknowns using the concept of probabilities, with lower probabilities warranting less concern. Hazards with probabilities below some threshold should be ignored entirely, and the same goes for hazards beyond your ability to control. Someone who is so terribly confused about basic concepts relevant to public health (or so confusing that he leaves listeners baffled about his point) should have no influence on important public policy decisions. Their commentary should be ignored.
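To illustrate what treating it "as you would any numerically similar risk" looks like, here's a back-of-the-envelope comparison. Both inputs are loudly assumed round numbers, placeholders rather than measured values:

```r
# Illustrative only: the two inputs below are assumptions, not data.
ifr_young  <- 2e-4   # assumed infection fatality rate for a healthy 30-year-old
p_infected <- 0.3    # assumed chance of catching the virus this year
covid_risk <- ifr_young * p_infected    # ~6 in 100,000

driving_risk <- 1.1e-4  # rough annual U.S. motor-vehicle death risk
covid_risk / driving_risk  # ~0.55, i.e. about half a year's worth of driving
```

Whatever the true inputs are for your demographic, the exercise is the same: put a number on the hazard and weigh it against hazards you already accept or mitigate.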

I apologize for being such a broken record on this issue. In fairness to myself, I've been posting much less frequently than I used to. These thoughts occur to me about ten thousand times as frequently as I write about them. I admit it's making me rather grumpy. I feel like I do a decent job of understanding contrary viewpoints. There are three main reasons one might fail to do so. One is that you fail to seek out such viewpoints. The second is that you observe such viewpoints but the topics and arguments are too subtle for you to understand. The third is that the viewpoint is hopelessly confused or poorly defined. It's not the first: I am positively swimming in the standard "everyone must treat this as a deadly catastrophe" narrative. And having listened to Noymer's blather for about 45 minutes, I can safely say it's not the second. What I am seeing is a refusal to think seriously about how to quantify and respond to risk. I think I am seeing bad arguments being back-fit to foregone conclusions, and it comes out looking like a confused string of non sequiturs. This is a deadly serious disease, which threatens some people very close to me who qualify as "vulnerable." It needs to be treated with clear-headed thinking. 

Inconsistency/Hypocrisy In Health Policy?

Our friends on the Progressive left often tell us what a dire catastrophe it is that so many people lack health insurance. Healthcare is expensive. So, the thinking goes, those without health insurance will not seek care when necessary, either because they flat out can't afford it or they are unwilling to pay steep prices out of pocket. Supposedly all this foregone health care leads to bad health outcomes and higher overall mortality. 

I have serious doubts about this story. Like I've said many times, the Rand health insurance experiment and the Oregon Medicaid experiment both failed to find any substantial health impact for the "treatment" group. (The treatment group being the group that got into a Medicaid plan in the Oregon experiment and the one that got essentially a zero deductible in the Rand experiment.) And this result is consistent with a lot of observational/regression studies showing the same thing. Put that aside and let's say it's a plausible story that "lack of insurance" -> "less consumption of healthcare" -> "worse health outcomes". (The first causal link is real, but the second is not, assuming the obvious interpretation of the Rand and Oregon experiments is the correct one.)

My question is: Where have these commentators been all year? Consumption of health care is way down, and it's not just nonessential stuff. People aren't just skimping on their annual check-ups. Some people are so afraid of Covid that they're declining to seek treatment for a possible heart attack (which, given enough examples, means some people are not getting treatment for an actual heart attack). There has been a disruption of cancer treatments. People with known cancers haven't been getting their treatments on time, and cancer screenings are way down, which presumably means fewer cancers are getting caught in time to treat them. People are more prone to dither instead of seeking treatment at the first sign of a stroke, and rapid treatment can spell the difference between life and death for a stroke victim. Patients aren't making it in to see their physicians for prescription renewals that require an office visit. The reduced consumption of medicine is due both to patients' fear of contracting Covid and to the initial lockdown orders that put a temporary halt to "discretionary" health services. (Jeff Singer has a useful discussion of the issue here.)

Mental health has taken a serious hit. This is likely more due to the lockdowns themselves than to disrupted health care, but both effects are in play. Oddly enough, the only "statistically significant" effect of the Oregon Medicaid experiment was the improvement in mental health for the treatment group, and this was touted as a kind of success. In the Oregon experiment, most of the improvements in mental health happened before there was time for any appreciable amount of health services to be consumed, which probably means the mental health improvements were mainly due to peace of mind about the ability to obtain health care. If that's the case, a lot of people have been living without that peace of mind for much of the past ten months. 

My own view is that Progressive commentators on health insurance are wrong about the health consequences of being uninsured. But I also think that the sudden, extreme lack of availability of health services this year has caused real health consequences. You can go to the ER with a heart attack and you will receive treatment, insurance or no insurance. But if people are simply declining to go because they've been unduly frightened of Covid (or appropriately frightened, but at the cost of ignoring other hazards to their health), I would expect that to show up in aggregate mortality figures. Much attention has been paid to the excess deaths in 2020, which some are attributing entirely to Covid-19. I think the story will turn out to be a little more complicated as the data get sorted out. I would guess that the excess deaths in April and May are primarily due to Covid, but disruption of health services may have become a more important causal factor later in the year. We will know more at the end of 2021, because the CDC publishes its aggregate "cause of death" data at the end of the following year (the Wonder database and the detailed mortality file that I have been analyzing for the past five years). But if your priors are "going without healthcare leads to bad health outcomes," you should be very upset about the disruption of services in 2020. 

Where is the outrage? I'm sure there has been some commentary on this, and a motivated reader could flood the comments of this blog post with links to news stories. But I've been sampling from the standard news streams. This story should be a major scandal, but it's a barely audible whisper in the cacophony. Nobody wants to say anything that sounds like "We exaggerated the risks of Covid." Suppose we try to deliver a slightly subtle message to the public, such as, "Covid is indeed dangerous, but not so dangerous that you should ignore the early signs of stroke or heart attack, or forego routine checkups and screenings." I think the narrative crafters, our public health professionals and media folks, are paranoid that this will be heard as "Covid isn't really a big deal" by a news-consuming public that has no appetite for nuance. They also don't want to put a single arrow into the quivers of conspiracy theorists or malcontents who think that lockdowns are harmful. I think these policy makers and commentators need to contend more seriously with the ways they've been hurting people (even supposing that lockdowns and extreme caution are on net beneficial). To the extent that these are the same people who were telling us how deadly it is to be uninsured, they need to confront an inconsistency in their own thinking.