Monday, April 26, 2021

A Good Piece on "Long Covid"

Here is an excellent piece on "long covid" by Adam Gaffney, who describes himself as a pulmonary and critical care physician. He is remarking on the phenomenon of long-haul outcomes of a prior covid infection. The whole thing is worth reading. 

I've put down my thoughts on "long covid" repeatedly on this blog. I've said that this seems like the phenomenon often seen in policy advocacy. There is a technique for inflating the importance of a problem. It goes something like this. An advocate broadly defines the problem to include even minor instances of it, such that one gets the largest, scariest possible total. Then s/he offers the most extreme cases as examples of the problem being quantified (rather than offering, say, a typical example, or a random, representative sampling). I think that's what's going on with claims that long covid is a big deal. Yes, there are individuals with tragic long-haul symptoms. Scarring of the lungs, damage to the heart, and so on. We should supplement data on "death counts" with this information and provide useful context about how well survivors fare. But these severe outcomes just aren't typical. Read the piece. "Long haul" could simply mean someone is feeling "brain fog" for weeks or months after a covid infection. (As far as I know, I haven't had covid. But I've certainly felt "brain fog" in the past 13 months. Could this have anything to do with my regular working life being rearranged?)

Another point of caution is that some of these "long haul" health outcomes might have nothing to do with the prior covid infection. It's really difficult to assign a cause to something, whether it's a society-wide problem like rising crime rates or a personal one like chronic health issues. I've made the analogy before to doing an MRI for back pain. Often the doctor will look at such an image and find some insult, like a bulging disk, and use that to "explain" why the pain is occurring. But doctors find a comparable number of such insults on scans of completely normal people. It's weird to pick out some detail that's in the background of everyone's life and say, "This is the cause of your problem," given that the same condition fails to cause a problem for most individuals. Presumably some of these problems have something to do with people's lives being upended, their careers and futures wracked with uncertainty. It's going to be difficult to tease out whether long-term health outcomes are caused by covid itself or by the intense social isolation and traumatic shift in people's daily routines. (Anecdotally, I've been hearing about people taking up bad habits this past year. "The covid-15" anyone? I suspect we'll see some of this show up in official statistics.)

This seems really important:

First, a cause-and-effect relationship is typically unestablished in these articles.  The Times article contending that mild and resolved COVID-19 infections can lead to extreme psychosis months later left out some important context.  According to one international study, the incidence of psychotic disorders is around 27 per 100,000 persons per year, which would suggest that in the US, there are tens of thousands of new diagnoses of psychosis every year.  In 2020, a solid proportion of those new diagnoses will have occurred in individuals with a prior coronavirus infection.  Obviously, although the temporal link will no doubt feel powerfully suggestive to patients and their doctors, this does not establish causality.

Another reason to question the causal link between the virus and some “Long COVID” symptoms stems from the fact that some, and perhaps many labelled with long COVID appear to never have been infected with the SARS-CoV-2 virus.  For instance, in his August Atlantic article, Yong cites a survey of COVID “long-haulers” that found that some two-thirds of these individuals had negative coronavirus antibody tests, which are blood tests that reveal prior infection.  Meanwhile, the aforementioned study published on a pre-print server, organized by a group of Long COVID patients named Body Politic that recruited participants from online long COVID support groups, similarly found that some two-thirds of the long-hauler study participants who had undergone serological testing reported negative results.

[Read the piece. He goes on to acknowledge that serology can be negative for people who in fact were infected, but argues that it's implausible that all of these "long-haulers" who tested negative actually had covid.]
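Gaffney's base-rate point can be made concrete with some back-of-the-envelope arithmetic. The incidence figure below comes from the study he cites; the US population and the share of Americans with a prior infection by the end of 2020 are my own rough, illustrative assumptions:

```python
# Back-of-the-envelope base-rate check for the psychosis example above.
# The incidence figure (27 per 100,000 per year) is from the study Gaffney
# cites; the population and infection-share numbers are rough illustrative
# assumptions, not figures from the piece.
US_POPULATION = 330_000_000
PSYCHOSIS_INCIDENCE = 27 / 100_000   # new diagnoses per person per year
PRIOR_INFECTION_SHARE = 0.25         # assumed fraction infected by end of 2020

new_diagnoses = US_POPULATION * PSYCHOSIS_INCIDENCE
coincidental_overlap = new_diagnoses * PRIOR_INFECTION_SHARE

print(f"Expected new psychosis diagnoses per year: {new_diagnoses:,.0f}")
print(f"Occurring in people with a prior covid infection, by chance alone: "
      f"{coincidental_overlap:,.0f}")
```

On these assumptions, chance alone produces tens of thousands of new psychosis diagnoses per year in people who happen to have had covid, with no causal link required. That's the whole point about the temporal link being suggestive but proving nothing.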

This issue is an epistemic nightmare for me. For all I know, the "long covid" alarmists are absolutely correct. But I can't trust anything they're saying. The public health establishment has not been an honest broker of useful information (and that observation predates 2020). It's been marshaling whatever arguments and "evidence" it can find in favor of extreme caution and official government lockdowns. Should I treat "long covid" as a serious concern? Or should I treat it like so many strands of half-cooked spaghetti that have been flung in my face over the past year? Is this narrative being picked up and repeated because of its inherent plausibility? Or is there a media-government complex that creates a demand for this kind of terror-porn? As Bret Weinstein likes to say, our sense-making apparatus is broken. The tragedy is that we really do need useful and accurate information to navigate a public health emergency like the present one.

Wednesday, April 21, 2021

Inexcusable Ignorance About Covid-19

Here is a recent post by Tyler Cowen in which he quotes a comment, presumably with his endorsement given the context. Don Boudreaux, who like Cowen is an economics professor at George Mason University, has some disagreements with his colleague about how best to approach the pandemic. You can probably get the thrust of their disagreement from Boudreaux's recent post here. Specifically, Boudreaux is baffled (as am I) by commenters like Cowen who downplay the relevance of the age-mortality curve for covid-19. There is something like a three orders of magnitude difference in mortality for the youngest versus oldest cohorts. (Almost four orders of magnitude, according to this page from the CDC.) It would be shocking if this fact had no relevance whatsoever for advising which institutions to shut down, or advising individuals on what kinds of risks they should take. But Cowen plays this down like it's not even a thing.

Boudreaux is responding to a recent Econtalk in which Cowen was the guest. I listened to the same podcast and was likewise scratching my head at Cowen's comments. I wanted to respond specifically to some of the remarks made in the post linked to above (first link):

It is simply not a tenable policy to oppose pandemic lockdowns on the premise that COVID-19 only negatively affects a certain portion of the population. First, the fact that COVID-19 disproportionately killed the elderly was not something that was readily apparent right out of the box, when the virus was spreading rapidly. Hindsight is 20-20.

The first sentence is a raw assertion, not really justified by anything that follows. It was indeed apparent immediately that this virus had a disproportionate effect on the elderly, and it left children almost untouched. The Diamond Princess cruise ship gave the world the closest thing possible to a controlled experiment. Some very good information on the age-mortality curve came out of that episode. Here is a link I posted to my Facebook page March 14, 2020, right around the time that schools were closing and everything was shutting down. From that piece: 

Of the 416 children aged 0 to 9 who contracted COVID-19, precisely zero died. This is unusual for most infectious diseases, but not for coronaviruses; the SARS coronavirus outbreak also had minimal impact on children. For patients aged 10 to 39, the case-fatality rate is 0.2 percent. The case-fatality rate doubles for people in their 40s, then triples again for people in their 50s, and nearly triples yet again for people in their 60s. A person who contracts COVID-19 in their 70s has an 8 percent chance of dying, and a person in their 80s a nearly 15 percent chance of dying.

So, no, this isn't a case of "hindsight is 20/20". We knew very early on that children were basically not at risk, and that young people up to about 40 or so were at no more risk than from other seasonal viruses. At any rate, there's no excuse for someone not knowing that. This calls for a directed approach to risk mitigation, not society-wide lockdowns (voluntary or involuntary). Closing schools outright was a mistake, and it was knowable at the time that it was a mistake. (Certainly, children with at-risk adults in the home should have had the option of doing their school work remotely. I'll even say that anyone who didn't feel comfortable sending their kids to school for any reason, good or bad, should have had the same option. That's a very different proposition from saying everyone must do school remotely. Come to think of it, I've seen Tyler endorse the idea that schools are basically safe and should be reopened. How can one take such a position without acknowledging the age gradient?)
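To put the gradient in numbers, here is a rough tabulation using the case-fatality rates quoted above. The 50s and 60s figures are my own interpolations from the quote's "doubles" and "triples again" language, so treat them as illustrative only:

```python
# Case-fatality rates by age band, as quoted in the piece above (early-2020
# Diamond Princess-era figures). The 50s and 60s values are interpolated
# from the quote's multipliers and are approximate.
cfr = {
    "10-39": 0.002,
    "40s":   0.004,   # "doubles"
    "50s":   0.013,   # "triples again" (approx.)
    "60s":   0.036,   # "nearly triples yet again" (approx.)
    "70s":   0.080,
    "80s":   0.148,
}

baseline = cfr["10-39"]
for age, rate in cfr.items():
    print(f"{age:>5}: CFR {rate:.1%}, {rate / baseline:.0f}x the 10-39 risk")
```

Even leaving out the 0-to-9 cohort (zero deaths among 416 infected children), the quoted figures span nearly two orders of magnitude. Including the youngest cohorts, whose risk is close to zero, is what stretches the spread toward the three or four orders of magnitude mentioned above.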

Back to the comment that Tyler re-quoted:

Second, focusing solely on mortality is short-sighted given that approximately one-third of all people who get over COVID-19 suffer “long haul” symptoms that persist for months and may even be permanent in some. We cannot simply claim that the non-elderly have no reason to fear COVID-19.

I feel like I've talked this point to death. I wish they would be more precise about the harm of "long haul" symptoms. Do one third of survivors have permanent severe scarring of the lungs? Does having a persistent cough for a month that then goes away count someone as a "long hauler"? If it's the latter, it's really not so horrifying, probably not "become a total shut-in" worthy. People experience long-haul symptoms from seasonal flus and colds, too. Ever gotten a sinus infection or persistent cough after a bad cold? I have. It certainly sucks, but it's not "turn the world upside-down to avoid" level badness. I feel like people who are making this point are combining common but minor after-effects with severe but rare after-effects to get a scary-sounding total. I've spoken to a couple of friends, both about 50, who had covid and had some long-haul symptoms. One had a cough that took two months to clear up; the other said he'd been free of asthma medication for ten years but now has to take it again. Those are pretty serious after-effects, and I would personally take precautions to avoid them. But I just don't see those harms as warranting the extreme measures we're taking.

The commenter next tries a war analogy:

So far, COVID-19 has killed more Americans than we lost in World War II, and it took the war five years to do what the virus did in one year. Even though the majority of the deaths were 65+, these are staggering numbers. Losing well over 100,000 people under the age of 65 in one year alone is nothing to sneeze at, and that’s with lock-downs and other harsh measures being taken. A “let them live their lives” approach would doubtlessly have escalated those numbers greatly.

I've always found this to be a pointless exercise: comparing the death total from some kind of social problem or public health crisis to the death toll from a major war. It's not even a remotely useful comparison. Cancer kills about 600k people a year. Should we have a society-wide mobilization of resources to fight cancer? Probably not. That depends on how responsive the problem is to our proposed policy "fixes". It's such a confused comparison, and yet I see it all the time. Deaths from disease are to a large degree unavoidable and unresponsive to public policy. Deaths from war are, in some sense, a price that a society (or its government anyway) has decided to pay to avoid some greater evil or to stop a looming threat. (Of course wars are often terrible blunders, but WWII is probably the best historical candidate for "involving ourselves in war to prevent an even greater number of deaths.") Death totals from disease and death totals from war just aren't comparable, and there are no sensible policy implications that follow from noticing that this number is bigger than that number. 

The line about "losing over 100,000 people under the age of 65" misses some important nuance. I don't understand where this cutoff of age 65 comes from. When virus "optimists" like myself mention the age gradient, virus "pessimists" start talking about numbers (or worse, individual cases) of people below age 65 dying or having serious complications from the virus. There is an age gradient, not a cutoff. People age 55 are at a greater risk than people age 45, who are at a greater risk than people age 35, and so on. People who talk about the age gradient as having policy implications (and how could it not?) are implicitly acknowledging the deaths at all ages. There is simply nothing special about the number 65. Perhaps more importantly, this glosses over the issue of comorbidities. Yes, there are younger people who die of covid-19. The vast majority of them have some kind of pre-existing condition that makes them vulnerable. If there are identifiable conditions that make us many times more likely to die from covid-19, that probably has policy implications with respect to "focused protection." (Again, how could it not?) 

I put this all down a couple of weeks ago when I was feeling annoyed with Tyler's flippant remarks, but then Don Boudreaux and Dan Klein ably responded. I hope to retire from this subject. Death counts from covid are falling and the vaccine rollout appears to be a huge success. Hopefully it will be a non-issue soon. But the matter of future policy implications looms large. "Who was more right?" is an important question, not merely for ego-stroking and bragging rights. Some of these issues really do need to be settled, because they will come up again the next time there is a major pandemic, or even the prospect of one that fails to materialize. 

Wednesday, April 7, 2021

Expert Opinion Is Not Science

I have been extremely disappointed by the way "science" and "expertise" have been brandished in the public discourse, and this past year has provided many atrocious examples of these concepts being abused. The implication is always that "science" is an infallible, objective approach to learning the truth. Not just any truth, but the immutable, undeniable, irresistible truth. One must merely "do the science" and such truth emerges fully formed as an output of the truth-assembly algorithm. Often this naive approach to thinking about truth is taken by people who would feel out of place, even lost, in an actual science class. In other cases, it is indulged by people with the highest levels of education, possibly abusing their credentials to pretend their opinion carries more weight than it does. (I am connected to many Ph.D.s through social media, whom I know personally from my grad school days. I'd say many of them appreciate the finer points of philosophy of science, but it's shameful how many of them are content to play the role of the sneering professor.) There are many important questions that need to be answered in order to combat the pandemic. How effective are face masks at preventing the spread of disease? What are the benefits and risks involved with available vaccines? Did the virus emerge from one of the labs in Wuhan that was studying coronaviruses in bats, or did it emerge elsewhere? It simply will not do to declare an answer to such questions and call it "science" or "fact." And it is completely inadequate to line up some experts to give the same answers to these questions and call that "science." (As if these experts are doing "the science" in the background, which is impenetrable to outside observers, and just delivering the punchline.) Answers to such questions can be known with greater or lesser confidence, and there exists a right answer in some cosmic sense, even if it's unknowable.
But such answers are ultimately the opinions of human beings and should be understood as such. I think it's shameful the way that tech platforms are declaring themselves the arbiters of truth, taking down this or that video, selectively "fact checking" content, and tagging borderline content with warnings and qualifiers. The denizens of social media are no better; I see people at all levels of education on my FB and Twitter feeds, speaking as though there is this shortcut to the truth. It's more about narrative control than it is about truth. 

If people are treating science as if it's literally like a high school science class, where you memorize facts and regurgitate answers to various test questions, then this is a huge mistake. Facts are certainly needed. You need to memorize the various entities and steps involved in the Krebs cycle if you want to understand what's happening. To do chemistry, you need to memorize the number of bonds formed by common elements. And the theory of evolution by natural selection would be pretty uninteresting if you didn't know some examples of related species emerging from common ancestors, or if you weren't able to discuss the fitness advantage of a particular animal behavior or appendage. But the facts themselves are inert. They don't do anything except in light of some kind of theory. Saying "Proboscis monkeys have huge noses, isn't that cool!" isn't really doing science. Nor is it scientific to pronounce on the status of Pluto as a planet. That is the mere memorization of facts. (And the Pluto example is a mere labeling question, hardly even a "fact," certainly not an important truth of any kind.) An expert will know all of the relevant facts. They will be able to tell you the mass and orbital characteristics of Pluto. They might be able to tell you the mean and standard deviation of proboscis monkey noses, the sexual dimorphism, the differential success in mating of larger-nosed individuals compared to less well-endowed males. It's generally fine to take an expert's command of the facts at face value. But to "do science" one would have to integrate these facts with some kind of theory, such that they militate for or against some hypothesis. Expertise does not entitle one to say, "Proboscis monkey noses are large due to runaway sexual selection," and be uncritically believed by a listener. And a non-expert who merely memorizes such pronouncements and platitudes isn't scientifically literate. One must engage with the theory and be able to consider competing hypotheses.
"What if the large noses are functional? Maybe bigger noses are better for sniffing out fresh fruit or fertile females, such that there is a fitness advantage even independent of the female monkeys' preference for big noses. Maybe the female preference is chasing a real advantage." An expert could then marshal various facts for or against this competing hypothesis. Maybe nose size has no correlation with olfactory sensitivity, and the smaller noses of female proboscis monkeys (which are still large for a primate) call this into question. (As in, "Wouldn't this mechanism make female noses grow just as large as male noses?" The Wikipedia article on proboscis monkeys suggests that larger noses mean louder mating calls, and that females "prefer louder vocalizations". Is it female preference alone, independent of any functional value? Or are louder calls simply more likely to be heard, so that bigger-nosed monkeys are more likely to be noticed amid background noise?)

This isn't an article on proboscis monkeys. The specifics given above could be wrong in some important way (amateur or professional primatologists, feel free to comment), but that wouldn't matter to the point I'm trying to make here. Experts can be the memorizers and guardians of facts, but facts are small, tightly circumscribed nuggets of truth. E.g. the fact of a monkey's typical nose size, or the fact of a particular animal behavior. Broader statements about how the world works are not "facts." Statements such as "This appendage was created by natural selection via runaway sexual selection" or "The adaptive purpose of this particular animal behavior is X" are not facts. These are hypotheses, narratives that attempt to integrate disparate facts into a tidy whole. Of course, experts are free to have opinions about such things. They may even be entitled to some degree of deference when someone skeptically challenges their conclusions about such matters. They do, after all, have the relevant facts at their fingertips, and their academic training has probably led them to think carefully about competing hypotheses and reach a conclusion. But it would be scientific malpractice for them to conduct a class in which the students simply memorized their conclusions and recited them back.

I feel like many commentators are attempting to skip the hard part, where the actual thinking takes place and the engagement with competing hypotheses is done. There is an over-reliance on the opinions of "experts" by lazy journalists and the even lazier consumers of journalism who can't be bothered to crack a textbook or do a literature review. While it may be reasonable to take some sort of expert consensus as a starting point, it is wholly inadequate to simply chant "Expert consensus!" at someone who raises a plausible competing hypothesis or counter-narrative. The reliance on experts stems from something like the following thought process:

The expert has all the relevant information inside his head. He has integrated this data into a coherent whole and has already done the work of ruling out competing narratives. Counter-narratives must thus be coming from ignorant laypeople, who don't have a command of the relevant facts. Or they come from cranks, who have the facts at their disposal but whose reasoning ability is deeply compromised.

Again, this is okay as a starting-point, but it fails to incorporate any kind of error correction. It insists that the error-spotting-and-correcting process has been done, and the output is somehow infallible. I think there is an even stronger version of the above story, where the process that leads experts to their opinions is shrouded in mystery and inexplicable to outsiders. It looks like the following:

Experts have much more than a simple command of mere facts and data. They have deep insights that cannot be shared with amateurs or explained to a lay audience. Their internal process for integrating facts and data into a coherent worldview is ineffable. A single sentence spoken by such an expert cannot be unpacked without years of academic training. Every word or phrase they speak must be defined and contextualized. The venture of analyzing their plain-spoken pronouncements is doomed from the start. One can no more "explain" expert opinion to a lay audience than one can explain how a human being recognizes another human's face. It would be akin to "explaining" how a long, complex piece of computer code generates its output, or how a neural net model with thousands of fitted parameters generates predictions (say, on which picture is a "cat" or "not a cat"). There is simply no shortcut that would make "expert opinion" legible to outsiders. It is a black box whose output must simply be taken on authority.

I don't know if anyone would literally sign up for this version of the "believe expert opinion" story, but some commentators are implicitly appealing to it. If someone presents a reasonable-sounding challenge to expert opinion, there should be some sort of coherent response available. "Why do climate proxies (tree ring thickness, concentrations of various isotopes, etc.) fail to predict temperature in the only era (1970s to present) where we have extremely high-quality data for both? Does this in any way compromise the attempts to reconstruct the paleo-climate using these proxies?" There can be a perfectly good answer to this question, and maybe its premise is not even true. But the answer should be something that's consistent with the scientific narrative that the experts are telling us. The answer shouldn't be, "Look, we're a bunch of really smart people who spend all of our time building very complicated climate models. We spend our waking hours poring over the output of these models, checking them against historical observations, refining the models, and gaining insight. The explanation would be too 'mathy', and your attention span would waver. Even assuming math comprehension weren't a barrier, there is simply no way to coherently communicate the insights built up over decades of practice. Just take our word for it." Given the sneering response I often see to challenging questions (and the climate proxies example is one such), I'm comfortable saying that some people have indeed backed this stronger version of expert opinion.

Where should I even start with this? If this story is true, then we're all basically fucked. There can be no interdisciplinary research in this world, because even adjacent disciplines could not speak to each other. Could we even have confidence that experts within the same discipline understand what each other are saying? How could we if every word spoken has a subtext of "ineffable knowledge"? How could we know that two experts speaking to each other are speaking the same language? How could we be sure that they are unpacking the same words with the same tools? (I'm not the first person to suggest that "experts" speak a different language than us normies while using the same words. Here is Bryan Caplan expressing his horror at the question of academic speech being impenetrable and giving the example of Paul Ehrlich basically admitting this was the case. Is Ehrlich a one-off, or is he just the tip of a massive epistemic nightmare floating below the surface of public discourse?)

I have a better story. Often experts can't give good answers to challenges by educated laypeople. They are sometimes unable to convince practitioners in adjacent disciplines of their "consensus" narrative. This isn't a sign of deep, tacit knowledge. It's a sign of group-think. Excessive specialization and navel-gazing causes experts to have a view that is too narrow. Frankly, and contra the "tacit knowledge" story, it's not necessarily all that deep, either. 

Let me make just a few observations that are self-evident to most people. 

There is often enormous social pressure to conform. A crankish-sounding theory might never get its day in court, because its proponents are silent. This is definitely happening in climate science. Scientists who underplay the role of CO2 in the planet's recent warming (there are very few who doubt it entirely, contrary to the "denier" sneer) are professionally ostracized, their credentials called into question. I recommend the book The Hockey Stick Illusion by Andrew Montford for a powerful telling of this story. Some hacked e-mails from 2009 uncovered an explicit conspiracy to oust climate scientists who were insufficiently loyal to the alarmist narrative. The alarmists discussed removing certain scientists as reviewers from various scientific journals. Now, the alarmists could be very right about the scientific questions. But one should be skeptical of any claims that come out of an environment that is this hostile to non-conformists. See also this example of how one hypothesis about the extinction of dinosaurs became dominant. There is a great deal of bullying and social pressure, and it makes me doubt the asteroid hypothesis despite its intrinsic plausibility. The fact that other hypotheses were systematically excluded from consideration should automatically make one skeptical of the "consensus" view.

(There was a recent Dark Horse Podcast with Bret Weinstein and Heather Heying that touched on this question. Bret is himself something of a climate alarmist. He's gone so far as to suggest that carbon emissions present an existential threat to humanity, or anyway that they should be treated as such in a "tail risk" sense. In a recent Dark Horse Q&A, he states that there is enormous pressure to conform in the climate sciences. He knows this, because whenever he or Heather say something "incorrect" he gets brutally corrected by a torrent of e-mails. And he quite wisely suggests that this pressure to conform must be even stronger for someone operating within the climate sciences. He doesn't take this as an opportunity to doubt his own alarmism, but he is definitely on to something.)

Am I getting this backwards? Perhaps scientific certainty comes first. All right-thinking people agree to the correct theory in fairly short order. Social pressure and bullying are last-resort remedies for crankish hold-outs. This is the story that the "trust the consensus" crowd wants to tell us, but it's not really plausible. Some of the statistical arguments in the paleo-climate wars are legible to me, for example. The responses to Steve McIntyre's critiques by establishment people amounted to childish name-calling and credential-brandishing. So it always goes.

Experts often disagree with each other! Have you ever heard of a "second opinion"? The notion that two doctors might disagree on the diagnosis and/or the optimal course of treatment? See the part just after paragraph 2 in this post, where I link to two studies on disagreements among doctors on cause of death determination. These are the most highly educated people in society, practicing their craft and making pronouncements on the cause of a person's death. They tend to not reach the same conclusion given the same set of facts. 

The previous section is slightly in tension with this one. Wasn't I just saying there is social pressure to conform? That we should expect to see hive-mind behavior, and now I'm saying look over here at all this expert disagreement? I think that comparing the opinions of doctors on individual cases gives us a useful insight into "expert consensus" regarding bigger questions. There is no social pressure to conform in the case of an individual cancer diagnosis or cause of death determination. The doctor is, for all s/he knows, making the first (and perhaps final) call on the question. Some bigger questions are just as complex as these individual determinations. Suppose we could mind-wipe all climate scientists or public health professionals, erasing all knowledge of colleagues' opinions but somehow leaving their practical scientific knowledge intact. If we asked such scientifically literate blank slates to reach a conclusion on the magnitude and causes of climate change or the effectiveness of masks in preventing the spread of pandemic diseases, I'm sure we'd get a broader range of opinion than what we're currently seeing. An implicit assumption of the "expert consensus is good" story is that these experts are all independently reaching the same conclusion. That would indeed be impressive, but the social pressure to conform means we can't assume independence of expert opinions. 
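The independence point lends itself to a toy simulation. Suppose each of 100 experts estimates some true quantity with noisy error, and compare the case where their errors are independent to the case where most of the error is shared (group-think). All the numbers here are illustrative assumptions, not data:

```python
import random

random.seed(0)
TRUE_VALUE = 1.0
N_EXPERTS, N_TRIALS = 100, 2000

def consensus_error(shared_weight):
    """Average absolute error of the 100-expert consensus.

    shared_weight = 0.0: each expert's error is independent.
    shared_weight = 0.9: most of each expert's error is a common,
    shared bias (a stand-in for social pressure to conform)."""
    errors = []
    for _ in range(N_TRIALS):
        shared = random.gauss(0, 1)  # bias common to all experts this trial
        estimates = [
            TRUE_VALUE
            + shared_weight * shared
            + (1 - shared_weight) * random.gauss(0, 1)
            for _ in range(N_EXPERTS)
        ]
        consensus = sum(estimates) / N_EXPERTS
        errors.append(abs(consensus - TRUE_VALUE))
    return sum(errors) / N_TRIALS

print(f"independent experts: mean consensus error {consensus_error(0.0):.3f}")
print(f"correlated experts:  mean consensus error {consensus_error(0.9):.3f}")
```

With a large shared component, averaging 100 experts barely helps: the consensus simply inherits the common bias. A unanimous expert chorus is only impressive evidence if the opinions were formed independently, which the social pressure described above makes doubtful.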

Besides, there are areas of academia where the experts disagree with each other on big, important questions. See the disagreement among economists on the effect of minimum wage on employment. This is still quite contentious, and nobody is credibly claiming to have "settled" the matter, given the loud and persistent disagreement from the other side. The disagreement between Richard Dawkins and Stephen Jay Gould comes to mind as well. In some of these cases I am knowledgeable enough to have an opinion, even to declare one side of the disagreement "clearly" wrong. But it would be a mistake to declare that "science" has rendered a definitive verdict. Science is the process of uncovering truth. Any verdict one could render, however sound, however confidently we hold it, however well it maps on to the cosmic truth, is ultimately just someone's opinion.

Political decisions are made first; "expert consensus" is then marshalled in favor of the desired policies. Often it is the policy tail that is wagging the science dog. Some kind of policy is crafted, and scientists are employed not as honest truth-seekers but as lawyers whose job is to zealously represent their client (in this case, the state). I've seen multiple examples this year of "consensus" turning on a dime to accommodate some new trend or some change in the received "wisdom". (See section 7 in the SSC post, on the change of the consensus position on mask wearing.) I found it shocking to read this paper, titled Disease Mitigation Measures in the Control of Pandemic Influenza, published in 2006. I came across it back in April of 2020 and noted at the time how dramatically the consensus view of public health had shifted within the first month of the pandemic. Be suspicious whenever the party line whipsaws so dramatically.

It's also important to keep in mind that you can't get an "ought" from an "is". Even assuming the experts are all speaking in one unified voice and they are telling us the immutable truth, they have no right (in their capacity as subject-matter experts) to make "should" statements. It's not their business to create policy. I wrote here about the inadequate moral philosophy underpinning the public health establishment. Instead of just giving us accurate information about risks and benefits, public health "experts" appear to be deciding what decisions we should want to make. They then feel entitled to bend the truth, to tell us whatever noble lie we need to hear so that we'll make the "right" decisions. For example, the FDA grossly exaggerates the risks of vaping and excessively regulates vaping products. This makes smokers less likely to make the switch, either because they believe the FDA's misinformation or because the products are made unavailable or unduly expensive due to regulation. The FDA would love for everyone to stop smoking and vaping altogether, but that's a value judgment, not an "expert opinion." It doesn't matter how well trained you are as a doctor or a biostatistician. Normative questions, questions regarding what people should do, are theological in nature. They are not scientific, and scientific expertise doesn't qualify someone to make such judgments.

I have seen a similar bending of the truth this past year. There has been an odd refusal to acknowledge almost any mitigating information regarding the coronavirus. There are several things that we have known since the beginning of the pandemic, either because they are obvious or because we had good data even early on. I pointed out several here: past infection confers immunity, young people's risks are orders of magnitude lower than those of seniors, death certificates can contain errors (possibly in a systematic way that makes population-level statistics unreliable), the virus isn't really spreading in schools. The response to anyone who pointed out these truths has been some combination of apoplectic outrage and either a denial of the claim itself or (and this one must take some mental gymnastics) an admission of the raw fact coupled with a denial that it has any policy implications whatsoever. The refusal to admit the implications of the differential risk for young people is especially galling. The public health establishment somehow decided that young people shouldn't be socializing, even though the virus poses minuscule risks to them. Maybe they were thinking, "We can't tell young people to go about socializing like normal, because then they'll spread Covid to older individuals." Fine, say that. The advice to young people who are living with vulnerable individuals should be different from the advice to young people more generally. "You're not at risk, but if you catch the virus you can put someone else at risk," is a perfectly sensible, nuanced message. Perhaps the public health establishment was worried about long-haul effects of the virus. That's fine, they should say that, too. But they should admit that this is a guess. They should acknowledge that there are serious problems with treating speculative, unquantifiable risks as all-trumping factors in the decision calculus.
(Sometimes called the "precautionary principle", this is really an example of Pascal's Mugging.) Such a guess about unseen risks isn't science, no matter how many subject matter experts agree that it's worth worrying about. 

Human thinking just isn't that deep, even thinking done by experts. Above I presented a model of expert opinion-making as a deeply structured black box, similar to a neural net model, where tons of relevant information go in, some kind of learning process happens, and an opinion comes out. I'll suggest here that this just doesn't describe how anybody actually thinks. Our brains can only hold a tiny amount of information in active memory at any time. There is no process analogous to a machine learning model that literally reads all the relevant information into memory at once. The mind is not like a regression algorithm (or neural net or gradient boosting machine) that passes over every available data-point; the brain's "best fit" algorithm (whatever it is) does not scrupulously ensure that the model of reality is informed by all the information it has seen. I don't want to overstate my point. Experts are certainly better than novices at digesting disparate information and manufacturing a coherent synthesis. They likely learn to hold large concepts in their working memories, concepts that would overload someone not familiar with their field. They have done the hard work of collapsing disparate details and long derivations into tightly packaged concepts. When they do chunking, their "chunks" are larger and more profound. But human brains are imprecise and subject to all kinds of cognitive biases and blind spots. We should not model experts as "having thought it all through" in a mechanical sense. So we shouldn't grant them infinite deference as if their minds were impenetrable black boxes. We should expect them to present their reasoning and subject their analysis to public audit. If these presentations contain elementary errors of logic or math or factual mistakes, we should feel comfortable pointing them out, not simply content ourselves that the "experts must be right, just for the wrong reasons."

I want to stretch the machine learning analogy further. A facial recognition algorithm (or any other kind of classification algorithm, like a logistic regression or an xgboost model) doesn't just look at a picture and say, "That is Bob". A model trained to recognize photos of cats likewise doesn't say, "This is absolutely a cat." The model outputs some kind of probability. All the pixels of the photo go into the model, a lot of matrix multiplications happen, and out comes some number, say 0.85, representing the model's probability estimate that it's a cat. To get to "this is a cat", one has to set some kind of threshold, such as "everything above 80% is a 'cat', everything else is a 'not cat'." 

Does human reasoning do anything vaguely analogous to what machine learning does? Do expert judgments on important questions come with implicit probability estimates? Are statements regarding the effectiveness of mask-wearing or climate mitigation measures undergirded by probability point estimates? ("There is an 80% probability that what I just said is true.") Do these point estimates come packaged with confidence intervals? I'm going to say "not really." Unless they are trying really hard, people (even very smart people) think pretty shallowly. They at best come up with vague impressions that they think are true, and they only compare competing hypotheses when explicitly prompted to do so. When a numerical probability is absolutely demanded of them, people will disagree about what is meant by terms like "probably", "surely" and "almost certainly." (I recommend Philip Tetlock's books on expert predictions, Expert Political Judgment and Superforecasting. Even experts are pretty bad at this whole prediction business when nailed down to a precisely stated prediction including a numerical estimate. There is a long section in Superforecasting on the inconsistent meaning of terms like "probably" when used by intelligence analysts. Presumably they aren't the only kinds of experts plagued by this imprecision.)

I haven't yet touched on the requirement for experts to be multi-disciplinary. A public health official recommending a mask mandate isn't just calling on medical expertise. It's not sufficient to simply do a literature review, conclude that masks probably work, and then pronounce that a mask mandate is a good idea. A sound recommendation would also require some knowledge of PPE production capacity, some economics to understand the costs of ramping up production in one sector (which logically implies a reduction of capacity elsewhere in the economy), social psychology to understand compliance issues, and so on. It requires the ability to do a big cost-benefit analysis that accounts for many factors, and no one person has the expertise to do this. The expert's internal "black box" simply isn't accounting for all the relevant information. That's not to say public health professionals shouldn't even be trying, or shouldn't have opinions on such matters. They should. But the public should be questioning and auditing their pronouncements, not treating them like high priests of an unquestionable religion. Prod an expert, and you'll find that their black boxes can be opened, and the contents are mostly legible.

(Everything in the above paragraph goes equally well for climate scientists who make policy recommendations. These experts deserve some degree of deference when they're answering questions about the climate sensitivity, rates of loss of ice, and so on. But they are not qualified to make or pronounce upon policy issues, not in their capacity as climate experts anyway. Even leaving aside the "can't get an ought from an is" issue, policymaking requires knowledge of economics, cost-benefit analyses, existing and emerging technology, etc. A climate modeler is likely to be narrowly focused on their one area of expertise. Their internal "black box" is almost certainly not incorporating a deep understanding of economics and emerging tech. So they are unlikely to be able to say what is the optimal carbon tax, or whether a carbon tax is better than a cap-and-trade, or whether cheap nuclear fusion power will come within the next thirty years and make the carbon question moot.)

Let's have a better public discourse. Let's allow heterodoxy into the public square. Let's tolerate reasonable questions by informed laypeople, even at the risk of giving a few cranks a platform. If the experts are unable to convince informed outsiders of their consensus views, let's stop credulously deferring to them. I'm very tired of seeing "science" brandished as if it's a shortcut to the truth, a "right answer" that can simply be bubbled in on a multiple choice exam. 

________________________________________

I wasn't sure where to put this into the flow above, but I wanted to offer an example from Jim Manzi's excellent book Uncontrolled. Manzi describes a situation where a head of state is soliciting expert advice. In one case, he asks a historian for their opinion on current geopolitical topics, drawing on the historian's knowledge of analogous events from the past. In another case, he asks a nuclear physicist about the prospects of a rival state developing a nuclear weapon. Manzi suggests that it would be unreasonable for the politician to substitute his own judgment for that of the physicist. But it would be irresponsible not to second-guess the historian. Both are examples of "expert opinion." But in one case, there is a "right answer". The rival state's nuclear readiness depends on things like the quantity of fissile material available and the sophistication of their technology. It's basically an engineering question. But the historian's opinion is intrinsically wrapped up in the individual's ideology and internal biases. Which player in the current situation is analogous to the "bad guy" in the historical anecdote? Which historical episode is the right one for comparison? Is there an overlooked counter-example? Is the historian hobby-horsing with his favorite pet theories? When we ask pure scientists to opine on government policy or big not-strictly-factual questions beyond their immediate area of expertise, we are converting them from pure scientists to historians. They need to learn to take a step back and say, "I'm no longer speaking as a scientist, where I am truly an expert and deserve some deference. I'm straying into the humanities. I'm doing social psychology, historical analogizing, and moral philosophy, subjects in which I am in no way superior to any of my listeners."

I wanted to mention another example of "experts" insisting that an outsider merely misunderstood their field and wasn't competent to critique it. The recent "hoax" papers by James Lindsay, Peter Boghossian, and Helen Pluckrose were collectively an amazing piece of scholarship. Lindsay described the impetus for the papers in a recent podcast. I recommend listening to the entire thing. Postmodernists and critical theorists kept telling him that he "just didn't understand" their philosophy. So he, Boghossian, and Pluckrose set out to publish utter nonsense in their "academic" journals. They succeeded. And Lindsay is very clear that these articles weren't "hoaxes". They used exactly the tools and the academic language of the scholars he was criticizing. The trio's spoof articles were really indistinguishable from the other drivel published in those journals. It was proof that he and his crew did indeed understand the philosophy they were critiquing. They offered a Trojan horse, and postmodern journals scooped it right up.