Wednesday, December 29, 2021

Against “Kinder, Gentler” Socialism

There have been some recent attempts to resuscitate and rejuvenate the dead ideas of socialism. Despite its having been thoroughly refuted by the experience of the 20th century, advocates are always appealing to some slight variation on the basic concept that's "never been tried." You can read some of my recent posts as pushing back against these attempts to redeem a fundamentally misguided idea. 

In this post on worker ownership of the firm, I'm arguing against a thread of socialism that downplays state control of production and endorses worker ownership of the firm. As I read it, this position is actually a substantial retreat from socialism as it was initially conceived, an attempt to "moderate." Or perhaps it's just an attempt by crypto-communists to appear moderate to onlookers. It's as if they recognize that socialism in actual practice (as in literal state control of the means of production) has a horrendous track record. "No, no, we're not endorsing that." Ben Burgis and Richard Wolff are attempting to achieve as much socialism as possible through voluntary arrangements, while still not shying away from using the machinery of the state to mold the world toward their imagined utopia. I think their vision of an economy dominated by worker co-ops is extremely unlikely. Apparently the workers agree; the vast majority of workers in free economies are wage and salary employees. Very few workers get a substantial share of their income from residual claims against their employer's revenue (as you would as a partial owner). Even this kinder, gentler, warmer, fuzzier variant on socialism is a terrible idea.

In this post I explore the experience of the Israeli Kibbutzim. These were institutions of private socialism that started strong at their founding and then went into decline. The problems that a basic econ 101 analysis would warn you about began to materialize and take a bite. If "kinder, gentler" socialism were a viable option, the Kibbutzim would have grown rather than shrunk. The obvious incentive problems and brain drain took their toll. Despite reforms in a pro-market, pro-property direction (hiring outside firms to run their commissaries, giving members the right to own more private property and to leave the Kibbutz with it, etc.), life outside the Kibbutzim was more attractive.

I see people like Burgis and Wolff as misstating the historical record, or simply not dealing with it. They subscribe to a vision of the world that I don't recognize, where companies have "power" over employees and customers. (They don't have "power." They can only offer a thing for a price, which the customers and workers can freely take or leave.) They fail to recognize the massive improvements in living standards over the last two centuries, or at any rate they fail to attribute those improvements to private enterprise. So they end up inventing solutions in search of problems.

Mao and Stalin were mass murderers. I'm glad that today's defenders of socialism at least see a need to distance themselves from them and say their program is a different thing. This is a kind of progress. But the problem isn't just that these people were big meanies. State socialism failed to deliver the goods. If the problem were merely that corrupt, evil people took over the machinery of the state, we still should have observed superior economic growth (with the proceeds going to the corrupt rulers rather than the workers more generally). No, the problem with socialism is the incentive problem. People treat communal property like trash (compared to how they treat their private property). People don't work as hard when their salary doesn't depend on their productivity. Those problems don't go away when you retreat to a gentler, more voluntary form of socialism. I'm happy to see experiments in communal living arrangements, and I think some version of this can succeed in a tightly knit community of very dedicated individuals. (The Kibbutzim fell just short of this, but came close.) I don't want to over-analyze the motives of modern defenders of socialism, but I see them as not willing to let go of something when history has given us a clear verdict. I detect a desperate clutching to whatever variant of this idea remains "untried." They've retreated to a superficially defensible enclave of idea-space. But they're trying to defend something that's fundamentally indefensible.

__________________________________

I want to apply a "line in the sand" test to some of these people. As in: Is there a line they won't cross, where the economy becomes "too socialist" or the government exerts too much control over markets? Is there theoretically a point where they would say, "Nope, this is too much socialism. We need more economic freedom and private incentives." If not, they are crypto-communists masquerading as sensible moderates. They want whatever amount of socialism they can get away with. If they got what they asked for, they would simply push further in the same direction. I often sense that there is no limiting principle. (Not just w.r.t. socialists pushing socialism. The answers to questions like "How much should we tax cigarettes?" or "How much should we pay school teachers?" always seem to be "more", without reference to the current level or recent trendlines.) Maybe most of these gentlefolk have a limit in mind, and I'm being paranoid for entertaining this hypothetical at all.

Study on the Origins of the Opioid Crisis Published

Almost two years ago, I wrote this post describing a pre-print paper on the origins of the opioid crisis. It was an interesting attempt, but I think it was fundamentally confused. For example, it supposedly finds that differences between states in opioid overdoses are driven (in part) by differences in law. (So-called "triplicate states" required doctors to fill out forms in triplicate when prescribing narcotics; other states did not.) Their punchline is that it's not the law difference itself that's driving the difference, but rather Purdue's response to the law. Specifically, Purdue's decision not to market as enthusiastically in triplicate states caused the difference, according to their analysis. But if triplicate states would have seen the same overdose deaths had Purdue given them equivalent marketing attention, then what are the policy implications? The paper tries to suggest that other states should have had triplicate laws, but its own argument suggests that that strategy would have had limited success. Suppose all states were triplicate states. Doesn't that just mean Purdue would have said, "Screw it, we're going to market just as hard everywhere"? Their causal story might be true, but their attempt to draw policy conclusions from it is hopelessly confused.

There are other major problems with the paper, which I outlined in my earlier post. I have never seen any of the opioid alarmists grapple with the fact that there is no trend whatsoever in the number of users or addicts. An important link is broken in their chain of causation leading from opioid marketing to opioid overdoses. And they never seem to acknowledge that the "opioid epidemic" is just a continuation of a pre-existing trendline stretching back to 1979, at least. Also, by all accounts Purdue had a tiny market share of total opioids prescribed. So it's weird to single them out for blame.

I can't seem to find the published version of the paper online, so I can't see if the final version addressed my concerns. If I manage to get my hands on it, I'll do a follow-up post discussing whether my criticisms were answered.

Wednesday, October 20, 2021

Overall Mortality and Drug-Related Mortality by Age, 1999-2018

With Covid-19 hitting the world in mid-2020, I was having a lot of thoughts about how to measure public health crises. Covid deaths were numerous enough to show up in the all-cause mortality figures. This basically ruled out the possibility that Covid was far less severe or far less common than the official statistics implied. Cause of death misattributions could conceivably overstate deaths in one category while understating those in another, but they couldn't drive an upward trend in overall mortality. The excess deaths look particularly stark if you break down the analysis by demographic and look at those groups especially hard-hit by the virus. The all-cause mortality figures ruled out a lot of kooky conspiracy theories and narratives that minimized the severity of the virus. 

I thought this fixation on all-cause mortality could be useful in other contexts. Of particular interest to me was drug-related mortality. I believe that there is a huge misattribution problem in drug-related causes of death, particularly with regard to prescription opioids. If someone dies with a syringe in his arm, maybe the cause of death is unambiguous in that case. But millions of people are walking around with blood levels of opioids that would kill a naïve drug user. Surely these people sometimes drop dead of unrelated causes, like a sudden fatal heart arrhythmia, and the medical examiner wrongly marks it as a drug overdose. If drug overdoses (as marked on the death certificate) are a large proportion of these deaths, but the all-cause mortality doesn't change much, it raises suspicions. Of course, other causes of death could be falling at the same rate that drug overdoses are rising. It's certainly possible. But it would look really suspicious.

I downloaded all-cause mortality by year, age, race, and gender from the CDC website here. I appended total drug deaths based on my decoding of the large cause-of-death file (which I've reported on for several years; see my work-ups of the data for 2016, 2017, and 2018). Since the opioid epidemic is disproportionately a white male phenomenon, I limited the analysis to that demographic. (Males have something like double the drug-related mortality of females, so the effect of an opioid crisis on their all-cause mortality will pop out more if we limit the analysis to them.) Keep that in mind for all charts and numbers below: the dataset is filtered to white males. Many of my observations stand without this filtering; I did the analysis both with and without. I'm sharing the filtered version in this post.
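For anyone who wants to replicate the setup, here's a minimal sketch of the merge-and-filter step. The tiny inline tables are toy stand-ins for the CDC extract; the real files, column names, and values will differ.

```python
import pandas as pd

# Toy stand-in for the CDC WONDER extract; real column names may differ.
mort = pd.DataFrame({
    "year": [1999, 1999, 2010, 2010],
    "age": [30, 40, 30, 40],
    "race": ["White"] * 4,
    "gender": ["Male"] * 4,
    "deaths": [1500, 2400, 1600, 2300],
    "population": [1_000_000] * 4,
})
drug = pd.DataFrame({
    "year": [1999, 1999, 2010, 2010],
    "age": [30, 40, 30, 40],
    "drug_deaths": [90, 110, 250, 300],
})

# Append drug-death counts to the all-cause table.
df = mort.merge(drug, on=["year", "age"], how="left")

# Restrict to white males, where an opioid effect should pop out most.
wm = df[(df.race == "White") & (df.gender == "Male")].copy()

# Rates per 100k for comparability across ages and years.
wm["all_cause_per_100k"] = wm.deaths / wm.population * 1e5
wm["drug_per_100k"] = wm.drug_deaths / wm.population * 1e5
wm["drug_share"] = wm.drug_deaths / wm.deaths
```

With the real extract, `wm` is what every chart below is drawn from.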

Another thing to keep in mind is that prescription opioid deaths dominated the trends in drug-related deaths from 1999 to about 2010. After 2010, heroin and synthetic narcotics like fentanyl began to dominate the trends in drug deaths, and prescription opioid deaths tended to flatten out. I will refer to these periods below and try to justify my claim that these are two different epochs. 

First I want to get a feel for the age distribution of drug-related mortality. Below is a density plot, which basically shows the distribution of deaths in each of three years. It looks like the distribution shifted slightly older between 2000 and 2010, then younger by 2018. The average (not shown) bounces around between 40 and 42.5 over this time period, which doesn't sound like much. But the distributional shift maps onto a story of prescription opioid use rising in the 2000 to 2010 period and being more pronounced in the 40+ demographic (the modal drug poisoning is just shy of age 50 in 2010!). Then the use of recreational heroin and fentanyl rises in the 2010-present period, hitting younger users harder. The modal age of death is around 30 in 2018, but do note the fatter tail, extending further into older ages than in 2000 or 2010. The mode can shift a lot without the overall average budging much. Anyway, the story of pills being an older phenomenon and street drugs being a younger phenomenon makes intuitive sense to me and is consistent with what I know generally about drug use patterns.
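The point that the mode can swing while the average stays put is easy to demonstrate with two invented deaths-by-age curves (illustrative shapes only, not CDC data):

```python
import numpy as np

ages = np.arange(20, 70)

def summarize(weights):
    """Weighted mean and modal age of a deaths-by-age curve."""
    return np.average(ages, weights=weights), ages[np.argmax(weights)]

# Made-up curves: one peaked old (pills era), one peaked young with a
# fat right tail (heroin/fentanyl era).
pills_era = np.exp(-0.5 * ((ages - 48) / 7) ** 2) + 0.8 * np.exp(-0.5 * ((ages - 33) / 8) ** 2)
street_era = np.exp(-0.5 * ((ages - 30) / 6) ** 2) + 0.55 * np.exp(-0.5 * ((ages - 52) / 11) ** 2)

mean_pills, mode_pills = summarize(pills_era)
mean_street, mode_street = summarize(street_era)
# The modes sit ~15 years apart while the weighted means land within a
# couple of years of each other.
```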


Next I want to show what drug-related mortality is doing by age group over this time period. Below I am showing drug mortality per 100k for ages 20 through 59. It definitely looks like there is an upward march. Here I am not making a distinction that I usually make, in which I break out suicides and unintentional overdoses. I am simply including all deaths in which a drug was mentioned on the death certificate (so this actually includes some car accidents, drownings, and other deaths that generally aren't counted in studies of the overdose epidemic, but which can plausibly be blamed on drug use). Some ages show a fairly consistent upward trend over the period; others show temporary plateaus. Between, say, ages 25 and 50, there is an evident spike in the 2010+ period, a very clear break in the trendline. This is obviously related to the explosion of fentanyl-related poisonings over the period. (The grey box on top of each plot shows the age of the decedent.)


So what is happening to all-cause mortality over the same period? Here is what those numbers look like.


I found this odd. A major cause of death is rising, and yet it's only evident in the all-cause mortality numbers for, say, ages 25 through 40. The heroin/fentanyl epidemic is evident in the all-cause trends at ages ~25 to 45. Every age in this range shows an increase at least in the last few years (say, 2015-2018). Some show an increase over the 2010 to 2018 period, which matches the period over which heroin/fentanyl deaths were really exploding. The all-cause mortality patterns basically validate the idea that there's a serious illicit drug problem in the 2010 to present period.

What about the period from 1999 to 2010? This is a time when overdose deaths were dominated by prescription pills rather than illicit opioids. The only ages for which there is an obvious upward trend over this period (granting some substantial reversals and plateaus) are maybe ages 24 to 32 (not that any of these endpoints are exact). At ages 20 and 21, there is actually a decline in overall mortality. For people in their mid-30s, the trend looks pretty flat. For ages 36 to 49 the trendlines slope downward for this period. This is weird. Something is apparently offsetting the drug-related mortality over this period in some of the hardest-hit demographics.

It's not like the drug-related deaths are just a rounding error, either. Many analyses (in particular I'm thinking of Angus Deaton's recent work) have suggested that drug-related mortality is causing an overall decline in life expectancy, which would imply that they're a substantial fraction of overall deaths. Indeed they are. Let's look at the actual numbers. 


Supposedly, drug-related mortality rises from ~5% of deaths to ~15% of deaths for 20 year olds over the 1999-2010 period. And yet there is no evidence of this in their all-cause mortality figures? For 33 year olds it rises from ~10% to ~20% of overall deaths, and yet the all-cause trend is flat? People in their 40s are mostly seeing improvements in mortality over this period, and yet they're also seeing substantial percentage-point increases here. Either something is offsetting the "opioid epidemic" (at least up until 2010), or the trend in drug-related deaths is spurious (misattributing deaths of other causes to opioids, say). Given the substantial fraction of deaths that are supposedly attributable to drug-related causes, something should be evident in the all-cause mortality for basically all age categories. But that's not what we're seeing.

I only have the foggiest notion of how to do a proper "excess death" calculation.* The exercise is particularly fraught when you have a moving target like this, with very different trends at different ages. I don't even think "excess deaths" is a well-defined concept in this kind of statistical environment, where you expect deaths to fluctuate wildly due to unpredictable social trends. So I did something much simpler. I built an "actual versus counterfactual" comparison. The actual lines are the same as the all-cause mortality figures above; the counterfactuals are what we would have seen if everything else had stayed the same. In other words, what does it look like if non-drug deaths were locked in at their 1999 values and the only driver of overall mortality were changes in drug mortality? Red is the true trendline, blue is the counterfactual I have just described.
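The counterfactual construction is simple enough to sketch in a few lines. The numbers here are invented placeholders for a single age group, not the real series:

```python
import pandas as pd

# Toy rates per 100k for one age group, 1999-2010 (invented numbers).
years = list(range(1999, 2011))
all_cause = [950, 945, 940, 938, 930, 925, 920, 915, 910, 905, 900, 895]
drug =       [20,  22,  25,  28,  32,  36,  40,  44,  48,  52,  56,  60]

df = pd.DataFrame({"year": years, "all_cause": all_cause, "drug": drug})

# Non-drug mortality frozen at its 1999 level; only drug deaths vary.
baseline_non_drug = df.all_cause.iloc[0] - df.drug.iloc[0]
df["counterfactual"] = baseline_non_drug + df.drug

# If all-cause is flat or falling while drug deaths rise, the
# counterfactual climbs above the actual line: something is offsetting.
gap = df.counterfactual - df.all_cause
```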


There are a few lessons here. First of all, even if you're 100% credulous of the overdose numbers collected by the CDC (and sometimes broadcast by the merchants of moral panic in the media), your dire storytelling should be tempered by good news. "Of course, despite this horrible epidemic of addiction, overall mortality is flat or declining for most demographics." Is it so hard to inform your readers and viewers of the bottom-line impact on overall mortality? Second lesson: you shouldn't be 100% credulous of the CDC's numbers. You should concede that there is something to the notion that deaths are being systematically misattributed to drugs. The only demographics for which the opioid epidemic narrative fits for the 1999-2010 period are folks in their mid-20s up to about age 32. For all other ages, it is clear that something other than drug poisonings is driving all-cause mortality trends. A third lesson: disaggregate your data. There's a deeper story than what the overall population statistics are telling you.

Next we'll take a look at very broad causes of death. Internal causes of death are things like cancer and various kinds of organ failure. "Natural causes." External causes of death are things like automobile accidents, drowning, suicide, homicide, and drug overdoses. Obviously this is a more general category than "drug-related mortality," but trends in overdoses should be reflected in the trendlines for external causes. I was looking for evidence of misattribution. If the rise in reported overdose deaths is due to deaths from other causes being falsely attributed to drug poisonings, we would expect to see some cause of death that is falling while drug overdoses are rising. That is to say, we should expect that drug poisonings are "stealing" from some other category. I was looking for a set of trends that were mirror images of each other. Since it is abundantly clear that the increase in overdoses in the 2010-present period is quite real, I will fixate on the 1999-2010 period, when prescription opioids dominated the drug mortality trends.

[Note that you can query the CDC's Wonder database for more or less granular causes of death. In this case I'm getting stats by the broadest categories: external causes, heart-related, lung-related, digestive system-related, cancer, and infectious disease. Then I'm grouping these into two very broad categories: internal and external causes of death.]

Below I have plotted the change in mortality by age between years 1999 and 2010 for two broad categories, internal and external causes of death. (Above zero means mortality increased over the time period, below zero means a decrease.) I am not seeing the mirroring that I was looking for. So there is no tidy story that, for example, "heart attacks are being systematically mislabeled as drug overdoses." At least not according to this view. 
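The "mirroring" check amounts to asking whether the points sit on the line Δinternal = -Δexternal. A sketch with made-up deltas (not the real ones from my charts):

```python
import numpy as np

# Invented 1999 -> 2010 changes in deaths per 100k by age.
ages = np.array([25, 30, 35, 40, 45, 50, 55])
d_external = np.array([15.0, 18.0, 10.0, 0.0, 5.0, 8.0, 9.0])
d_internal = np.array([-2.0, -5.0, -12.0, -20.0, -30.0, -35.0, -38.0])

# Under perfect misattribution, every age would satisfy
# d_internal = -d_external, so this residual would be ~zero everywhere.
mirror_residual = d_internal + d_external
mirroring = np.allclose(mirror_residual, 0, atol=2.0)
```

For these (invented) deltas the residuals are large and uneven, so no tidy "heart attacks relabeled as overdoses" story emerges, which matches what I see in the actual plot.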


Maybe a simple comparison between two end-points is glossing over a trend? Maybe we'll see something different if we look at the full range of years? To check for this, I plotted the internal and external mortality for the full 1999-2010 period, for each of several ages. 


Again, I'm not seeing any "mirroring" in which one trend is rising at the same rate as the other is falling. (For ages 45-55, it's true that the external trend is rising while the internal trend is falling, but not with matching slopes.) For ages 25 - 35 it looks like external causes of mortality are rising, which is in line with the opioid epidemic story. But this looks a little bit weird. The external cause trendline is flat for 40-year-olds? For 20-year-olds external causes are actually declining a little, even though (see above) drug-related deaths are apparently increasing? Are other external causes of death falling for them while drug overdoses are rising? 

What I have not done yet is look at a "drug-related" versus "external but non-drug-related" trend comparison. If those trends show the mirroring I was looking for, I think that will be a point scored in favor of Sam Peltzman's notion of risk compensation (sometimes called risk homeostasis). That is, when people have opportunities to engage in risky behavior, they don't simply pile risk on top of risk on top of risk. They have an overall appetite for total risk. So as they engage in more risky behaviors in one domain, they hit the brakes in others. At least that's the hypothesis, on average and at the population level. (Of course I wouldn't claim that every single individual is such a Mr. Spock with perfect rationality and actuarially sound calibration of risks. But it makes sense that drug users might decrease their use of some drugs as they increase use of others, or avoid hazardous environments when they know they are going to be inebriated.)

Something that struck me here was the degree to which external causes of death dominate at young ages while internal causes of death dominate at older ages. You can see from these charts how external causes dominate up to just after age 30, but internal causes are dominant by age 35. I knew that there was a very sharp gradient in all-cause mortality by age, but I hadn't before appreciated this break-out by cause of death category. 

Whatever you think about the quality of our nation's vital statistics, there is some good news here that doesn't depend on accurate labeling or reporting or cause-of-death attribution. All-cause mortality is falling for most demographics, probably driven by a combination of improvements in medical technology and (more speculatively) changes in lifestyle. Had you realized that mortality improved so much over the past 20 years? I suppose that's a silver lining, that even when something terrible is happening (like a raging heroin/fentanyl epidemic that is unambiguously killing tens of thousands of people), favorable trends can more than offset it. All this is prior to Covid, of course. Including 2020 in this picture would pretty drastically change the rightmost end point. That aside, we've endured about ten or fifteen years of panic porn about the opioid epidemic with almost nobody pausing to point out that overall trends were pretty favorable. 

_________________________

*I'm assuming you compute some kind of Poisson frequency based on the number of deaths, then use that to compute a confidence interval, then look at a different period to see if the number of deaths observed falls outside that interval. How many deaths are "excess"? Observed minus the top of the confidence interval? Observed minus the expected number? Is "expected deaths", the midpoint of the distribution, even defined if you know that it jackknifes around over a ten-year period?
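For what it's worth, the naive version of that calculation looks something like this. The baseline and observed counts are assumed for illustration, and I'm using a normal approximation to the Poisson rather than exact quantiles:

```python
import math

expected = 500.0   # assumed baseline deaths for one demographic
observed = 560     # assumed count in the period being tested

# Normal approximation to Poisson(expected): sd = sqrt(mean).
sd = math.sqrt(expected)
upper_95 = expected + 1.96 * sd   # top of the 95% interval, ~543.8

# The two competing definitions of "excess" from the footnote.
excess_vs_expected = observed - expected          # 60.0
excess_vs_upper = max(0.0, observed - upper_95)   # the stingier number

# One-sided tail probability of seeing >= observed if the baseline held
# (with a continuity correction).
z = (observed - 0.5 - expected) / sd
p_tail = 0.5 * math.erfc(z / math.sqrt(2))
```

The two definitions disagree by the width of the half-interval, which is exactly why "how many are excess?" has no single answer.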

Saturday, October 9, 2021

"There is no evidence..."

I'm growing tired of arguments and pronouncements that don the mantle of science but then proceed to make the most embarrassingly false claims. I often see it said that there is "no evidence" or "zero evidence" for some claim. This can't be literally true. 

Anything that can potentially shift your priors is evidence, even if it doesn't shift them by much. If I tell you I saw a leprechaun, that's "evidence" that there was a leprechaun, even if it's not very persuasive. There are very strong priors against there being leprechauns, given the laws of physics and biology. Suppose I claim to have seen one, and you've known me for a while. Maybe I've shown tendencies in the past to be overly credulous, or perhaps I'm a known bullshitter. Still, my isolated claim is a type of evidence. If I told you some other claim, say about my kids or about what was on TV the other day, you would believe me. So my word is evidence. Any claim I make has some degree of intrinsic plausibility, even if there is an overwhelming mountain of counter-evidence. My point here is that there are weak forms of evidence that still count. Observational studies aren't as good as randomized controlled trials, but they are still evidence. Logical and theoretical arguments aren't physical proof, but they are still evidence. The insights you gain from introspection aren't physical or objective, but they are still evidence. Without someone at least temporarily believing these weaker forms of evidence, we'd never get to the stage of generating new hypotheses and testing them. We'd never get around to generating the physical, empirical evidence that ultimately turns a hypothesis into a working theory. 
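In odds form, the update is trivial to write down. The numbers here are obviously made up; the point is only that a likelihood ratio above 1 moves the posterior, however slightly:

```python
# Odds-form Bayes update: weak testimony still moves the posterior,
# which is exactly what makes it "evidence"; it just doesn't move it far.
prior_odds = 1e-12        # assumed prior odds that the leprechaun claim is true
likelihood_ratio = 3.0    # assumed: P(testimony | true) / P(testimony | false)

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
```

The posterior triples and is still astronomically small. "Evidence" and "persuasive evidence" are different things.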

I'm hearing "no evidence" as shorthand for one of the following:

"There are some plausible theoretical arguments for the claim, but no direct factual evidence."

"There are theoretical arguments for the claim, but there isn't yet any direct factual evidence because this specific question hasn't been studied yet."

"There are strong theoretical arguments for the claim, but the empirical evidence is mixed."

"The claim in question has been studied thoroughly, and all of the factual evidence so far points to 'No.'" 

"There is evidence for the claim in question and there is also countervailing evidence against it. All things considered, I have adjudicated against the claim."

These are all fine responses when someone makes an implausible or disputed claim, but let's just be more honest about which thing we're saying. I'm seeing still-open scientific questions getting short-circuited. 

See this example linked to by a recent Astral Codex Ten links roundup:


The American Academy of Pediatrics states that there are "no studies" to support the concern about young children being unable to learn facial cues due to widespread masking. Mason's response is brilliant, basically saying that a hypothetical study designed to determine the effect would be so obviously unethical that no Institutional Review Board would approve it. (I don't know Mason's politics, but this is a pretty basic libertarian point that I have often made. Policy interventions, which we'd all agree would be unethical if done as experiments, are done all the fucking time, on the entire population, without anyone bothering to gather information to determine safety or efficacy. The mandatory face masking thing is one such example.) The tweet by the AAP is extremely dishonest if it's implying that this particular question has been studied and evidence for it is lacking. Suppose the tweet instead said, "This question hasn't been studied, but we don't think it's plausible for the following reasons..." Maybe that wouldn't have been as punchy, but it would be far less misleading and much fairer to people concerned about this issue.

Pretend for a moment that you're a member of an IRB and someone proposes this intervention to study the effects of hiding facial cues from young children for extended periods of time. Really imagine yourself in this person's shoes. Are you filled with unease that you might be harming some of the children in the proposed study? Perhaps not mere unease, but actual horror? Imagine green-lighting this and seeing the final paper. Picturing, say, Figure 3 on page 22 of the published paper showing that the treatment group acquired language less efficiently, would you feel bad that you allowed harm to come to some of these children? Would you feel dread that these effects might be permanent? Introspection is also a kind of evidence. Powerful evidence at that, and too often dismissed. There is some intrinsic plausibility to the notion that hiding facial cues from young children could harm their development, whether or not some particular question has actually been studied. I'm not saying that anyone who prefers to wear a mask should stop because of this. (Imagine yourself being on the IRB for another hypothetical study where people were discouraged from wearing masks during a pandemic. Isn't the world just full of trade-offs and uncertainty?) But we shouldn't be so dismissive when someone points to a plausible cost of a new behavior. Certainly we shouldn't dismiss it in the language of scientific certainty. This should get at least some decision-weight when considering policies (public and private) such as the masking of children or advisements to mask at home around young children. 

Here is a Cafe Hayek post from a few years ago, in which Don Boudreaux takes Paul Krugman to task for making a "no evidence" argument. Krugman absurdly claims:

There’s just no evidence that raising the minimum wage costs jobs, at least when the starting point is as low as it is in modern America.

This is nonsense. There are many studies that find substantial disemployment effects due to minimum wages. (The very best evidence for a strong disemployment effect comes out of the Seattle studies by Jardim et al., though those weren't out at the time Krugman wrote his piece.) What Krugman is actually saying is that he's done the hard work of weighing the evidence for us and reached a conclusion based on the preponderance of that evidence. Even assuming Krugman had done this literature review (by no means a certainty), he would not then be entitled to say that the contrary evidence doesn't exist. Just that he personally finds it unconvincing, and he should say why.

Sunday, September 19, 2021

New Study Comparing Natural Immunity to the Vaccine

There was an interesting study out of Israel comparing natural immunity to vaccine-induced immunity for SARS-CoV-2. Generally it finds that natural immunity is more robust than vaccination, though the vaccine does still seem to yield some benefit to people with natural immunity. And it's not a small effect: we're talking seven-fold or thirteen-fold, depending on how you do the analysis. Note the three different comparisons:

Model 1 – previously infected vs. vaccinated individuals, with matching for time of first event

In model 1, we matched 16,215 persons in each group. Overall, demographic characteristics were similar between the groups, with some differences in their comorbidity profile (Table 1a).

During the follow-up period, 257 cases of SARS-CoV-2 infection were recorded, of which 238 occurred in the vaccinated group (breakthrough infections) and 19 in the previously infected group (reinfections). After adjusting for comorbidities, we found a statistically significant 13.06-fold (95% CI, 8.08 to 21.11) increased risk for breakthrough infection as opposed to reinfection (P<0.001).

Also:

Model 2 – previously infected vs. vaccinated individuals, without matching for time of first event

In model 2, we matched 46,035 persons in each of the groups (previously infected vs. vaccinated). Baseline characteristics of the groups are presented in Table 1a. Figure 1 demonstrates the timely distribution of the first infection in reinfected individuals.

When comparing the vaccinated individuals to those previously infected at any time (including during 2020), we found that throughout the follow-up period, 748 cases of SARS-CoV-2 infection were recorded, 640 of which were in the vaccinated group (breakthrough infections) and 108 in the previously infected group (reinfections). After adjusting for comorbidities, a 5.96-fold increased risk (95% CI, 4.85 to 7.33) increased risk for breakthrough infection as opposed to reinfection could be observed (P<0.001) (Table 3a).

Overall, 552 symptomatic cases of SARS-CoV-2 were recorded, 484 in the vaccinated group and 68 in the previously infected group. There was a 7.13-fold (95% CI, 5.51 to 9.21) increased risk for symptomatic breakthrough infection than symptomatic reinfection (Table 3b). COVID-19 related hospitalizations occurred in 4 and 21 of the reinfection and breakthrough infection groups, respectively. Vaccinated individuals had a 6.7-fold (95% CI, 1.99 to 22.56) increased to be admitted compared to recovered individuals.

Finally, they compare people who were both vaccinated and previously infected to those with natural immunity only.

Model 3 - previously infected vs. vaccinated and previously infected individuals

In model 3, we matched 14,029 persons. Baseline characteristics of the groups are presented in Table 1b. Examining previously infected individuals to those who were both previously infected and received a single dose of the vaccine, we found that the latter group had a significant 0.53-fold (95% CI, 0.3 to 0.92) (Table 4a) decreased risk for reinfection, as 20 had a positive RT-PCR test, compared to 37 in the previously infected and unvaccinated group. Symptomatic disease was present in 16 single dose vaccinees and in 23 of their unvaccinated counterparts.

I don't quite understand why they do the matching. Shouldn't they be able to use the full sample and do some statistical comparisons in terms of rates? Is the matching just a clever way to avoid doing fancy statistics? (And why not, assuming you have enough data anyway?)

This really piqued my interest, because I've been hearing quite a lot of nonsense dismissing natural immunity to covid. What I've heard ranges from wild speculation to non sequiturs to unscientific rejection of what we all know about the immune system.  (It tends to sound like, "Meh, we just don't know yet!" As if we couldn't analogize from other respiratory viruses, even other coronaviruses.) A lot of people have been playing the role of naïve empiricists this past year and a half, pretending we can't know anything without direct observation of the specific question at hand. We actually have some powerful general scientific principles that can be applied here. Some are from the logic of evolutionary theory (we all still believe in that, right?). Others are from a basic, high school level understanding of how the immune system functions (and a basic understanding of how the mRNA vaccines work). 

Here's my reaction: The paper's conclusion is exactly what we should have expected, at least directionally even if we can't predict the magnitude. The vaccines are scientific wonders, but the ones that are most common (the Pfizer and Moderna mRNA vaccines) are incredibly narrowly tailored. None of the live virus is present in the vaccine. It's just a strand of mRNA, basically some biological instructions that tell your cells to "make me some spike protein." This teaches your immune system to build antibodies for when the real thing comes along. I don't know exactly how pure the RNA sequence is in the vaccine, but I would guess that every strand of mRNA in a given formulation is making exactly the same spike protein. Don't get me wrong, this is great. It means when you do encounter the "novel coronavirus," it's not completely novel. Your immune system has some familiarity with what it's encountering and can fight it off, often without any hint of illness (though obviously we're now seeing a lot of breakthrough infections). But compare that to having a live virus replicating inside of you for weeks. In this latter case, your immune system isn't going to be narrowly tailored to one particular version of the spike protein. It's going to cue in on other pieces of the virus. If the spike protein mutates and you encounter this new strain of the coronavirus, that's okay, your immune system can recognize other signals that your body is being invaded and ramp up production of antibodies. Also, given the amount of exposure you have to the virus and its various proteins in the case of a live infection, you should expect that your body would spend more time and energy building up antibodies. I'm at the limits of my understanding of the immune system here, but I would suspect someone who just spent two weeks fighting off a live virus would have built up more antibodies than someone who's had two quick jabs of mRNA. 

See this Nature article explaining why the Delta variant is so much more transmissible:

Shi’s team and other groups have zeroed in on a mutation that alters a single amino acid in the SARS-CoV-2 spike protein — the viral molecule responsible for recognizing and invading cells. The change, which is called P681R and transforms a proline residue into an arginine, falls within an intensely studied region of the spike protein called the furin cleavage site.

Sometimes people forget, or pretend to forget, that evolution is a thing. Evolution isn't like organic chemistry or knowledge of the immune system, where you have to know how actual, specific biological systems work (T-cells and such). Evolution has its own simple, mathematical logic, absent of any specific details. (Though certainly the details enrich one's appreciation of the concept.) Given that there are replicators trying to pass their genes into the future, and given that those replicators vary from each other in ways that modify their probability of success, we should expect some versions of those replicators to proliferate and others to die off. If there is a mutant form of covid that is good at evading the mRNA vaccines (say, by having a mutant version of the spike protein), we should expect that mutant to proliferate. Maybe someone who knows more about this can contradict me. Perhaps the Delta variant's spike protein is no more likely to evade vaccine-induced immunity than the Alpha variant's; it's just that the mutation makes it more infectious in general? But it does seem like we're creating a world that would select for mutant spike proteins. Biologists should be standing up and declaring that what's happening with the Delta variant isn't a surprise. 
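That selection logic can be made concrete with a toy model. The growth factors below are pure illustration (not estimates for any real variant): even a modest per-generation advantage lets an initially rare mutant swamp the incumbent strain.

```python
# Toy replicator model: two variants competing in the same host population.
# All numbers are illustrative assumptions, not epidemiological data.
wild = 1_000_000.0   # incumbent strain, initially dominant
mutant = 1.0         # spike-mutant strain, initially a single case
r_wild, r_mutant = 1.0, 1.2   # assumed per-generation growth factors

for generation in range(100):
    wild *= r_wild
    mutant *= r_mutant

share = mutant / (wild + mutant)
print(f"mutant share after 100 generations: {share:.1%}")
```

The point is purely qualitative: given heritable variation in growth rate, takeover by the fitter variant is a mathematical consequence, not a surprise.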

All of this has me thinking, Why in the hell are we talking about a booster shot of the same vaccine that was developed in January of 2020? Given that someone was able to develop a working vaccine based on first principles basically on their first try, where is the vaccine that's tailored specifically to the Delta variant? The lesson coming out of this past year-and-a-half is that it's relatively easy to tailor an mRNA vaccine to a new virus. So let's have that lesson inform vaccination policy (including the approval process for new vaccines and vaccine recommendations from the CDC). If we're seeing evolution in the direction of altered spike proteins, let's have a more robust ecosystem of vaccines. How about this: if you've already had the mRNA double-jab, the recommendation should be to get the Johnson & Johnson vaccine (which is not an mRNA vaccine). Or how about variolation? Let's see a proliferation of attenuated virus vaccines. The mRNA vaccines were a great way to buy some time and protect the most vulnerable individuals (at least temporarily), but we should expect a rapidly evolving virus to outfox them. I'd also like to see population-level serology sampling to determine the prevalence of antibodies, and furthermore to determine what kind of antibodies people are getting. Which proteins are our immune systems zeroing in on? And can we use this study of natural immunity to inform the development of vaccines? Can an mRNA vaccine hold the instructions for multiple proteins? Perhaps for multiple variants of multiple proteins? You know, so 180 million Americans (many times that number worldwide) are not all susceptible to a single mutation on a single protein. Even if you're still a total covid-hawk who viscerally rejects the notion of letting the virus run its course, the Israeli paper should be informing your opinion of what kinds of vaccines to pursue.

(By the way, it seems that Geert Vanden Bossche alerted us to this possibility. See his interview with Bret Weinstein here. I can't quite buy his conclusion that we shouldn't engage in a vaccination campaign during a pandemic, at least I think that's what he's saying. It's like saying "We shouldn't use this life-saving medicine because there's a finite supply that will eventually run out." I think the correct take-away is that we'll have to keep adjusting the mRNA vaccines to new variants, or eventually switch to attenuated virus vaccines.)

In the early months of the pandemic, I repeatedly heard commentators (covid hawks all) dismiss the idea that naturally occurring immunity could lead to herd immunity. The same people would often insist that we could only get there through vaccination. (See this incredibly bad-faith piece in the Atlantic, and my commentary on it here.) It was thoroughly confusing. Say you had a society of people with enough vaccination coverage that it had herd immunity. These commentators were apparently saying that if you swapped out the vaccinated people for people with natural immunity, the virus would come back and start spreading again? Or perhaps they were simply saying that the concept of herd immunity arose in the context of a vaccination campaign. This is a historical claim about the origins of an idea, but it's completely irrelevant to the claim about whether herd immunity from natural infection would work. Or perhaps if asked directly they would have conceded that, yes, a sufficient level of natural immunity would provide herd immunity, but it would come at too great a cost along the way? Or as a matter of historical fact, it had never happened? (Though wouldn't it happen in this case, with covid being so infectious? And wouldn't it be fine if the non-vulnerable, who basically experience it as a mild flu or cold or as nothing at all, all got the virus while the vulnerable were being isolated and protected?) No version of the "herd-immunity-from-natural-immunity-wouldn't-work" claim actually makes much sense. I think these people were actually so confused that they themselves didn't even have a clear idea what they were claiming. They had so little patience for the notion of simply tolerating the virus that they shut off and began lobbing whatever rhetorical fodder was within reach. They were jack-knifing from one idea to another without acknowledging the change in direction. 
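For what it's worth, the textbook herd immunity threshold in the simplest SIR-type model depends only on how infectious the virus is, not on how the immunity was acquired: once a fraction 1 − 1/R₀ of the population is immune, each case infects fewer than one new person on average. A quick sketch (the R₀ values are illustrative, not covid estimates):

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune (by any route)
    for each infection to cause fewer than one new infection."""
    return 1 - 1 / r0

for r0 in (2.5, 5.0, 8.0):  # illustrative basic reproduction numbers
    print(f"R0 = {r0}: threshold = {herd_immunity_threshold(r0):.0%}")
```

Nothing in the formula distinguishes vaccine-induced from infection-induced immunity; what matters is the immune fraction and how well that immunity blocks transmission.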

I can't help but feel a little vindicated by the study linked to at top (I'll happily retract that statement if the result doesn't hold up, though). A lot of people were suggesting that natural immunity to covid didn't exist at all, or was very short-lived, or at any rate we couldn't count on it for protection. I knew this was nonsense at the time. If there were no natural immunity at all, then sick individuals would simply never recover; they'd just keep getting re-infected by the virus circulating in their body. (Like, what model of the immune system did these people have? You will eventually fight off the viruses that are inside your body, as I think they would have conceded. But then you'd promptly be reinfected if new particles of exactly the same virus entered your body from the outside?) This was the dog that didn't bark, as in they would have been shouting from the rooftops if they found substantial numbers of reinfections. But reinfections were exceedingly rare. If the result of this study holds up, I'd like to hear some kind of correction from this crowd. A big, fat, blubbering apology to the Great Barrington Declaration crew would be in order. The greater robustness of natural immunity means their prescription is even more attractive. 

All those people who are saying they don't need the vaccine because they've already had covid aren't wrong. The Israeli study suggests they'd cut reinfection risk in half by getting vaccinated, but that's on top of immunity that's extremely robust. Most people would probably think it's sufficient and see no need to dredge the depths for tiny incremental amounts of covid protection. There is some finite risk of undiscovered dangers with the mRNA vaccines, which I discussed in my previous post. I personally don't care for this "unknown unknown" type of argument, and I don't think the VAERS data on reported vaccine side-effects is showing a real signal. But I can respect someone who has a different cost-benefit calculation than mine or who reads the evidence differently from me. I have far less respect for the condescending attitude of the vaccine scolds. "Just get the jab, you backwards rube! Learn about the science!" Clearly the cost-benefit calculus differs for different people. It depends on their risk factors; if you're in a high-infection-fatality-rate demographic you should get vaccinated. If not, it may not be worth it. I have just enough reservations that I'm not super-thrilled about my young children getting the jab. (BTW, it is the official position of the U.S. public health establishment that very young children shouldn't be vaccinated. According to the CDC, as of this writing the vaccines are only recommended for children above the age of 12. Even if that changes tomorrow, the current recommendation is completely defensible.) Given that vaccine-induced immunity is going to wane anyway, given that they're not going to get very sick from the virus, and given that they're likely to encounter it eventually, it's probably best that they encounter the real bug and acquire robust immunity while young. It's unlikely that the vaccine is "protecting" them in that sense, just delaying their development of a truly robust immunity. 

The Israeli study is just one paper, so I don't want to put too much stock in it until someone replicates it. If it fails to hold up, maybe I'll leave the post up but just strike through a bunch of the above text. That being said, some of the points I made above are independent of this particular study's results. There is no question that natural immunity is real and at least comparable to vaccine-induced immunity. That's a substantial update compared to what people were saying last year. 

___________________________________

A further note on the matching, from the body of the paper. Here's regarding model 1:

These groups were matched in a 1:1 ratio by age, sex, GSA and time of first event. The first event (the preliminary exposure) was either the time of administration of the second dose of the vaccine or the time of documented infection with SARS-CoV-2 (a positive RT-PCR test result), both occurring between January 1, 2021 and February 28, 2021.

And model 2:

Therefore, matching was done in a 1:1 ratio based on age, sex and GSA alone. Similar to model 1, either event (vaccination or infection) had to occur by February 28, to allow for the 90-day interval.

I think this is a clever way to avoid having to run some kind of statistical model to adjust for different risks between groups. They could have done that, too, just for the sake of comparison. 
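As I read it, the matching does the work that a regression adjustment would otherwise do: compare like with like, then just count events. A minimal sketch of 1:1 exact matching (the field names and structure are my own invention for illustration, not the paper's actual code):

```python
import random
from collections import defaultdict

def match_one_to_one(infected, vaccinated, keys=("age", "sex", "gsa")):
    """Pair each previously infected person with one vaccinated person
    from the same stratum (exact match on the given keys)."""
    # Bucket the vaccinated pool by stratum for quick lookup
    pool = defaultdict(list)
    for person in vaccinated:
        pool[tuple(person[k] for k in keys)].append(person)

    pairs = []
    for person in infected:
        candidates = pool[tuple(person[k] for k in keys)]
        if candidates:  # anyone without an available match is dropped
            match = candidates.pop(random.randrange(len(candidates)))
            pairs.append((person, match))
    return pairs

# After matching, the comparison is a simple count of events in each
# column of pairs -- no model needed, at the cost of discarding the
# people who couldn't be matched.
```

The trade-off is exactly the one the paper accepts: a smaller sample, in exchange for a comparison you can read straight off the counts.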

The experience of Sweden is instructive here. They eschewed strict lockdowns. We can infer that the virus was spreading through the population and a lot of people were developing natural immunity, and this all happened before the vaccine was widely available. They've mostly avoided this third wave of covid deaths. Some commentators have pointed out that Sweden compares unfavorably to its Nordic neighbors, but then again it compares quite favorably to Europe as a whole. 

Friday, September 3, 2021

Unknown Long Term Consequences

I want to make a point about the topic everyone's worried about, but I don't want to unnecessarily freak anyone out. As in, if you've already gotten it and have suffered no lingering effects, you should probably not worry. But if you haven't gotten it yet, you might want to exercise additional caution due to the unknown long-term sequelae. Some have persisting complications, but most people appear to recover in fairly short order. Those appearances may be deceiving. There is simply no way to rule out severe long-term complications, because the long term hasn't arrived yet. The data needed to settle the question only exists in the future. 

I'm sure some of you see what I'm doing here. Re-read the above paragraph, but do a little Necker cube flip and pretend I'm talking about the vaccine instead of the virus (or vice versa if you read it the other way). I'm tuned in to media sources that are hyper-cautious about either the virus or the vaccine (yes, I consume media on both ends of the spectrum), often making this "unknown unknown" kind of argument. What surprises me is the symmetry. Evidence of harmful side-effects of the vaccine is pretty weak. Some people are making a big fuss about the VAERS (Vaccine Adverse Event Reporting System) dataset, which the CDC uses to collect information on adverse reactions to vaccines. Below is a screenshot. You can tell from the file sizes alone that there's something odd going on in 2021. We're getting an outsized number of reports, apparently due to the mass vaccination campaign.


Certainly someone should be looking into this. It's concerning, but it's easy to dismiss. In fact, I urge that we dismiss it by default until someone convinces us there's a real underlying signal in this noise. There are so many more people getting vaccines this year, and the age profile of those getting vaccinated skews older than what we've seen in typical years. There are simply far more opportunities for adverse health events to happen to someone who's recently gotten the poke (I should say, "to happen to happen to someone"), compared to previous years. This is naturally going to yield some spurious reports of vaccine reactions. I haven't done the analysis to say that this year's explosion in adverse vaccine events is plausibly attributable to spurious connections, but I think that it's safe to ignore this until a more thorough analysis suggests it's compelling. In other words, just as I don't think we should jump at every shadow, I don't think we should overreact to signals that are probably spurious.* We  should have some mechanism for dismissing such false alarms, not indulging costly counter-measures just because "they might be real."
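The base-rate argument can be made concrete. Every number below is an illustrative assumption, not actual incidence data, but the shape of the calculation shows why a flood of temporally coincident events is expected even from a perfectly safe vaccine:

```python
# Illustrative base-rate calculation -- the inputs are assumptions
# for the sake of the argument, not real epidemiological figures.
vaccinated_seniors = 50_000_000  # assumed number of older vaccinees
annual_mi_rate = 0.01            # assumed yearly heart-attack rate in this group
window_days = 1                  # "it happened the day of the shot"

# Expected heart attacks occurring by pure coincidence within the window,
# assuming the vaccine has no effect at all on heart-attack risk
expected = vaccinated_seniors * annual_mi_rate * window_days / 365
print(f"expected coincidental events: {expected:,.0f}")
```

On these assumptions you'd expect over a thousand heart attacks on the day of vaccination by sheer coincidence, each one a vivid anecdote a reporting system like VAERS is designed to capture. The question is whether the reports exceed that coincidence baseline.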

That said, even if the "evidence" is convincingly explained away as noise, there will be people who cling to this "unknown unknown" alarmism. You technically can't disprove that there are unknown long term health effects. Even if negative outcomes don't manifest in the short term, they could show up later in life (say as a subtle but real increase in cancer or infertility or something). I think this is nuts. It's a Pascal's Mugging approach to risk management. "A problem exists because I claim it exists...oops it probably doesn't but let's entertain the very small risk that it does because it would be very bad if it does." Followed scrupulously, this leads us to spending all of the world's resources on mitigation for risks that aren't real. Let's have a real budget for mitigating tail risks, but let's keep that budget finite.

I would make the same argument for "long covid." When I first started hearing reports of long covid, they sounded like a vague smattering of very different symptoms that people can suffer from for a variety of reasons.** Brain fog (which I have often felt this past year and a half, but which I attribute to a dramatic change in my work environment and lifestyle). Lethargy (which I didn't experience, but which could likewise be caused by being home all day instead of going to the office...perhaps metabolic disorders from changes in physical activity and diet). Abnormal heart scans (which it turns out will show up in similar proportions in a random sample of people...that OSU study didn't even have a control group!). It reminded me of the observation that a doctor can look at an MRI for a back pain sufferer and attribute the pain to some abnormality (like a bulging disk), even though that abnormality is common in the healthy population not suffering any back problems. My reaction was to entertain the possibility, but to basically dismiss it as having any decision-weight. And I think that was the right call. (Here is a good piece on the topic by Adam Gaffney, which I riffed on here.) To be clear, there are people who have severe bouts of covid, who perhaps end up hospitalized, who survive but have some kind of permanent lingering effects from it. I've never doubted or denied that. But from early on I was calling bullshit on the notion that mild or asymptomatic infections were leading to serious long-term health problems. I'm not buying this "silent killer" view of long covid, that there is a huge amount of harm that's slipping under the radar, but poised to strike in the coming years and decades.***

Of course, someone could concede all the stuff from the Adam Gaffney piece. "Sure, most of those 'long haulers' didn't actually contract covid. Sure, there's a spurious signal, and a million ways to confuse noise for signal, so we should have been more cautious in raising the alarm. But...the long-term consequences are unknown! The long term isn't here yet!" 

Vaccine alarmists and long covid alarmists can both point to "evidence for" the thing they're worried about, and both can take refuge in this all-trumping appeal to unknown risks. I just want to point out the symmetry. I think the "long covid" alarmists are on slightly more solid ground in terms of the strength of their evidence. But the appeal to unknowable hazards is not dependent on and does not respond to evidence. 

________________________________________________

*Someone surely has a deep knowledge of the rates at which people experience adverse health events. It shouldn't be too hard to check the rates at which we're seeing adverse health events in the VAERS data. Who knows, maybe it's actually ten times as common as we'd expect. I heard someone give the anecdote of a man having a heart attack moments after getting his first shot of the covid vaccine. I'm sure that seems salient if it happens to you. But I'm just thinking, "Old people are having heart attacks all the time. Surely someone was going to have one in near proximity to getting their covid vaccine, and surely some alarmist is collecting these stories and broadcasting them."

**I feel a need to cite Scott Alexander's excellent recent piece on long covid. I think covid hawks could use his analysis to say, "See! Long covid is real!" And covid doves could use his analysis to say, "See! Long covid is way overblown! (Yes it's real, but that's not the point of contention.)" I think the long covid hawks are committing a logical fallacy that's a common advocacy technique: give a high-sounding number by using an inclusive definition that captures non-severe examples, then cite specific examples of the most severe cases. This is a way of insinuating that severe problems are more common than they really are. Alexander is overall more concerned about long covid than I am, but I want to applaud the very high quality of his approach. See this part commenting on a study of a large number of post-covid symptoms: "One flaw in this analysis is that it didn’t ask for premorbid functioning, so you can tell a story where unhealthy people are more likely to get COVID than healthy ones (maybe they’re stuck in crowded care homes? Maybe they put less effort into staying healthy in general?) But I don’t think this story is true - how come obviously plausibly COVID linked things (like smell problems) are significant, and obviously-not-COVID-linked things like diarrhea aren’t?" Emphasis mine. Also here: "An English team says there’s a Long COVID rate of 4.6% in kids. But there was a 1.7% rate of similar symptoms in the control group of kids who didn’t have COVID, so I think it would be fair to subtract that and end up with 2.9%. And even though the study started with 5000 children, so few of them got COVID, and so few of those got long COVID, that the 2.9% turns out to be about five kids. I don’t really want to update too much based on five kids, especially given the risk of recall bias..." Section 9, about post-viral symptoms for other common viruses, was particularly interesting. As in, "How common are mild carry-over syndromes in general?" 
Maybe these are just as common for typical cold and flu viruses, but we just don't notice them because they're less prevalent in normal times? Or we're inured to them because, like the viruses themselves, we've accepted them as part of the background risk of a normal life? We don't happen to associate them with a recent cold or flu because it doesn't occur to us that it might be the cause? 

***We've seen population level increases in death rates. What I haven't seen are population-level morbidity figures. I'd expect to see disease rates increasing for the population as a whole, with dramatic increases for younger populations. And the analysis should clearly separate out the effects of acute covid from long covid. The population-level spikes in mortality are noticeable. If long covid is real, and if it's as big a deal as some are claiming, the population-level spikes in morbidity should be out of this world. 

Wednesday, September 1, 2021

Mismeasuring Risk In Both Directions

Sometimes it's amusing to observe just how exaggerated people's understanding of various problems can be. I once quizzed someone by asking them how much the earth has warmed since pre-industrial times. They said something like, "About 15 degrees." This is from an American context, so presumably they meant 15 degrees Fahrenheit. The real answer is more like 1.8 degrees. I don't know what kind of answer you'd get if you randomly polled people, but the subject of my non-random quiz is not alone in having an exaggerated sense of how much warming there has been. A literal "climate change denier" would be closer to the truth by saying zero. I've heard similar exaggerations for the amount of sea level rise that's expected in the coming century, citing a likely rise of several meters whereas it's likely to be in the tens of centimeters. (Larger projections exist in the literature, for sure, so you could cherry pick a large value and claim the mantle of "science." I've also heard outlandish projections of how soon Greenland's ice will be gone. Again, I don't know what a proper poll would show or how close it would be to the literature's best point estimate. But the catastrophic voices are louder than the moderates.) Again, someone saying "zero" would be closer to the truth than someone who says "six meters." 

I see the same phenomenon in estimating the threat posed by covid-19. Particularly when it comes to the threat it poses to young people, some of us (and I am including myself here) have been pointing out that the risk is very small. See this chart (which actually comes from an alarmist page, and which I cited in a recent post):




One could be forgiven for rounding the IFR for the 0-34 group down to zero and commenting that the risk is something that blends imperceptibly into the background of other hazards (like auto accidents and suicide). If I'm reading this chart correctly (partially gated), when polled, people in the under 34 group estimate themselves to have a 2% (!?!) chance of death conditional on contracting covid. (Original paper here.) There's something wrong when your risk calibration is off by a factor of 500. (That's 2% over 0.004%, but I should probably apply some kind of adjustment for the consideration that 18 and younger weren't represented in the polls. Even if I did that, there's no way their assessment of risk is anywhere near what it really is.) The institutions of public health should be absolutely ashamed that they've so thoroughly misinformed the public. A young person walking around thinking s/he's not at risk, as in a true "covid denier", is actually more correct than the misinformed young people captured in these poll numbers. 
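The factor-of-500 claim is just this arithmetic, using the chart's IFR for the under-35 group and the survey's perceived risk:

```python
perceived_ifr = 0.02    # 2% perceived chance of death, from the survey
actual_ifr = 0.00004    # 0.004% IFR for the 0-34 group, from the chart

miscalibration = perceived_ifr / actual_ifr
print(f"perceived risk is about {miscalibration:.0f}x the actual risk")
```

Even granting generous adjustments for the age ranges the polls covered, nothing plausible closes a gap of that size.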

(Unfortunately, it looks like the elderly are being irrational about their risk of covid in the other direction, saying that the mortality risk is lower than it actually is. In fact they see themselves as less at risk than the young people do. That said, their self-perception of risk is way closer to the ground truth than the young people's.)

There may be some attempt to defend the catastrophic worldview by saying the grossly exaggerated values are stand-ins for expected values considering tail risk. Maybe they point to the right policies and mitigation responses, even though they're wildly off in terms of quantifying the problem? In other words, a few inches of sea level rise could actually be catastrophic, so we're best off thinking that this measure is much higher than it really is. Maybe 3℃ of global warming is really terrible, even if it sounds pretty mild. Maybe it's actually as bad as 10℃ sounds to the average person. Maybe the two or three orders of magnitude difference between the perceived risk of covid and the actual risk is a stand-in for some larger truth? Like, "Of course I'm not actually at risk, but I should act as though I am, lest I transmit the virus unknowingly to someone who's vulnerable." Or, "Considering the long-term effects of covid, I'm best off treating it as if it has a much higher mortality rate than it actually does." 

I think it would be astonishing if this misperception of reality just happened to give the right answer to some other question. It would be quite a surprise if overstating the degree of global warming by a factor of seven or eight yielded the correct policy positions. It's far more likely that people who are objectively wrong about measurable quantities are also wrong about the appropriate policy fixes (and here I mean public and private policy, as in government promulgated mask mandates and personal hygiene policies). We should certainly entertain tail risk and "unknown unknowns" when it comes to global hazards like covid-19 and global warming. But we shouldn't be misstating averages or inflating known quantities. Sure, simulate a scenario where 10% of young covid victims suffer "long covid," spell out the long-term costs in dollars, lost productivity, lost years of life, etc. Then weight that scenario with some kind of plausible probability estimate, which some third party could audit and critique. Don't fudge it by exaggerating the mortality risk by a factor of 100 or more. Maybe in some cases the tail risk is so compelling that it's worth extreme mitigation measures, even though the "average" scenario is pretty ho hum. We should be able to make that argument without distorting known quantities and misleading the general public. These distortions, which are common in catastrophic rhetoric, cede the intellectual high ground to the so-called "deniers." Deniers may have a simpler, dumber model of reality, but their error is usually bounded at zero. Mistakes made by catastrophizers, by contrast, often have no ceiling. 
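The kind of auditable calculation I have in mind is nothing fancier than a probability-weighted sum over scenarios. The probabilities and costs below are placeholders, not estimates of anything real:

```python
# Hypothetical scenario weighting -- every number is a placeholder
# for illustration, not an estimate.
scenarios = [
    # (probability, cost in arbitrary units)
    (0.90, 1.0),    # baseline: covid behaves like its measured "average"
    (0.09, 10.0),   # moderate tail: meaningful long-term morbidity
    (0.01, 100.0),  # severe tail: "10% of young victims get long covid"
]

expected_cost = sum(p * cost for p, cost in scenarios)
print(f"expected cost: {expected_cost:.2f}")
# A third party can now argue with the 0.01, instead of being handed
# an inflated "average" risk with the tail risk baked in invisibly.
```

If the severe tail dominates the expected cost, say so explicitly and defend the probability; don't smuggle it into the headline mortality figure.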

______________________________

Then again, maybe we're just bad at thinking about risks in the 1% range. Maybe the young people in the surveys were basically giving an answer that sounded like a small number and mentally rounding down to zero, not realizing that a 2% risk of death is a pretty big deal. I recall Maia Szalavitz reporting that young people, when asked about quantitative hazards of drug use, tended to exaggerate by some huge factor (I think this was in her book Unbroken Brain). And yet they engage in drug use at much higher rates than their elders. That seems consistent with the exaggerated risks captured in the paper above. Still, there is something very wrong going on here. The public health establishment, if it's doing its job, should be correcting such hugely distorted perceptions, not leveraging them to make people do the right things for the wrong reasons. It should be telling young people that they can venture out and comingle with other young people (while still being cautious around the elderly and vulnerable). 

Saturday, August 14, 2021

Live Push-Back Against an HR “Microaggressions” Session

I was pleased to see someone push back against a "microaggressions" session in a recent meeting at work. It wasn't an in-person session, so I couldn't see people's faces or gauge their reactions. It felt extremely uncomfortable initially, but a number of other participants chimed in in support of the push-back.

The person leading the session wasn't being rude or laying on the material especially thick. It was, as far as I can tell, a pretty standard introduction to the concept of microaggressions. She gave the examples of 1) the only woman in the meeting being assigned the task of note-taker and 2) a colleague using a heavy accent when impersonating the voice of another (who was Indian...more on this later). 

A brave person piped up with some comments. It was, after all, supposed to be an interactive session. He wasn't rude. He stated his point very respectfully, which the organizer of the session acknowledged. He started by saying, yes, we should all be respectful of each other and avoid giving offense. We should give some thought as to how our actions and unthinking biases might be affecting our behavior. That much is common sense. But what does this "microaggressions" concept add to that? And what is the limiting principle on this concept? Do minor grievances really "pile up" in the minds of those who are micro-aggressed against? Isn't there some threshold below which these events just cease to register? Or become quickly forgotten? Don't we have a duty to charitably interpret the behavior of those around us, rather than assume a sinister thought motivated them? (Not his exact language; these are my own paraphrases and my expounding on what I heard.) In explaining how we should be forgiving of minor or unintended slights, he repeatedly used the word "grace," which has almost religious overtones. I'm not particularly religious, but I thought there was something classy (you might say graceful) about this use of language. It's perfectly fine to say, "Hey, this behavior bothers me" or "This thing is really a pet peeve of mine." But let's not invent this concept of a growing ledger of microscopic slights that add up to a substantial whole. Everyone experiences these. The typical reaction is to round them off, truncating them to zero. (Computationally speaking, you might refer to this as setting a high tolerance.)

Specifically, he riffed on the organizer's example, asking if it was never okay to ask a woman to take notes. The organizer said of course that wasn't her intended take-away; you'd expect a task to sometimes be assigned to a woman by sheer chance. If it's a pattern, if it's always a woman, and more to the point the woman tends to be chosen even if there are more junior employees in the room, then maybe there's a hidden assumption that "this is women's work" or "women don't mind doing these menial administrative tasks." Point well taken, but I don't know if you need the concept of microaggressions to get there. She also said that until recently, it's been the majority group who got the privilege of defining what is and isn't offensive. What we're seeing now is that other voices are recognized in that space. Again, this is a totally valid point to make, but I don't know if it requires the concept of microaggressions or if this is just common sense decency. Taking offense that your ethnicity is the butt of many jokes seems like a different thing entirely from minor perceived slights piling up over time.

The person who spoke up was a white male. (I assume hetero white male, because he mentioned needing to be understanding of one's wife to successfully communicate and navigate relationships. It sounded like he was speaking from first person experience. He was pointing out that we're all dealing with different kinds of people all the time, and we're somehow navigating that space without microaggression seminars.) But several people with heavy foreign accents joined in and seconded his point. My employer has a worldwide presence, and even among American colleagues the foreign-born are heavily represented. Very cosmopolitan in terms of demographics, and I was pleased to hear that many of them had a cosmopolitan worldview.

I've heard about cases of wokeness infiltrating HR departments and inflicting terrible "training sessions" on employees. Racially segregated training, humiliating struggle sessions, instructions to "be less white" (note that Coca Cola denies using those training materials, though it was accused of doing so amid much furor), explicit indoctrination with CRT. The session I attended was much milder in intent, and yet there was firm but polite pushback. I have no doubt there are some committed fanatics trying to infiltrate the culture by inserting themselves into the bureaucracy layer of society. I'm just not sure how far they will get. The person who spoke up in that meeting was exceptionally brave. Maybe you can't always count on having one of those guys around. (I, for one, have no such inclination to speak up in front of a crowd.) Then again, for all I know there is some kind of punishment in store for him, explicit or perhaps subtle. And certainly there are companies that have a more woke monoculture, where such "outbursts" would not be tolerated. Apropos of my previous post, maybe this is a case of "They would be causing havoc if they could, but they are being held in check by forces outside of their control." Maybe all it takes is some respectfully worded pushback. It certainly changed the tone of the meeting from "We're all on the same page here" to "Some of us aren't buying into this paradigm."

_______________________________________

We'll see how it plays out, but there seems to be a kind of "diversity and inclusion" power play happening at the professional organization that I'm a member of, see here. I got a long, strongly-worded e-mail with more details on Friday, which was responding to an earlier e-mail from the CAS (which had barely registered with me). I count it as another example of pushback, not so gentle in this case. 

On the example of doing an Indian accent for an Indian colleague. I have no idea what actually happened, so I'll take the lady's word for it. But I can't let this go without saying something. I had a lot of Indian friends and teachers in grad school. I would sometimes do their voices, as would everyone. No, I was not doing a generic Indian accent. I was doing the distinct voice of my friends and colleagues, trying to accurately capture their actual mannerisms and voices. Just as I would often do for my white colleagues, just as we all did all the time. (Guys like to mock each other. Sometimes this took the form of impersonating voices and accentuating the distinct features of their speech. "Matt Damon.") One Indian friend had a very slow voice, and if I were "doing" him you might think I was doing the voice of a Native American rather than an Asian Indian. There was an Indian girl who had a kind of breathy, melodic voice. Another friend of mine was always doing her voice, and there was nothing obviously Indian about it. But there were some colleagues whose voices were decidedly more Indian. If I were to "do" one of them in isolation, it might sound racist. But what if I were reciting a conversation between these people, afterwards to an audience who wasn't present? Would I do an accurate impression of everyone's voice, but suddenly stop when I get to the guy with an Indian-sounding accent? Would I have to suddenly drop the voice acting and make him sound like a white guy? If the person who did the voice heard this HR lady talking about him, I wonder if he'd respond with something like, "I wasn't doing a generic Indian accent, I was 'doing' Samir! What, do they all sound the same to you or something? Seems all the HR training has worked on you. It's actually made you incapable of discriminating."

I also have to recall an early season of The Ultimate Fighter in which one of the contestants was a deaf guy (Matt Hamill). The other guys were doing his voice. At one point, one of them turns to the camera and says, "It might sound mean, like we're making fun of deaf people. But we're really just 'doing' Matt." I think it would be more offensive if the guys left Matt out of this male ritual of gentle teasing and hazing. Like, if they didn't want to seem mean in front of the cameras, so instead they just left the deaf guy out of the game. Still, I see how this looks to an outsider who lacks the full context or can't imagine the counterfactual. If you picked this out and showed a bunch of guys mocking a deaf guy's voice in isolation, I understand that this would look bad. 

Saturday, August 7, 2021

If Left Unchecked…

There is a thread in current political thought that I'm trying to put my finger on. I think I might have figured it out. It's a question of whether you're more concerned by the crazies on the left or the right. Who is likely to do more harm? Who is a bigger threat right now? This conversation with Bret Weinstein, Jesse Singal, and James Lindsay embodies the conflict I'm talking about.

Some look at the Trump phenomenon and the crazies who stormed the Capitol building as a new kind of existential threat. Here's a different kind of populist politician with a base of hardcore fans ("fans" in the true sense of "fanatics"). Certainly there was an intent to overturn the outcome of an election. There was a shocking public display of this intent on January 6th. But I would argue that there was never any serious chance that this would have changed the outcome. I just can't see that scenario. Even if the rioters really dug themselves in and it was hard to physically displace them. "There are a bunch of crazy people occupying the Capitol building. Darn, I guess Biden can't be president." Sure, if the angry mob kept growing and seizing government property and nothing ever checked their advance, I guess I could imagine a series of events where a right-wing coup takes over the government. But the "left unchecked" part is doing some heavy lifting here. The police presence was initially inadequate, but it was enough to keep them in check. As I read it, ultimately Trump's pro-cop sensibilities led him to call off the riot and tell his fanatics to go home. (I recall him being remorseful about seeing police officers being assaulted, but maybe I'm misremembering some flash-in-the-pan news report from that day.)

Likewise, some observers look at the rioting in major cities and see an encroaching end to civilization. It wasn't just peaceful protesting, and the non-peaceful protests were not strictly in service of a noble policy change. Some of the protesters wanted to completely abolish existing institutions, and they weren't shy about saying so. A mob of rioters in Portland was advancing nightly on a federal courthouse. The cop-free CHOP/CHAZ zone in Seattle regressed into warlord-ism. Angry mobs were wantonly destroying property, and pseudo-intellectual defenders were apologizing for them. Once again there was an angry mob with the intent of overturning our institutions and seizing power. And certainly there were public displays of this intent, which were sometimes quite frightening. But how far they actually get depends very much on whether these shows of force are checked or unchecked. Would the police cease to form a protective perimeter around the assaulted courthouse, allowing the rioters to literally occupy it and possibly burn it down? Would moderate cities, whose leadership isn't completely captured by woke insanity, start tolerating similar behavior? I think the mobs of wokesters, antifa, and left anarchists (sometimes distinct groups who have different agendas) would come into contact with an opposing force eventually, even if they are successful in the short term. For those people who aren't particularly concerned about the riots, I wonder if this is what they're thinking. (Obviously some people are actively sympathizing with the mobs. I'm not addressing them here; I'm discussing "Overton Window" moderates who disagree with the rioting but can't seem to find the voice to condemn it.)

Back to the conversation with Weinstein, Singal, and Lindsay. It starts with them saying who they voted for and explaining themselves. Weinstein cast a write-in vote for Gabby Giffords, which I think is a fine choice. Lindsay explains how he begrudgingly voted for Trump because he was the only force in public life standing up to Woke madness. Singal voted for Biden. He explains that he's also concerned about how Wokeness is getting into our culture, but the threat of Trumpism is just orders of magnitude more dangerous. (He compares Lindsay's vote to worrying about a mosquito bite while ignoring a bullet wound, if I heard him correctly.)

Singal has been assaulted by the woke mob, so he has authority to speak on this topic. He wrote an excellent piece on the research on "implicit bias" tests, suggesting that they aren't really measuring what they're supposed to be measuring. In fact, given that the test is not consistent from day to day for a given person, it's not really clear that they're measuring anything at all. (Great podcast with Singal here on Rationally Speaking, where he discusses the article and the backlash it received. He even concedes that implicit bias is probably a real thing, it's just that these tests don't actually measure it.) Apparently this was some kind of sacred cow among "anti-racists." Implicit bias tests were this crystal ball with which to divine hidden racism. By smashing it, he deprived the woke movement of one of their tools. He was mobbed on Twitter for it, so I'll give him some deference when he says these cultural threads are not as threatening as Lindsay and Weinstein make them out to be.

Singal asks, I think quite fairly, what concrete policies have the woke mob achieved? Abolition of police is not a mainstream idea, and any time it's come into confrontation with democracy it gets voted down. And the ultimate selection of Biden (the "moderate" candidate) rather than a Sanders or a Warren signals where the American polity is. Singal is basically pointing out that there is a check in place on the insane woke culture that is making so much noise on Twitter and in universities. The check is sufficient that wokeness is confined to those habitats and is not making inroads into the broader world. 

Weinstein and Lindsay both answer with some version of "That's not how this works." Weinstein points out that he did, in fact, have the police abolished on him while an angry mob of students was searching car to car for him. (See ~14:00 in this video.) This was in 2017, well before anyone had heard about George Floyd and before "abolish the police" became a popular hashtag. Wokeness doesn't require democratic approval or official ratification. A committed core of ideological radicals can seize control of institutions, as they did at Evergreen State College. Weinstein's encounter with the Woke mob was more physically dangerous than Singal's, and it turned out to be a career ender for him. I don't know if that means Weinstein has special insight into the danger they pose, or if maybe it means he's deranged by a hazard that's not really likely to manifest itself elsewhere. He definitely takes the view that this isn't just a few crazies on college campuses. If this ideology is gaining a foothold on campus, it'll soon be elsewhere in the world as those radicals graduate and insinuate themselves into institutions in the broader society.

Lindsay has extensively read from the works of the ideological progenitors of wokeness: the postmodernists and the critical race theorists. His book Cynical Theories, coauthored with Helen Pluckrose, is a great exposition on these threads of academic thought. (He, Pluckrose, and Peter Boghossian understand this ideology very well, apparently. The three of them managed to publish several hoax papers in "serious" academic journals, without the editors realizing they were being played.) I've listened to his podcast New Discourses for a while, and I understand why he is concerned. Some of the precursors to today's woke movement talk explicitly about what tactics they will use to take over institutions, and Lindsay points to parallels in today's world. They make no bones about their plans to recast language so certain thoughts become inexpressible. They are explicit about their desire to overthrow norms of open discourse so as to favor particular kinds of discourse. Maybe in today's world that's manifesting as once-a-year hour-long human resources-mandated "sensitivity" sessions, with the occasional innocent victim getting railroaded for a misinterpreted "insensitive" comment. Obnoxious, but not exactly civilization-shattering. But if it's left unchecked? If it grows without bounds and nobody stands up to it? I can imagine a scenario where this hyper-racialized ideology just keeps gaining ground and a critical mass of resistance just never rises up to oppose it. I can also imagine a scenario where nobody cares, people laugh off the stupid HR mandated struggle sessions, and everyone just ignores the obnoxious voices on Twitter because almost nobody is paying attention to it anyway.

I don't think Lindsay is wrong to be worried, but Singal's question about "Where is this actually happening?" is a fair one. Lindsay's answer (if I heard him right) is that woke activists are inserting themselves into the bureaucracy layer of society. Biden himself might not be speaking the woke lingo, but maybe he appoints someone to a cabinet position, who then appoints some staff members, who then manage to affect policy and do some damage. They can also insert themselves in HR departments of private companies, or perhaps they can run sensitivity seminars where they hector and castigate their audience for deviating from the ideology. Weinstein's telling of the downfall of Evergreen is that new management came in and tried to ally itself with the woke movement. Instead of acquiring a useful ally, the college's administration ended up being taken over by it. If I'm reading him right, he's saying the Democratic party is going down the same path. Biden is winking and nodding at them without explicitly endorsing their policy platform. (Listen to their conversation for examples; Weinstein and Lindsay offer a few. Singal is having none of it. Indeed, a few of the examples Lindsay gives are weak. You could say he's reaching. Still I think Singal is too dismissive of the dangers.)

Anti-liberal ideologies are indeed quite dangerous if left unchecked. A right-wing populism that actually succeeds in overthrowing the liberal order would be a disaster, and shame on anyone who takes part in it. An all-consuming left-wing ideology that succeeds in overthrowing the liberal order would be an equal disaster. I see both as having made successful inroads. (Electing a president being a significant example. Achieving hegemony in the discourse being another.) I'd be curious to know if I'm just way off base here. Obviously someone who's an active Trump supporter would say I'm wrong, because Trumpism is actually good for America. I'm not interested in hearing from you, sorry. Obviously someone who's pro-woke would scold me for criticizing wokeness, because obviously it's actually a good thing, and comparing it to a bad thing and asking which is worse is a non sequitur. If that's you, I'm not particularly interested in hearing from you, either. I'm more curious about the people who fall in the space between Lindsay, Singal, and Weinstein, who are concerned about both wokeness and Trumpism but see one as overwhelmingly more threatening. Is it that "X isn't a threat because it's being held in check"? Or is it that "X isn't a threat, because it wouldn't be a bad thing if X got its way"? I feel like people are so deranged by one kind of threat that they are willing to ally themselves with anything that opposes it. (One might call such derangement a "syndrome.") And it's leading some of them to keep some "interesting" company. I wonder if this is just a function of the news sources people are consuming?