Sunday, January 27, 2019

The Clinician's Error

Here is a great piece by Maia Szalavitz, titled Most People with Addiction Simply Grow Out of It. She really puts her finger on a common logical fallacy, one that's especially prevalent in drug policy discussions: the clinician's error. It's the tendency to think that the cases you see are typical, when in fact they are often extremely biased samples of humanity. The process that generates these examples is not dipping its ladle into an even mix of human experiences; it is drawn specifically to the worst one-tenth of one-tenth of a percentile of "things that can possibly go wrong." From the article:

So why do so many people still see addiction as hopeless? One reason is a phenomenon known as “the clinician’s error,” which could also be known as the “journalist’s error” because it is so frequently replicated in reporting on drugs. That is, journalists and rehabs tend to see the extremes: Given the expensive and often harsh nature of treatment, if you can quit on your own you probably will. And it will be hard for journalists or treatment providers to find you.

Treatment providers get a similarly skewed view of addicts: The people who keep coming back aren’t typical—they’re simply the ones who need the most help. Basing your concept of addiction only on people who chronically relapse creates an overly pessimistic picture.
This is one of many reasons why I prefer to see addiction as a learning or developmental disorder, rather than taking the classical disease view. If addiction really were a primary, chronic, progressive disease, natural recovery rates would not be so high and addiction wouldn’t have such a pronounced peak prevalence in young people.

I see so many comments on drug policy pieces that start with "as a doctor..." or "as a police officer...", after which the "expert" proceeds to tell it like it is (in their own mind, anyway). I've personally gotten this kind of response from people in law enforcement and medicine, and it's always very condescending. As in, "I'm the experienced, world-weary practitioner and you're simply a bookworm with no practical experience in this matter." Still, these people reach the wrong conclusion by committing a pretty basic error: they ignore sample bias. They ignore the fact that, almost by definition, their professions bring them face-to-face with the worst-case scenarios. They get called in when something bad happens. (I'm also tempted to point out that many doctors and many people in law enforcement have reached the opposite conclusion, so the appeal to authority does no good here.)

I want to respond by saying, "Look, pal. Driving is incredibly dangerous. I'm an actuary. I look at auto claims all day. It's all we worry about! It's the only thing that causes my company any trouble. These accidents happen all the time. Here is a long list of fresh examples..." This would be incredibly silly, and a serious person could tell me that I'm fixating on the rare accidents while ignoring the thousands of safe car trips that occur for each one. In fact, we actuaries have a pretty good tradition of measuring everything that happens, not just the times something goes wrong. My database contains not just the accidents but also the car-years that never produce a claim, and we explicitly calculate the risk in quantitative terms. (For example, "The frequency of an auto claim causing bodily injury is 0.5% per car-year.") I don't fall into the clinician's error, because I know about all the times that nothing bad happened. (I wrote a post about this a while ago.)
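To make the arithmetic concrete, here's a minimal sketch in Python. The numbers are made up to match the 0.5% figure above; the point is the contrast between the actuary's view, which divides by all exposure, and the biased sample that lands on a claims adjuster's desk:

    # Hypothetical numbers; what matters is the denominator, not the data.
    total_car_years = 1_000_000        # every insured car-year, claim or not
    injury_claims = 5_000              # bodily-injury claims in that exposure

    # The actuary's view: events divided by ALL exposure.
    frequency = injury_claims / total_car_years
    print(f"Claim frequency: {frequency:.1%} per car-year")   # -> 0.5%

    # The clinician's error: a sample drawn only from the claims.
    # Every file on the desk is an accident, so the denominator of
    # safe car-years is invisible and the risk looks like 100%.
    files_on_desk = injury_claims
    accidents_in_sample = injury_claims
    print(f"Accident rate in the adjuster's files: "
          f"{accidents_in_sample / files_on_desk:.0%}")       # -> 100%

The same logic applies to the doctor or police officer: their "files" are drawn entirely from the numerator, so the quiet cases never show up in their estimate.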
