Wednesday, May 3, 2017

Probability, Forecasts, and Reading the News

If I say, “There is a 1% chance of a coup in the Ivory Coast this year,” and then a coup happens, was I wrong? Of course not. I told you it could happen. Perhaps I have a giant portfolio of public forecasts, and it could very well be that, across all of my “1% chance” forecasts, the predicted events happen about 1% of the time. My forecast may have been the best forecast anyone could have made with the available information. It’s just that unlikely things happen now and then.

If the weatherman says there is a 5% chance of rain today and it rains, was he wrong? Again, no. Weather forecasts are extremely well calibrated: of all the times forecasters say “5% chance of rain,” it does indeed rain on roughly 5% of those occasions. It’s not fair to call any one of these a “bad forecast” when, taken as a whole, the “5% chance of rain” forecasts are extremely accurate.
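
Here’s a tiny Python sketch to make that point concrete. The numbers are made up for illustration: suppose a forecaster issues 10,000 “5% chance of rain” forecasts, and the true chance on each of those days really is 5%.

import random

random.seed(0)

# Hypothetical numbers: 10,000 "5% chance of rain" forecasts, and the true
# chance of rain on each of those days really is 5%.
n_forecasts = 10_000
p_rain = 0.05

rainy_days = sum(random.random() < p_rain for _ in range(n_forecasts))

# The observed frequency lands near 5%, so the forecasts are well calibrated
# even though it still rained on hundreds of individual days.
print(f"Rained on {rainy_days} of {n_forecasts} forecast days "
      f"({rainy_days / n_forecasts:.1%})")

It rains on roughly 500 of those days. Each of those rainy days looks like a “failed” forecast in isolation, yet the forecaster was exactly right about the overall rate.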

Suppose something terrible happens under my leadership. My factory explodes, or my oil rig spills tons of oil into the Gulf, or some over-zealous employees beat up a customer. Was I incautious? Was I wrong in my assessment of these risks? Maybe, maybe not. I probably have some kind of corporate risk management in place, perhaps even a team dedicated to studying and avoiding exactly these kinds of risks. (“Enterprise risk management” is a hot topic right now.) My company probably has various safety protocols designed to avoid explosions/oil spills/customer beatings. It could well be that my company is over-cautious compared to my competitors, and I just had bad luck. Maybe my company’s protocols reduce the chance of a factory explosion in a given year to 0.1%, while my reckless competitor sits at 1%. I could get unlucky even though I’m far more cautious than my non-exploding competitor.

I’m not trying to say, “Let’s be understanding when a large organization causes something terrible to happen.” Maybe public outrage (even uninformed outrage) is a good motivator to avoid mishaps. But if your goal is to actually understand what went wrong, it’s probably a mistake to assume that the mishap was caused by an identifiable error. Sure, you can look over the exploding oil rig’s safety protocols after the fact and find them inadequate. But you could probably do the same for the oil rigs that didn’t explode. You might find that they all have protocols in place calibrated to reduce the risk of a major spill to, say, 0.01% per rig per year. If there are a few thousand such rigs, one of them is going to spill every few years. It might not be very informative to ask, “Gee, what went wrong with this one?”

If we’re going to publicly dissect disasters, the purpose should be to gain knowledge and avoid future disasters. We rarely hear commentary such as, “It turns out that the airline’s protocols for involuntarily removing a passenger were standard and appropriate,” or “Actually, the safety measures on the exploding oil rig went far beyond the normal standard, and the operator was simply unlucky.” But I strongly suspect this is often the case, particularly in these viral “outrageous news” stories. Freak accidents happen even when everyone is exercising appropriate caution. Our rhetoric needs to adjust to reflect this reality.
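
To put rough numbers on the oil rig example above, here’s a quick back-of-the-envelope sketch. The 3,000 rigs is a made-up figure standing in for “a few thousand,” and the 0.01% per rig per year is the hypothetical risk level I mentioned.

# Made-up but plausible numbers: 3,000 rigs, each with a 0.01% chance of a
# major spill in any given year.
n_rigs = 3_000
p_spill_per_year = 0.0001

expected_spills_per_year = n_rigs * p_spill_per_year           # 0.3
p_at_least_one_spill = 1 - (1 - p_spill_per_year) ** n_rigs    # about 26%

print(f"Expected major spills per year: {expected_spills_per_year:.2f}")
print(f"Chance of at least one spill in a given year: {p_at_least_one_spill:.0%}")
print(f"Roughly one spill every {1 / expected_spills_per_year:.0f} years")

Even with every rig run at that level of caution, there’s about a one-in-four chance of a major spill in any given year, or roughly one every three years. Somebody ends up in the headlines without having done anything unusually wrong.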
