Imagine you have a population of criminals and you score
them for recidivism risk. Let’s make the example race agnostic: you’re scoring
a sample composed of a single race, or the data comes from an extremely homogeneous
city or region, or it's from a territory in which the races don’t differ very
much in their proclivity to commit crime or to recidivate after being paroled. Anyway, we take this
racially homogeneous sample and “score” it with a predictive model, such that everyone
is assigned a “probability to recidivate.” This is some number ranging from 0
to 100%, not necessarily distributed uniformly across that range. And suppose the
predictive modeling and scoring is fairly accurate: Of those people who are
assigned a 60% chance of recidivism, 60% of them subsequently recidivate.
Likewise for people assigned 1%, 10%, 80%, 99%, and so on. Whoever built the model did a pretty good job in this respect. Next, we’re going to semi-randomly
assign them a class, “H” or “L”, such that high-probability people are more
likely to get an H and low-probability people are more likely to get an L. The categories
are not pure; there are L’s who have a 90% recidivism rate and H’s who have a
10% recidivism rate. There are L’s who do in fact recidivate and H’s who do
not. But the categories do reflect actual relative risks; overall L’s are less likely
to recidivate than H’s.
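To make this concrete, here is a minimal simulation sketch of the setup above (the distributions and numbers are my own illustrative assumptions, chosen only so the example runs): scores are perfectly calibrated by construction, and the H/L classes are assigned semi-randomly with a probability that rises with risk.

```python
# Minimal sketch of the thought experiment; all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Each person has a "true" probability of recidivating; assume the model
# recovers it exactly, so the scores are perfectly calibrated by construction.
score = rng.beta(2, 3, size=n)           # risk scores, skewed toward lower risk
recidivated = rng.random(n) < score      # outcomes drawn from those probabilities

# Semi-randomly assign class "H" or "L": higher-risk people are more likely
# to get "H", but neither class is pure.
p_high = 0.2 + 0.6 * score               # P(assigned "H") rises with risk
group = np.where(rng.random(n) < p_high, "H", "L")

# Calibration check: among people scored near 60%, about 60% recidivate.
near_60 = np.abs(score - 0.6) < 0.02
print(recidivated[near_60].mean())                   # ~0.60
print(recidivated[group == "L"].mean(),              # L's recidivate less often...
      recidivated[group == "H"].mean())              # ...than H's, as described above
```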
This population, which was fairly modeled and which we arbitrarily
assigned to classes, will exhibit the same “false positives/false negatives”
problem spelled out in ProPublica's "Machine Bias" piece. The H’s, even though their
modeled probability of recidivism was accurate (and thus “fair” on this
criterion), will show a high rate of false positives and a low rate of false
negatives. The L’s will show the converse: high false negatives and low false
positives. It looks like the model is “going easy on” L’s, letting too many guilty ones off the hook. It's also “too hard on” H’s, falsely labeling many of them as likely to recidivate when they ultimately don't.
But this is purely a statistical result of H’s having a higher real propensity to recidivate. It’s not
the result of systemic racism. It can't be, because the H and L categories were assigned
only after we had already built the model.
Of course none of this proves that there isn't systemic racism in law enforcement, and I do not want to make that claim. But it does show that the "bias" ProPublica found is what we'd expect to see even when no bias exists. The metrics ProPublica used to berate the recidivism prediction model will impugn even a fair model. As someone else elegantly put it, ProPublica exploited an impossibility theorem to write that piece. The false positive rate, defined as false positives divided by false positives plus true negatives, will come out higher for the higher-risk group even under an unbiased model, simply because that group's base rate is higher.
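Continuing the simulation sketched above, pick an arbitrary decision threshold (say, flag anyone scored above 0.5 as "likely to recidivate"; that cutoff is my own assumption) and compute ProPublica's metrics separately for each class:

```python
# Continues the earlier sketch (uses score, recidivated, group from above).
def error_rates(flagged, actual):
    fp = np.sum(flagged & ~actual)       # flagged, did not recidivate
    fn = np.sum(~flagged & actual)       # not flagged, did recidivate
    tn = np.sum(~flagged & ~actual)
    tp = np.sum(flagged & actual)
    return fp / (fp + tn), fn / (fn + tp)   # FPR = FP/(FP+TN), FNR = FN/(FN+TP)

flagged = score > 0.5                    # assumed decision threshold
for g in ("L", "H"):
    mask = group == g
    fpr, fnr = error_rates(flagged[mask], recidivated[mask])
    print(f"{g}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")

# Even though calibration is perfect by construction and the classes were
# assigned after the fact, the H's come out with a much higher false positive
# rate and a lower false negative rate than the L's -- the ProPublica pattern.
```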
I wrote two posts about the original ProPublica piece. A recent link-roundup at Slate Star Codex, linking to this piece by Chris Stucchio and Lisa Mahapatra, revived my interest. Note the discussion in the comments between Ilya Shpitser and Chris Stucchio. The discussion was somewhat revived in the comments of this open thread. Stucchio's replies to Shpitser and other critics are incredibly useful for understanding this debate. I recommend reading the relevant parts of those threads.
If I'm asked to adjudicate this debate, I think Stucchio is basically right in his original piece written with Mahapatra; he's also right in his answers to commenters in the SSC threads. Shpitser is evasive and at times incredibly rude to other commenters ("Just dropping here to say this discussion is above your pay grade..."), even as Stucchio tries to get to the heart of their disagreement. I definitely agree with Stucchio that some journalists (like the authors of the ProPublica piece) are being deliberately misleading, while perhaps others are merely being reckless with statistics. But don't take my word for it; you will learn more by reading through the actual threads.
In this comment Stucchio suggests a very useful exercise. He builds a simulated sample (in Python) in which one race is given more traffic tickets due to unfair hassling. Because the hassling is an arbitrary bias, it has nothing to do with future propensity to commit crime, and a fairly simple regression should learn this bias and correct for it. I think everyone who is saying, "Well, what if the algorithm is unfair because of this bias..." should come up with a concrete example and show exactly how predictive modeling is unfair in that world. The exercise can be expanded to answer specific criticisms or specific models of how bias creeps into the data, the modeling, or the ultimate decision-making that is based on them.
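I haven't reproduced Stucchio's code here, but below is a rough sketch of that kind of simulation (the variable names, distributions, and coefficients are my own assumptions, not his): hassling inflates one group's ticket count without changing its real propensity to offend, and a regression that sees both the ticket count and the group indicator learns to discount the inflated counts.

```python
# Rough sketch of the hassling thought experiment; all numbers are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 50_000

hassled = rng.integers(0, 2, size=n)                     # 1 = member of the hassled group
true_risk = rng.beta(2, 5, size=n)                       # underlying propensity to offend
tickets = rng.poisson(2 + 5 * true_risk + 3 * hassled)   # hassling adds tickets carrying no signal
offends = rng.random(n) < true_risk                      # future offending ignores the hassling

X = np.column_stack([tickets, hassled])
model = LogisticRegression().fit(X, offends)
print(model.coef_)   # positive weight on tickets, negative weight on the hassled indicator

# The regression learns a negative coefficient for the hassled group that roughly
# cancels the extra tickets hassling produced, so its predictions track true risk
# rather than the biased ticket counts -- which is Stucchio's point.
```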