One question that comes up time and time again is whether discrimination is real. Are differences in outcomes between Group A and Group B due to factors intrinsic to people in each group, as when fire departments require recruits to carry a 250-pound dummy up many flights of stairs? Or are they due to implicit bias, or flat-out discrimination, as when orchestras turn out to be more likely to hire women once they cannot see the candidate during the audition? Then there's the sticky question of whether discrimination is rational: if most women lack the strength to carry a heavy person out of a burning building, is it reasonable to assume that women cannot be firefighters?
In a new working paper, Princeton University economists David Arnold and Will Dobbie and Harvard Law School's Crystal S. Yang seek to tease prejudice apart from reasonableness by looking at bail judges. Using administrative court data from Miami and Philadelphia, they find evidence of significant racial bias. They argue, however, that what they find isn't prejudice so much as "racially biased prediction errors." What's the difference? Their research leads them to conclude that some judges make faulty decisions because they rely on stereotypes in place of accurate information about risk, not because of overt prejudice.
The three researchers were able to tease out the difference because defendants are quasi-randomly assigned to bail judges, which allows the scholars to test whether judges accurately predict which released defendants will engage in pre-trial misconduct. Judges must make quick decisions about setting bail, often seeing the defendant only during a short video call, and must balance the benefits of release, such as lower jail costs and defendants keeping their jobs while awaiting trial, against the probability that a defendant will engage in misconduct before trial.
If a judge is not racially biased, then among defendants released at the margin there should be no difference in the probability of pre-trial misconduct between black and white defendants, all else being equal. If released white defendants turn out to have the higher rate of misconduct, judges must have held black defendants to a stricter standard, overestimating their relative likelihood of reoffending. That is what the researchers find: compared with black defendants, white defendants are 18 percentage points more likely to be rearrested prior to disposition. Judges are systematically underestimating white defendants' probability of engaging in misconduct and overestimating black defendants'.
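The outcome test behind this comparison can be sketched with a few lines of arithmetic. The rates below are purely illustrative (they are not the paper's data); only the resulting 18-point gap echoes the figure reported above.

```python
# Illustrative outcome test with hypothetical numbers (not the paper's data).
# If judges hold black defendants to a stricter release threshold, the
# marginal black defendants who do get released are safer on average,
# so released white defendants show a higher misconduct rate.

marginal_white_misconduct = 0.40  # hypothetical rate for white defendants
marginal_black_misconduct = 0.22  # hypothetical rate for black defendants

gap = marginal_white_misconduct - marginal_black_misconduct
print(f"White-black misconduct gap at the margin: {gap:.0%}")  # prints: 18%

# Under an unbiased benchmark the gap should be zero; a positive gap
# means judges set a higher bar for black defendants, i.e. they
# overestimated black defendants' risk relative to white defendants'.
assert gap > 0
```

The key design choice is comparing defendants *at the margin of release* rather than all defendants, which is what the quasi-random assignment of judges makes possible.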
Yet the authors conclude that the bias they document is consistent not with direct prejudice against black defendants but with "racially biased prediction errors": when judges guess whether the accused poses a danger to society while awaiting trial, they incorrectly believe that black defendants, on the margin, are more likely to reoffend.
They draw this conclusion because an important factor driving racial bias turns out to be the individual judge's experience setting bail terms. They find much more racial bias among inexperienced and part-time judges, who hear far fewer cases and have little accumulated experience to guide them. It may be that with enough experience, racial stereotypes lose their salience. This is underscored by the finding that judges are more likely to make racially biased decisions when assessing high-risk defendants.
This has important implications. If stereotypes lead judges to racially biased snap judgments in bail hearings, they almost certainly distort other high-stakes decisions made quickly and with limited information. A better understanding of how these stereotypes can be broken down and eliminated will help create policies that lessen such racially inequitable outcomes.