Abstract

Excerpted From: Melissa Hamilton, The Biased Algorithm: Evidence of Disparate Impact on Hispanics, 56 American Criminal Law Review 1553 (Fall, 2019) (140 Footnotes) (Full Document)

 

Automated risk assessment is all the rage in the criminal justice system. Proponents view risk assessment as an objective way to reduce mass incarceration without sacrificing public safety. Officials thus are becoming heavily invested in risk assessment tools--with their reliance upon big data and algorithmic processing--to inform decisions on managing offenders according to their risk profiles.

While the rise in algorithmic risk assessment tools has earned praise, a group of over 100 legal organizations, government watch groups, and minority rights associations (including the ACLU and NAACP) recently signed onto “A Shared Statement of Civil Rights Concerns” expressing unease over whether the algorithms are fair. In 2016, the investigative journalist group ProPublica kickstarted a public debate on the topic when it proclaimed that a popular risk tool called COMPAS was biased against Blacks. Prominent news sites highlighted ProPublica's message that this was yet another area in which criminal justice consequences were racist. Yet the potential that risk algorithms are unfair to another minority group has received far less attention in the media or amongst risk assessment scholars and statisticians: Hispanics. This general disregard persists despite the fact that Hispanics represent an important cultural group in the American population: recent estimates place them as the largest minority, at almost fifty-eight million members, and that number is rising quickly.

This Article intends to partly remedy this gap in interest by reporting on an empirical study about risk assessment with Hispanics at the center. The study uses a large dataset of pretrial defendants who were scored on a widely-used algorithmic risk assessment tool soon after their arrests. The Article proceeds as follows.

Section II briefly reviews the rise in algorithmic risk assessment in criminal justice generally, and then in pretrial contexts more specifically. The discussion summarizes ProPublica's findings from its analysis of COMPAS scores comparing Blacks and Whites.

Section III discusses further concerns that algorithmic-based risk tools may not be as transparent and neutral as many presume them to be. Insights from behavioral sciences literature suggest that risk tools may not necessarily incorporate factors that are universal or culturally-neutral. Hence, risk tools developed mainly on Whites may not perform as well on heterogeneous minority groups. As a result of these suspicions, experts are calling on third parties to independently audit the accuracy and fairness of risk algorithms.

The study reported in Section IV responds to this invitation. Using the same dataset as ProPublica, we offer a range of statistical measures testing COMPAS' accuracy and comparing outcomes for Hispanics versus non-Hispanics. Such measures address questions about the tool's validity, predictive ability, and the potential for algorithmic unfairness and disparate impact upon Hispanics. Conclusions follow.
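To give a concrete sense of what such a third-party audit involves, the following sketch illustrates, in Python, the kind of group-wise measures described above: predictive ability (AUC), calibration of a high-risk label, and false positive rates compared across subgroups. This is not the author's code; the column names, the decile-score cutoff of eight for "high risk," and the two-year recidivism outcome are assumptions made purely for illustration.

    # Illustrative sketch only (not the author's analysis). Column names,
    # the decile cutoff of 8 for "high risk," and the two-year recidivism
    # outcome are assumed for illustration.
    import pandas as pd
    from sklearn.metrics import confusion_matrix, roc_auc_score

    def audit_by_group(df: pd.DataFrame, group_col: str = "hispanic") -> pd.DataFrame:
        """Compare validity, calibration, and error-rate measures across subgroups."""
        rows = []
        for group, sub in df.groupby(group_col):
            # Predictive ability: how well scores rank-order actual outcomes (AUC).
            auc = roc_auc_score(sub["two_year_recid"], sub["decile_score"])

            # Calibration: observed recidivism rate among defendants labeled high risk.
            high_risk = sub["decile_score"] >= 8
            observed_rate = sub.loc[high_risk, "two_year_recid"].mean()

            # Error balance: false positive rate among those who did not recidivate.
            tn, fp, fn, tp = confusion_matrix(
                sub["two_year_recid"], high_risk.astype(int), labels=[0, 1]
            ).ravel()
            rows.append({group_col: group,
                         "auc": auc,
                         "observed_rate_among_high_risk": observed_rate,
                         "false_positive_rate": fp / (fp + tn)})
        return pd.DataFrame(rows)

Marked disparities across the resulting rows--for example, a notably higher false positive rate or a lower observed recidivism rate among those labeled high risk for one group--are the sort of evidence the fairness definitions applied in the study are designed to capture.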

[. . .]

Algorithmic risk assessment holds promise for informing decisions that can reduce mass incarceration by releasing more prisoners through risk-based selections that consider public safety. Yet caution is in order where presumptions of transparency, objectivity, and fairness in the algorithmic process may be unwarranted. Calls for third-party audits of risk tools from those who heed such caution led to the study presented herein. This study is relatively unique in focusing on the potential for unfairness to Hispanics.

Across multiple definitions of algorithmic unfairness, the results consistently showed that COMPAS, a popular risk tool, is not well calibrated for Hispanics. The statistics presented evidence of differential validity and differential predictive ability based on Hispanic ethnicity. The tool fails to predict actual outcomes accurately in a linear manner and overpredicts risk for Hispanics. Overall, there is cumulative evidence of disparate impact. It appears quite likely that factors related to cultural differences, extraneous to those scored by the COMPAS risk scales, account for these results. These findings should signal to officials that greater care must be taken to ensure that proper validation studies are undertaken to confirm that any algorithmic risk tool used is fair for its intended population and subpopulations.


Senior Lecturer of Law & Criminal Justice, University of Surrey School of Law; J.D., The University of Texas at Austin School of Law; Ph.D., The University of Texas at Austin (criminology/criminal justice).

 

