Abstract


Excerpted From: Wendy K. Tam Cho and Bruce E. Cain, Deploying Trustworthy AI in the Courtroom: Lessons from Examining Algorithm Bias in Redistricting AI, 2023 University of Chicago Legal Forum 87 (2023) (62 Footnotes) (Full Document)

The inexorable advance of AI has achieved broad societal reach, including into the law. Over the last decade, the courts have reviewed novel forms of evidence enabled by new technological advances. This trend will likely not slow or reverse course in the future. Given the judiciary's general lack of technical expertise, the potential for technological advances to create unintended difficulties for judicial decision-making seems likely. Thus, there needs to be a serious discussion about how the legal system should adapt to meet these inevitable changes.

Although the courtroom is in many ways a unique setting, it faces challenges similar to those that other societal sectors are grappling with. Technological developments come rapidly and require nimble adaptation by society. We should consider the ramifications of, say, the metaverse and deep fakes before they become menacing rather than after. It would be naïve to believe that the courtroom will be spared from the ramifications of rapidly rising technological sophistication. Minimizing deleterious effects on the justice system requires learning from what we know so far and being proactive about experimenting with solutions.

In this Article, we focus on the legal landscape surrounding electoral redistricting. We begin by explaining the general concept of algorithm bias. We then describe how this conception of algorithm bias emerges in redistricting litigation. Finally, we propose avenues for mitigating the deleterious impact of algorithm bias in this legal arena and comment on general lessons learned from this context for deploying trustworthy AI in the courtroom.

[. . .]

All of which leads us back to the core redistricting issues: how should we value various redistricting goals and decide the tradeoffs between them? These types of substantive judgments do not emanate from technology, but rather from human deliberation and consensus building. The Supreme Court may also weigh in, as it did with the one person, one vote doctrine, which not only set out the goal of equally weighted votes but also elevated population equality to the highest priority. Congress may also play a role, as it did when it passed the Voting Rights Act and made minority representation a priority over other goals. State legislatures might also contribute by passing laws requiring respect for local jurisdiction lines and communities of interest. None of these entities has yet been able to agree on or articulate a clear test of partisan fairness. Neither have we, but that answer does not lie with machines and algorithms.


Departments of Political Science, Statistics, Mathematics, Computer Science, and Asian American Studies, the College of Law, and the National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign.

Department of Political Science and Bill Lane Center for the American West, Stanford University.