Abstract

Excerpted From: Teresa Scassa, A Reasonable Apprehension of AI Bias: Lessons from R. v. R.D.S., 69 McGill Law Journal 467 (October 2024) (85 Footnotes)

 

The risk of discriminatory bias is a central concern when it comes to the use of artificial intelligence (AI) technologies for automated decision-making (ADM). Identification and mitigation of such bias is an important preoccupation of emerging laws and policies. Currently, public-sector automated decision systems (ADS) operate in lower-risk contexts than the criminal justice system, although the use of such tools is evolving. In the private sector, ADS are already deployed in higher-impact contexts such as the selection of tenants for apartments, the determination of creditworthiness, and hiring and performance evaluation. Generative AI systems such as ChatGPT can also be used to support ADM in different ways, including in the preparation of briefing materials, translation, and drafting decisions.

This paper explores discriminatory bias in ADM using the series of court decisions in R. v. R.D.S. (RDS) (culminating in a 1997 decision of the Supreme Court of Canada) to illustrate some of the potential frailties in approaches to this issue. It is important to note at the outset that RDS addressed the issue of 'reasonable apprehension of bias', which differs significantly from the human-rights-based concept of discriminatory bias. Nevertheless, the case is important because in it, the concept of impartiality that underlies the doctrine of reasonable apprehension of bias becomes intertwined with the notion of discriminatory bias in complex and interesting ways. In RDS, the alleged apprehension of bias is tied to a Black woman judge's perception of the credibility of two witnesses -- one White and one Black. Credibility -- something typically stripped from the targets of discrimination -- is left to be determined by a decision-maker who is in turn challenged for bringing a racialized (i.e., non-White) perspective to the task. This complicated and messy case challenges risk-mitigation approaches to AI bias in which we identify risks, develop strategies to mitigate them, and monitor outcomes. Risk-based approaches tend to assume that there is a social consensus about what bias is and how it is manifested. They also tend to lead us towards technological solutions. RDS teaches us that understanding, identifying, and addressing bias may be much messier.

This paper begins with a brief overview of discriminatory bias in AI systems. Part 2 provides a summary of the dispute at the heart of RDS. Part 3 teases out four themes emanating from RDS that are relevant to the AI context: (1) the tension between facts and opinion, (2) transparency and explainability, (3) the issue of biased input and biased output, and (4) the role of the human-in-the-loop. The paper concludes by arguing that a statistical and technological approach to identifying and mitigating bias in ADM may unduly narrow the focus, and calls for a more robust approach to addressing bias in AI.

 

[. . .]

 

Risk regulation, the dominant paradigm for AI governance, is premised on the existence of risks that must be mitigated. Such risks include harmful bias and discrimination, which will be disproportionately borne by those who have experienced generations of discrimination, compounding existing inequality. Furthermore, although bias and discrimination are often presented as issues of data quality or flawed assumptions in algorithms, RDS teaches us that the problems are more complex than merely biased or incomplete data. There may be fundamental differences as to how we are prepared to understand or interpret the data, how we build the systems to process the data, how we adopt, implement and oversee systems, and who is engaged in these processes. While the NIST AI RMF and the EU-US AI definitions attempt to capture this broader understanding of how bias and discrimination may be manifested in ADM, this approach is less evident in Canada. In all contexts, there is a real risk that risk-mitigation measures will be reduced to automated assessments of outputs and enhanced data curation. Even though these are important activities, they are not sufficient. Just as the problems are not solely in the machines, neither are the solutions.

Fuller approaches to bias and discrimination in AI are not limited to issues of data quality or coded assumptions; they extend to the very choices that are made about how to deploy AI and in what contexts. RDS reminds us that very experienced, highly trained, well-paid, and respected members of society can have profoundly different opinions about what constitutes the facts and whether bias exists. It is a reminder that bias and discrimination in AI systems are fundamentally human issues, and that artificial intelligence remains a fundamentally human technology from its inception to its deployment. This suggests that we have much work to do--and much more challenging and complex work at that--to address bias and discrimination in AI.


Canada Research Chair in Information Law and Policy, University of Ottawa.