Abstract

Excerpted From: Gideon Christian, The New Jim Crow: Unmasking Racial Bias in AI Facial Recognition Technology within the Canadian Immigration System, 69 McGill Law Journal 441 (October 2024) (94 Footnotes)

 

Facial recognition technology (FRT) is an artificial intelligence (AI)-based biometric technology that uses computer vision to analyze facial images and identify individuals by their unique facial features. This sophisticated AI technology uses advanced computer algorithms to generate a biometric template from a facial image. The biometric template encodes unique facial characteristics as a set of data points, which can be matched against identical or similar images in a database for identification purposes. The biometric template is often likened to a unique facial signature for each individual.
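To make the template-and-match process described above concrete, the following is a minimal illustrative sketch in Python. It is not the pipeline of any system discussed in this paper: `embed_face` is a hypothetical stand-in for a real embedding model, and the 128-dimensional template, cosine-similarity scoring, and 0.6 threshold are illustrative assumptions only.

```python
# Minimal sketch of FRT template generation and one-to-many matching.
# Hypothetical throughout: embed_face stands in for a real embedding
# model; template size, similarity metric, and threshold are assumptions.
import numpy as np

def embed_face(image_id: str) -> np.ndarray:
    """Stand-in 'biometric template': a fixed-length unit vector.
    A real system would run a neural network over the face image."""
    rng = np.random.default_rng(abs(hash(image_id)) % (2**32))
    v = rng.normal(size=128)          # 128-dim template (assumed size)
    return v / np.linalg.norm(v)      # normalize to unit length

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two templates (1.0 = identical)."""
    return float(np.dot(a, b))        # vectors are already unit-length

def match(probe_id: str, gallery: dict[str, np.ndarray],
          threshold: float = 0.6) -> list[tuple[float, str]]:
    """One-to-many search: rank gallery identities above a threshold."""
    probe = embed_face(probe_id)
    hits = [(similarity(probe, tpl), pid) for pid, tpl in gallery.items()]
    return sorted([h for h in hits if h[0] >= threshold], reverse=True)

# Usage: enroll templates, then search with a new image.
gallery = {pid: embed_face(pid) for pid in ["A-001", "A-002", "A-003"]}
print(match("A-001", gallery))        # the same image matches itself
```

The threshold in such a sketch is where the fairness concerns taken up below enter the picture: because error rates in real systems vary across demographic groups, a single fixed threshold can produce markedly different false-match rates for different populations.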

A significant rise in the deployment of AI-based FRT has occurred in recent years across the public and private sectors of Canadian society. Within the public sector, its applications include law enforcement in criminal and immigration contexts, among many others. In the private sector, it has been used for tasks such as exam proctoring in educational settings, fraud prevention in the retail industry, unlocking mobile devices, and sorting and tagging digital photos. The widespread use of AI facial recognition in both sectors has generated concerns about its potential to reflect and perpetuate historical racial biases and injustices. The emergence of terms like “the new Jim Crow” and “the new Jim Code” draws a parallel between the racial inequalities of the post-US Civil War Jim Crow era and the racial biases present in modern AI technologies. These comparisons underscore the need for a critical examination of how AI technologies, including FRT, might replicate or exacerbate the systemic racial inequities and injustices of the past.

This research paper examines critical issues arising from the adoption and use of FRT in the public sector, particularly within the framework of immigration enforcement in the Canadian immigration system. It delves into recent Federal Court of Canada litigation relating to the use of the technology in refugee status revocation proceedings by agencies of the Canadian government. By examining these cases, the paper explores the implications of FRT for the fairness and integrity of immigration processes, highlighting the broader ethical and legal issues associated with its use in administrative processes.

The paper begins with a concise overview of the Canadian immigration system and the administrative law principles applicable to its decision-making process. This is followed by an examination of the history of integrating AI technologies into the immigration process more broadly. Focusing specifically on AI-based FRT, the paper will then explore the issues of racial bias associated with its use and discuss why addressing these issues is crucial for ensuring fairness in the Canadian immigration process. This discussion will lead to a critical analysis of Federal Court litigation relating to the use of FRT in refugee status revocation, further spotlighting the evidence of racial bias in the technology's deployment within the immigration system.

The paper will then develop the parallels between the racial bias evident in contemporary AI-based FRT (the “new” Jim Crow) and the racial bias of the past (the “old” Jim Crow). By focusing on the Canadian immigration context, it seeks to uncover the subtle yet profound ways in which AI-based FRT, despite its purported neutrality and objectivity, can reinforce the racial biases of the past. Through a comprehensive analysis of current practices, judicial decisions, and the technology's deployment, the paper aims to contribute to the ongoing dialogue about technology and race. It challenges the assumption that technological advancements are inherently equitable, urging a re-evaluation of how these tools are designed, developed, and deployed, especially in sensitive areas such as refugee status revocation, where the stakes for fairness and equity are particularly high.

 

[. . .]

 

We stand at a pivotal moment in the interplay between technology and race. The parallels drawn between the racial biases embedded in FRT and the systemic racism of the Jim Crow era highlight not just a technological issue but a profound and novel racial justice crisis. As the examples and Federal Court litigation discussed above demonstrate, the deployment of FRT in immigration processes risks perpetuating discriminatory practices that society has long struggled to overcome.

The cases of Barre, AB, Abdulle, and others underscore the need for transparency, accountability, and procedural fairness in the use of FRT by Canadian immigration and border control authorities. The refusal to disclose the technological underpinnings of decision-making processes not only undermines trust in these institutions but also obscures the potential for inherent bias within these systems. While this paper does not advocate the complete abolition of FRT, as Sarah Hamid has suggested, a compelling challenge remains: to improve the accuracy of FRT across racial lines while ensuring that its application aligns with the principles of transparency, justice, and equality that form the bedrock of Canadian society. Meeting this challenge could entail a moratorium on the use of the technology in vital immigration processes, such as refugee status revocation, until these principles are enshrined in policy and practice.

This analysis illustrates the urgent need for a regulatory and ethical framework that addresses the complexities of using AI in sensitive societal domains. Such a framework must prioritize the protection of individual rights, particularly the rights of individuals from marginalized communities, who are most at risk of being adversely affected by biases in AI technologies. It also calls for a concerted effort among technologists, policymakers, civil society, and affected communities to engage in a dialogue aimed at reimagining the role of AI technologies in society. This dialogue must be rooted in an understanding of historical injustices and a commitment to preventing the re-emergence of Jim Crow in new digital forms.

Furthermore, the discussion around FRT and systemic racism extends beyond the boundaries of immigration and touches on broader issues of surveillance, privacy, and social control. The normalization of surveillance technologies under the guise of security and efficiency poses significant questions about the kind of society we want to build and the values we wish to uphold. As Hamid's abolitionist stance suggests, the uncritical adoption of technologies like FRT risks entrenching carceral logics in the fabric of daily life, reinforcing rather than dismantling structures of oppression.

The research concludes with a call for a technological civil rights movement. Such a movement would advocate for the ethical development and deployment of AI technologies, ensuring they serve to enhance human rights and equality rather than diminish them. It would also push for the right of individuals to challenge the decisions made by or with the assistance of AI technologies, thus upholding the principles of procedural fairness and transparency.

As we move forward, it is imperative that we critically examine the technologies we adopt and their impact on society. The lessons of the past must guide our path forward, ensuring that technological advancements contribute to a more just and equitable world. This path requires vigilance, advocacy, and a willingness to challenge the status quo so that the digital future we build is inclusive, equitable, and reflective of our highest aspirations as a society.


PhD; Associate Professor and University Research Chair (AI and Law), Faculty of Law, University of Calgary.