Abstract
Excerpted From: Laurie N. Hobart, AI, Bias, and National Security Profiling, 40 Berkeley Technology Law Journal 165 (2025) (274 Footnotes)
We are a nation that profiles. Not all the time, everywhere, but somewhere, every day. Official profiling actions by our states and country include: the genocide of Native Americans, the taking of their lands and children; the genocide and slavery of Africans, their children and descendants; Jim Crow; the FBI/CIA COINTELPRO operation targeting civil rights leaders; broken-windows policing and over-policing of Black Americans; the overincarceration of Black Americans since the abolition of slavery; the exclusion of Chinese immigrants; the detention and family separation of Japanese Americans; McCarthyism, including its targeting of Jewish Americans; the tireless pursuit of Mexican and other Latin American immigrants, culminating in the border wall and the separation of families and, again, the taking of children; the over-policing of Latin Americans; the rejection and refoulement of asylum applicants; the rounding up and alleged physical and emotional abuse of Muslim and Arab immigrants after 9/11; the photo, video, and mosque-crawling surveillance of Muslim and Arab Americans, and the recruitment of community informants; the travel ban against people from Muslim states; and many more examples. Such a shorthand list could never do justice to the many injustices, to the litany of lives changed. It is a litany of loss, to those individuals profiled and persecuted, and to their local and national communities. Our American record is no exception to the patterns of power, fear, and abuse of the “other” stitched across human history. Have we changed, lessened the pattern over time? Perhaps, perhaps not; but with autocracy on the march around the world and autocratic measures and bigotry advocated for openly by politicians at home, we should not dismiss the possibility of our government furthering the worst practices in our own history.
Now enter AI, from stage right, stage left, and even the orchestra pit. Artificial Intelligence (AI) has overwhelmed the modern scene with an omnipresence that will only deepen. We are increasingly aware of the surveillance effects of the Internet of Things, of the constant tracking we submit to by any number of private companies, government agencies, and rogue internet actors. AI tools are being employed in all fields: medicine and health care, online shopping, social media, and environmental protection, to name a few. Generative AI, such as ChatGPT, is poised to change the practice of many disciplines, including law. It may one day help address problems, such as climate change, or create horrors, such as “dangerous biochemicals.” AI has the potential to bring great benefits to humanity but also great risks. Among those risks, “algorithmic bias” has been a source of much debate and research. AI developers seek to improve models to mitigate bias, but experts agree that some bias is inherent in AI, just as it is inherent in humans.
Much has been written about AI in the criminal justice system, such as predictive policing algorithms and risk assessments used by courts for bail, parole, and even sentencing decisions. The algorithms are often, if not always, trained on historical data that reflect systemic racism in the criminal justice system, and they produce biased, discriminatory results. Scholarly and media attention to those uses is critical. This Article, however, focuses on the potential for bias and for AI profiling by elements of the national security apparatus, where government AI will operate under the further cloak of secrecy and the shield of even more permissive case law. Some of the arguments advanced here, however, apply equally well to routine criminal justice contexts.
AI is an arguably necessary intelligence tool for searching, sorting, and analyzing data and reporting. But it has the potential to exacerbate an existing government tendency to profile in national security investigations based on ethnicity and “race,” national origin, and religion, and to reproduce that bias at scale. AI is or likely will be used, for example, in national security criminal investigations; foreign intelligence or counterintelligence operations or investigations; watchlisting practices; border policies and customs investigations; and general monitoring or surveillance programs. The government has a pressing need to use AI for at least some national security and intelligence purposes; that need is well argued and documented. But the legal guardrails are shaky, and at some points along the highway, missing altogether. Because both national security practice and AI development move at great speed, guardrails are especially needed. AI is inherently risky for civil rights and civil liberties, perhaps irresolvably so, and current case law may fail to protect against AI harms.
This Article outlines the ways that AI may exacerbate and reproduce at scale existing bias in national security investigations and surveillance; argues that existing case law is insufficient to protect constitutional rights of equal protection, religious freedom, and due process, and that existing executive policies are likewise inadequate; and suggests litigation strategies for plaintiffs and approaches for courts, Congress, and executive agencies.
Part II briefly explains, from a technical perspective, how bias is produced in and reproduced by AI, how bias might alter investigatory and intelligence outcomes, and how AI might expand the net of people under surveillance.
Part III details how existing national security case law is inadequate and even problematic for the protection of civil rights and liberties against harmful uses of AI. It discusses potential barriers to constitutional challenges under Fourth Amendment, Equal Protection, First Amendment, and Due Process precedents; limitations on Bivens claims; and issues arising from classification, the state secrets doctrine, and general judicial deference in national security contexts. While explaining how current case law risks civil rights and civil liberties in the face of AI profiling, I also seek to provide litigation strategies for civil rights and civil liberties advocates to proceed under the status quo. Courts likewise might adopt such frameworks when applying precedent to biased AI. I argue that the 1996 Supreme Court case Whren v. United States, which typically precludes litigants from challenging law enforcement profiling under the Fourth Amendment where there is a nondiscriminatory basis for the search or seizure, even a merely pretextual one, should not apply to AI-enabled profiling. I also argue that biased AI outcomes should be treated as disparate treatment, rather than simply disparate impact, making them actionable under current Equal Protection law.
Part IV discusses three recent executive policies: the Department of Justice “Guidance for Federal Law Enforcement Agencies Regarding the Use of Race, Ethnicity, Gender, National Origin, Religion, Sexual Orientation, Gender Identity, and Disability,” the Intelligence Community's Artificial Intelligence Principles and Ethics Framework, and the October 2023 executive order on AI and its implementing memoranda. (The 2023 executive order was just revoked by President Trump in January 2025, and policy memoranda directed by it are now under review by his administration, as discussed in Section IV.C.) Collectively, these executive policies have both problematic and helpful aspects for AI governance. While the government has demonstrated technical sophistication in its understanding of AI and, at least in some instances, a dedication to testing AI for bias, any progressive regulations and policies are reversible by future administrations, a process that seems to be underway as of this writing.
Part V suggests solutions that civil rights and civil liberties advocates might pursue in legislation. It also provides recommendations for courts and government attorneys seeking to minimize algorithmic discrimination and national security profiling.
[. . .]
Algorithmic bias should be treated under the law as what it is: exceedingly likely (perhaps inevitable) and objectively measurable. Algorithms that discriminate may do so in subtle or obvious ways, but in all cases in knowable, discoverable ways. Biased AI is facially discriminatory in that it separates and bins people along lines of suspect classification. Any choice to use it, especially by such sophisticated actors as the national security and law enforcement agencies, is purposeful discrimination. Government actors therefore have a responsibility to employ every methodology to reduce bias; such interventions should be viewed as anticlassification tools rather than affirmative action. Where AI is used, it must be tagged to specific behaviors that in themselves might constitute part of criminal activity, such as purchasing equipment necessary for a criminal act, rather than to any social identity descriptors or social behaviors aligned or correlated with suspect classifications. If discriminatory bias cannot be eliminated, the government has a constitutional obligation not to use that AI. Such a choice would also be good security policy, as it would avoid inaccurate intelligence or investigatory conclusions.
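To illustrate the claim that algorithmic bias is objectively measurable, consider a minimal sketch of a disparity audit. The sketch below computes two standard group-level diagnostics, the gap in selection rates and the gap in false-positive rates, over a model's flagging decisions. The records, group labels, and numbers are hypothetical and purely illustrative, not drawn from any agency system; an actual audit would run the same calculations over a deployed model's documented outputs and test data.

```python
# Hypothetical illustration: measuring group-level disparity in a screening
# model's outputs. All data and group labels here are invented for this sketch.

from collections import defaultdict

# Each record: (group, flagged_by_model, actually_engaged_in_target_behavior)
records = [
    ("group_a", True,  False), ("group_a", True,  True),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True,  False), ("group_b", True,  False),
    ("group_b", True,  True),  ("group_b", False, False),
]

def rates_by_group(records):
    """Return selection rate and false-positive rate for each group."""
    stats = defaultdict(lambda: {"n": 0, "flagged": 0, "neg": 0, "fp": 0})
    for group, flagged, positive in records:
        s = stats[group]
        s["n"] += 1
        s["flagged"] += flagged
        if not positive:                # person did not engage in the behavior
            s["neg"] += 1
            s["fp"] += flagged          # but was flagged anyway
    return {
        g: {
            "selection_rate": s["flagged"] / s["n"],
            "false_positive_rate": s["fp"] / s["neg"] if s["neg"] else float("nan"),
        }
        for g, s in stats.items()
    }

rates = rates_by_group(records)
for group, r in rates.items():
    print(group, r)

# Disparity measures an auditor, agency, or court could examine:
sel = [r["selection_rate"] for r in rates.values()]
fpr = [r["false_positive_rate"] for r in rates.values()]
print("selection-rate gap:", max(sel) - min(sel))
print("false-positive-rate gap:", max(fpr) - min(fpr))
```

Even this simple calculation turns disparity into a concrete, reviewable number rather than a matter of intuition, which is the sense in which bias in a deployed model is knowable and discoverable.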
Associate Teaching Professor, Syracuse University College of Law, Institute for Security Policy and Law (SPL); former Assistant General Counsel within the Intelligence Community.