Abstract
Excerpted from: Niria Rodriguez-Davila, Machine Learned Misogyny: Gender Bias in AI, 35 DePaul Journal of Art, Technology & Intellectual Property Law 65 (Spring 2025) (218 Footnotes)
In 1950, “when computers were new and unimpressive by today's standards,” Alan Turing first suggested the “possibility of intelligent machines.” Turing proposed the “Imitation Game,” in which a human judge would type questions to a computer and a human; the computer won if the judge could not identify which answers were machine generated. Turing explored the idea only as a mathematical possibility. But only five years later, the Logic Theorist emerged, “a program designed to mimic the problem solving skills of a human.” A year later, the program was presented at a conference where the term artificial intelligence (“AI”) was first coined. So begins the story of AI; these key moments set off the next seven decades of development. Thus began our descent into the “Code-Dependent” “Algorithm Age,” an age where algorithms are “everywhere, in everything,” “silent workhorses aligning datasets and systematizing the world.”
For the past decade, machine learning (“ML”) has been the way “most parts of AI are done,” leading people to sometimes use the terms interchangeably. However, ML is actually a subfield of AI. ML is about “giv[ing] computers the ability to learn without explicitly being programmed.” An ML algorithm is created by feeding a “model” historical data; the model can then predict future outcomes by “assess[ing] historical data [and] discover[ing] patterns.” There is no shortage of data to feed these ML models (“MLMs”). “Online and off, nearly every life choice you've made ... has been logged, categorized, and then entered in a spreadsheet to be sold off.” This massive data collection and the resulting algorithmic predictions fuel personalized advertising, which may seem innocuous enough, although sometimes these advertisements come off as too personalized, intrusive even. Algorithms collect so much of our data that they seem to know us better than we know ourselves, or at least better than our loved ones do. The larger issue, however, is that algorithms are doing much more than pushing coupons. MLMs are making decisions all around us. “From what we choose to read to who we choose to date, algorithms are increasingly playing a huge role.” “Algorithms are making hugely consequential decisions in our society on everything from medicine to transportation to welfare benefits to criminal justice and beyond.”
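To make concrete how a model trained on historical records can reproduce the patterns, including the prejudices, embedded in those records, consider the following minimal sketch. It uses Python's scikit-learn library; the feature names and the handful of invented “hiring” records are purely hypothetical, introduced only for illustration and not drawn from any system discussed in this comment.

```python
# A minimal, hypothetical sketch: a model "learns" from invented historical
# hiring records in which equally experienced women were hired less often
# than men, then reproduces that pattern for new applicants.
from sklearn.linear_model import LogisticRegression

# Each row is [years_of_experience, applicant_is_woman (0 = no, 1 = yes)].
X_history = [
    [5, 0], [6, 0], [4, 0], [7, 0],   # male applicants
    [5, 1], [6, 1], [4, 1], [7, 1],   # female applicants
]
# Labels: 1 = hired, 0 = rejected. Every man was hired; only one woman was.
y_history = [1, 1, 1, 1, 0, 0, 0, 1]

model = LogisticRegression().fit(X_history, y_history)

# Two new applicants with identical experience, differing only by gender.
new_applicants = [[6, 0], [6, 1]]
print(model.predict(new_applicants))        # likely [1 0]: the old pattern repeats
print(model.predict_proba(new_applicants))  # predicted class probabilities
```

Nothing in this sketch instructs the model to disfavor women; the disparity emerges solely because the historical data it was fed already contained that pattern, which is the mechanism of encoded bias this comment addresses.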
The rise of AI has essentially put us in “a new industrial revolution.” The implementation of algorithmic decision-making was “marketed as fair and objective,” free of human prejudice, “just machines processing cold numbers.” Yet algorithms are still “based on choices made by fallible human beings .... many of these models encode[] human prejudice, misunderstanding, and bias into the software systems that increasingly manage[] our lives.”
Encoded bias includes misogyny. Misogyny is defined as “hatred of, aversion to, or prejudice against women.” The Berkeley Haas Center for Equity, Gender, and Leadership analyzed 133 systems from 1988 to 2021 and found “44.2 percent (59 systems) demonstrate gender bias, with 25.7 percent (34 systems) exhibiting both gender and racial bias.” When we think of misogyny, we often think of “outright misogyny,” everything from “catcalling to gender-based violence.” Our new hyper-technological landscape does perpetuate these kinds of harms. However, the focus here is “the systemic role [misogyny] plays in our world.” More specifically, this comment explores how AI is incorporating and amplifying harmful historical attitudes towards women. It is important to note that misogyny never operates alone, and this is true in AI. AI upholds racism, which creates particularized problems for Black and Brown women. AI also upholds the gender binary, which creates unique issues for transgender women. Unfortunately, as will be explained further in Part II, the gender data gap means that data on women in marginalized groups is “practically nonexistent.” However, wherever possible, this comment seeks to take an intersectional approach.
Part II will provide background on how MLMs become biased. Part III will discuss areas where the use of biased AI systems has resulted in harm. Part IV will discuss the current regulatory landscape and make suggestions for where legislative efforts should be focused. This comment will conclude in Part V with a discussion of the road ahead.
[. . .]
It may seem like a large undertaking to regulate the rapidly developing AI industry. However, these efforts can be achieved, at least in part, by expanding existing laws to cover AI. Alternatively, even without a specific AI provision, the discriminatory consequences of AI should be addressed within existing frameworks. For example, although ultimately no violation was found, the Apple Controversy was investigated as a potential violation of the ECOA. Whether it occurs through direct human bias or encoded algorithmic bias, the resulting discrimination is the same and should be treated as such. As Google's President of Global Affairs put it, “[w]e don't need duplicative laws or reinvented wheels.” “Until recently, software developers have not paid enough attention to ensuring their algorithms operate within existing laws.” Although it may be difficult to make progress in the current climate, we cannot allow this willful blindness to continue, especially when the harm to women is so great.
Niria Rodriguez-Davila is a 2025 DePaul University College of Law J.D. Candidate.