Abstract

Excerpted From: Michael H. LeRoy, Algorithmic Bias in Hiring: Amending Title VII to Prohibit AI Discrimination, 51 Journal of Legislation 261 (April 2025) (227 Footnotes).


My Article proposes legislation to address racial and other biases in the workplace that result from Artificial Intelligence (AI) technologies. AI is already having large effects on work--some good, and others that raise concerns. I focus on AI hiring because these technologies have become commonplace, and they appear to perpetuate biased practices.

My proposal is simple and comprehensive. It also avoids the technically demanding job of fashioning a law tailored to rapidly evolving AI technologies. Some laws take that technology-specific approach: the Artificial Intelligence Video Interview Act (AIVIA) in Illinois and a few state privacy laws are examples. My approach instead offers two small but significant amendments to Title VII of the Civil Rights Act of 1964, the employment law that prohibits discrimination based on race, color, religion, sex, and national origin.

First, I propose an amendment to Section 701(c), which defines an employment agency, to include an AI-hiring entity that an employer uses. My proposal has the advantage of drawing from extensive Title VII caselaw that holds third parties liable when they assist employers in discriminatory job advertising and applicant screening.

Second, further down Title VII's list of definitions, I propose a new definition of Artificial Intelligence, following Section 701(o) and designated Section 701(p). This definitional clarity is necessary to give meaning to the new definition of an “employment agency.” I derive this proposal from President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Oct. 30, 2023), which states in Section 3(b):

The term ‘artificial intelligence’ or ‘AI’ has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.

Integrating my two proposals, amended Section 701(c) would provide the following:

The term ‘employment agency’ means any person regularly undertaking with or without compensation to procure employees for an employer or to procure for employees opportunities to work for an employer and includes an agent of such a person, including any person who uses ‘artificial intelligence’ or ‘AI’ within the meaning set forth in 15 U.S.C. 9401(3) (emphasis added to highlight my proposed amendment).

By adding a mere sixteen words to Title VII, Congress would significantly expand the scope of this discrimination law to address discriminatory hiring associated with AI technologies.

B. Premises for This Article

Before proceeding further, let us consider two premises for this Article: (1) bigotry directed at workers is widely disseminated, and (2) this bias can be purged neither from our history nor from our everyday discourse.

While explaining the root causes of bias against non-white workers is beyond my expertise, I can summarize its most striking effects. Racism is so deeply rooted in American thought that startling examples appear in the published thinking of respected American leaders. Benjamin Franklin, an editor of the Declaration of Independence, which stated that “all men are created equal,” published a polemical tract asserting that “tawny” and “black” people were better suited to their native continents than to America. While his essay avoided a direct reference to slavery, he surely referred to these newly arrived people in his midst. Abraham Lincoln, author of the Emancipation Proclamation, likewise believed that the white and Black races should never intermingle.

Sexism, nativism, and colorism have also been part of the American work experience. Male abolitionists initially supported the Philadelphia Female Anti-Slavery Society, but when the women's group sought credentials at an anti-slavery convention in London, their male counterparts refused to seat them--a rebuff that led to the founding of the equal rights movement for women.

White union members vilified Chinese immigrant workers in the late 1800s. Samuel Gompers, the legendary president of the American Federation of Labor, crudely stereotyped these immigrants. A California official declared Hindu immigrants unassimilable in American society. A senator portrayed Japanese immigrants as “evil.”

The workplace continues to treat workers and applicants with bias. Applicants and employees face discrimination when they are transgender or gay, fail to conform to gender stereotypes, wear dreadlocks, or have foreign names. Women and men are sexually assaulted at work. Immigrant workers are assaulted out of hatred for them.

In short, Americans have never forgotten or unlearned how to discriminate by race, color, and gender. AI has the capacity to learn these biases, from the worst to the most subtle. These premises underscore the need to extend Title VII's proscriptions against discrimination to entities whose AI technologies enable employers to discriminate.

C. Organization of This Article

In Part II.A, I examine how artificial intelligence affects work. Part II.B examines data science experiments, AI hiring platforms, and recent EEOC concerns about AI discrimination. This Part is titled “Artificial Intelligence and Hiring: The Possibility of Discrimination” because the evidence that AI technologies cause hiring discrimination is neither pervasive nor convincing. However, there is enough evidence to justify proactive legislation to protect against the proliferation of AI-enabled hiring discrimination.

In Part III, I explore two different approaches to addressing AI discriminatory hiring practices. Part III.A surveys state privacy laws that relate either to AI video interviewing or, more broadly, to the use of biometric information without informed consent from the individual whose data are being captured. Part III.B takes a different approach by exploring Title VII's long history of holding employers, employment agencies, and agents liable for discriminatory job ads and hiring practices. Part III.C delves into lawsuits related to employment and AI technologies. While important, these legal actions have been isolated. This fact does not undermine my thesis, which proposes a legislative amendment to Title VII--but it underscores my assertion in Part II.B that AI discrimination in hiring is a possibility, not a phenomenon that is broadly evident.

My Article concludes in Part IV with a restatement of my legislative proposal. I justify my idea by comparing it to the 1991 Civil Rights Act. In two brief but comprehensive passages, this legislation codified Griggs v. Duke Power Co. and its landmark ruling relating to Title VII disparate impact theory. The brevity of this legislation, and its concern for deterring facially neutral employment practices with discriminatory impact, have a close connection to AI hiring today.

 

[. . .]

 


I elaborate on my proposed amendments by framing a different context for evaluating them. Having made legislative arguments based on data science research, state privacy laws, and Title VII caselaw, I now ask: How do my proposed amendments compare, if at all, to previous Title VII amendments? These proposals resemble the 1991 Civil Rights Act's amendments to Title VII. That law provided a simple but powerful amendment that codified Griggs v. Duke Power Co. Its main purpose was to ensure that disparate impact theory would remain a viable enforcement tool under Title VII.

Now, more than thirty years after the 1991 amendment, AI hiring discrimination threatens individuals in groups protected under Title VII with discriminatory impacts. My proposals are not as conceptually fundamental to Title VII enforcement as Griggs--they are more technical, and therefore narrower. But the widespread adoption of AI hiring technologies threatens to erode disparate impact theory because these technologies are often supplied by third-party providers that stand outside the employment relationship.

Congress enacted the 1991 amendments out of concern that the Supreme Court had eviscerated disparate impact theory in a 1989 decision, Wards Cove Packing Co. v. Atonio. Historical context explains the gravity of this concern. Congress enacted the 1964 Civil Rights Act to end racial segregation. Title VII prohibited intentional discrimination in employment. But it also proscribed employment practices “to limit, segregate, or classify its membership or applicants for membership, or to classify or fail or refuse to refer for employment any individual, in any way which would deprive or tend to deprive any individual of employment opportunities” in a manner that adversely affects an individual because of “race, color, religion, sex, or national origin.”

With this language, Congress went beyond prohibiting overt discrimination, aiming to eradicate passive practices that perpetuated racial segregation. Griggs presented a case of vestigial Jim Crow employment practices. Prior to the 1964 Civil Rights Act, Duke Power Co. employed Black workers only in the lowest labor classification at its Dan River plant in North Carolina. No Black employee could work in any of the four higher labor classifications.

However, starting on the first day that Title VII went into effect, the company abolished these barriers. In their place, the company implemented educational and aptitude-testing requirements that left thirteen of the plant's fourteen Black employees trapped in the lowest labor classification. While the Supreme Court in Griggs avoided ruling on whether the employer had a subjective intent to discriminate on the basis of race, it ruled that such broad employment standards cannot stand when they “operate as ‘built-in headwinds’ for minority groups and are unrelated to measuring job capability.” More generally, Griggs stated: “The Act proscribes not only overt discrimination but also practices that are fair in form, but discriminatory in operation. The touchstone is business necessity. If an employment practice which operates to exclude Negroes cannot be shown to be related to job performance, the practice is prohibited.”
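The “built-in headwinds” that concerned the Griggs Court are what modern disparate impact analysis quantifies. As an illustration beyond the Article's text, the EEOC's Uniform Guidelines on Employee Selection Procedures (29 C.F.R. § 1607.4(D)) apply a “four-fifths” rule of thumb: a selection rate for a protected group below eighty percent of the highest group's rate is generally regarded as evidence of adverse impact. A minimal sketch, using invented applicant counts:

```python
# Hypothetical sketch of the EEOC "four-fifths" adverse impact screen
# (29 C.F.R. 1607.4(D)). All applicant counts below are invented.

def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of applicants who pass the screen."""
    return hired / applicants

# Invented counts for a facially neutral test, Griggs-style:
rate_white = selection_rate(hired=58, applicants=100)  # 0.58
rate_black = selection_rate(hired=12, applicants=100)  # 0.12

impact_ratio = rate_black / rate_white  # ~0.21

# A ratio below 0.80 is generally treated as evidence of adverse impact,
# shifting the inquiry to whether the practice is job related.
adverse = impact_ratio < 0.80
print(f"impact ratio = {impact_ratio:.2f}; adverse impact indicated: {adverse}")
```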

Griggs remained on solid ground until the Supreme Court ruled in Wards Cove. Nonwhite workers in Alaskan salmon canneries sued the company for employment discrimination, alleging that they faced barriers similar to those confronting the Black workers in Griggs because they could not be considered for “noncannery positions.”

While the district court compared the firm's racially stratified job classifications, as the Supreme Court had done in Griggs, the Wards Cove majority ruled that this comparison was not legally appropriate. Instead, an individual's relevant labor market became the new legal comparator. The Wards Cove dissenting opinions observed that the nonwhite cannery workers were unable to advance beyond the lowest ranks of the firm's workforce because, under the majority's rule, they could not effectively challenge managerial job qualifications that had a disparate impact while serving no business justification.

The 1991 Civil Rights Act addressed this in two simple ways. In its Findings, Congress stated: “the decision of the Supreme Court in Wards Cove Packing Co. v. Atonio, 490 U.S. 642 (1989), has weakened the scope and effectiveness of Federal civil rights protections.” Next, the law stated in its Purposes: “The purposes of this Act are ... to codify the concepts of ‘business necessity’ and ‘job related’ enunciated by the Supreme Court in Griggs v. Duke Power Co. and in the other Supreme Court decisions prior to Wards Cove Packing Co. v. Atonio.”

As Congress did in the 1991 Civil Rights Act, I propose two simple amendments to address the discriminatory effects of AI hiring. First, Section 701(c), which defines an employment agency, should expand to include any AI-enabled software provider or hiring vendor. Second, Congress should amend Title VII to explicitly define the meaning of artificial intelligence. These two proposals should be integrated into the following amendment to Section 701(c):

The term ‘employment agency’ means any person regularly undertaking with or without compensation to procure employees for an employer or to procure for employees opportunities to work for an employer and includes an agent of such a person, including any person who uses ‘artificial intelligence’ or ‘AI’ within the meaning set forth in 15 U.S.C. 9401(3) (current language is underlined, and emphasis is added to highlight my proposed amendment).

While my analysis has focused on language in Title VII, the 1991 Civil Rights Act, state privacy laws, and relevant caselaw, I conclude with a depiction of a sample applicant for a caregiver position as evaluated by Highmatch, a provider of an AI screening technology. On the surface, the graphic illustrates why this technology is useful and convenient. It offers a clear rating; it evaluates the applicant on seemingly relevant dimensions; it does not show a photo (which would allow a person's appearance to bias a hiring decision); and its recommendations are cautiously advisory rather than overly prescriptive.

But on closer inspection, this aura of rationality and neutrality is suspect. Adding up the rating scores, the applicant's total is 367, summed across five job-relevant dimensions. Dividing that total by five yields a score of 73.4. If, on the other hand, the zero score for “patient care approach” is dropped from the calculation, the resulting average is 91.75. If the two averages are themselves averaged to produce a blended figure, the result is 82.575.
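A minimal sketch of that arithmetic, using only the figures stated above (the individual dimension scores appear in the graphic, which is not reproduced here):

```python
# Reproducing the candidate averages discussed above, using only the
# totals stated in the text; individual dimension scores are not shown.

total = 367           # sum of the five dimension scores on the dashboard
dimensions = 5
zero_dimensions = 1   # "patient care approach" scored zero

mean_all = total / dimensions                          # 73.4
mean_nonzero = total / (dimensions - zero_dimensions)  # 91.75
blended = (mean_all + mean_nonzero) / 2                # 82.575

print(mean_all, mean_nonzero, blended)
# None of these figures matches Highmatch's reported rating of 86.
```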

The question, therefore, is how this technology computes a rating of eighty-six that has no obvious connection to the data on the dashboard. One possibility is that Highmatch assigns unequal weights to these five factors. In the Deyerler complaint, for example, one allegation states: “According to Hirevue facial expressions can make up twenty-nine percent of a candidate's employability score.” Another possibility is that Highmatch uses additional factors, albeit elements with less impact on the overall score.
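To see how undisclosed weights could produce a rating in this range from the same dashboard, consider a sketch with hypothetical dimension scores (chosen to sum to 367) and hypothetical weights; Highmatch discloses neither:

```python
# Hypothetical illustration: a weighted average can diverge sharply from
# a simple average of the same scores. The dimension scores are invented
# (they sum to 367, matching the dashboard total), and the weights are
# invented as well, loosely echoing the Deyerler allegation that a single
# factor can carry twenty-nine percent of a score.

scores = [95, 92, 90, 90, 0]               # five job-relevant dimensions
weights = [0.29, 0.25, 0.22, 0.19, 0.05]   # sum to 1.0

weighted = sum(s * w for s, w in zip(scores, weights))
simple = sum(scores) / len(scores)

print(f"simple average:   {simple:.2f}")    # 73.40
print(f"weighted average: {weighted:.2f}")  # 87.45 -- near the reported 86
```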

An employer who relies on this technology has no obvious way to know how a score is calculated, but could be favorably influenced by what appears to be an accurate Highmatch predictor of good versus marginal applicants. Then there is the potential problem of construct validity. The EEOC specifically addresses this concern, stating in part: “Evidence of the validity of a test or other selection procedure by a criterion-related validity study should consist of empirical data demonstrating that the selection procedure is predictive of or significantly correlated with important elements of job performance.” Nothing in Highmatch's prototype report demonstrates compliance with, or awareness of, this Title VII regulation.
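In empirical terms, the EEOC's criterion-related standard calls for evidence along the lines of the following sketch: a meaningful correlation between screening scores and an actual measure of job performance. All data here are invented for illustration:

```python
# Hypothetical sketch of a criterion-related validity check: do the
# vendor's screening scores predict job performance? Data are invented.

from statistics import correlation  # Python 3.10+

screen_scores = [86, 74, 91, 62, 79, 88, 70, 95, 67, 81]          # vendor ratings
performance = [3.1, 3.4, 2.9, 3.0, 3.2, 3.3, 2.8, 3.1, 3.5, 3.0]  # supervisor ratings

r = correlation(screen_scores, performance)
print(f"Pearson r = {r:.2f}")
# The EEOC language quoted above demands empirical evidence that scores
# are "predictive of or significantly correlated with" job performance;
# an r near zero, as with these invented numbers, would fail that test.
```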

My critique of Highmatch is not proof of any problem. However, it illustrates the main problem that my Article addresses: AI hiring platforms appear neutral, rational, and data-driven, but they operate in ways that are opaque and possibly misleading. Despite my several attempts to break down this prototypical applicant's score, there is no clear explanation of how Highmatch derives it.

Laws that focus on AI hiring technology--for example, Illinois's Artificial Intelligence Video Interview Act and Biometric Information Privacy Act--are useful for protecting personal data but fail to regulate a technology's internal algorithms. That task may be too daunting for, and outside the competence of, legislatures concerned about AI hiring discrimination.

But the amendments I propose for Title VII offer a more effective approach for addressing intentional discrimination and the disparate impact of job ads that social media platforms disseminate. While my proposals are narrow in scope and sparse in words, they recognize the fundamental reality that online hiring platforms function as modern-day employment agencies that procure applicants for employers.


 

Michael LeRoy is the LER Alumni Professor of Labor and Employment Relations and affiliated faculty of the College of Law at the University of Illinois, Urbana-Champaign.