Excerpted From: McLean Waters, Fair Admissions, Fair Decisions, and Fair Outcomes: An Analysis of Algorithmic Bias in Education, Employment, Healthcare, and Housing, 25 North Carolina Journal of Law & Technology 657 (May 2024)

Imagine that, during the COVID-19 pandemic, a young professional was furloughed from her job. Eventually, her company reinstated the position and asked her to re-apply for the same role, for which she was well qualified. However, the candidate was required to conduct a video interview using an artificial intelligence (“AI”) platform, which scores an individual's responses to questions as well as their body language during the interview. While the candidate had all the skills necessary for this role, the AI system scored her body language so poorly that she did not receive the job. The candidate later learned that this system was trained on the faces and voices of white male applicants, leading to consistently lower scores for women and people of color.

For Anthea Mairoudhiou, a make-up artist, this hypothetical scenario was her reality. She was not rehired after HireVue, an AI-screening program, scored her body language poorly. Mairoudhiou's story is not an isolated incident, but one of many stories of individuals experiencing discrimination from AI systems. In the midst of massive adoption of AI in recent years, AI discrimination is increasing at an alarming rate, despite AI leading to improvements and advances in areas such as “speech recognition, natural language processing, translation, ... computer programming, and predictive analytics.” As developments in AI have accelerated exponentially, stakeholders have expressed concerns over how this type of technology will be used. Previous technological advances, such as email, word-processing, and better electronic databases, have all improved efficiency in the workplace by assisting with simple and routine tasks. AI, however, has the capability to replace non-routine cognitive tasks, such as information categorization or sorting information into flexible classifications, using “highly sophisticated algorithmic techniques to find patterns in data and make predictions about the future.” AI's technological advances can be both beneficial--by assisting workers with exhausting tasks--and detrimental-- by eliminating jobs or “degrading work quality.”

AI often operates in these gray areas, where using the technology has clear benefits and harms. To illustrate this idea, consider the world of employment and hiring. A recent study “found that 83% of human resources leaders rely in some form on technology in employment decision-making.” While AI tools are beneficial, they need regulations and limitations. For example, the use of AI in hiring can speed up the process and potentially eliminate bias if the algorithms are correctly crafted. On the other hand, AI tools can replace human jobs and, more importantly, the human approach to certain roles. Moreover, AI is only as good as the algorithm behind it, meaning creators could--either intentionally or unintentionally--introduce their own bias and subjectivity into the algorithm. To prevent hiring bias from appearing in AI, some legislatures are requiring companies and organizations to place limitations on how AI is used during the hiring process and on the underlying algorithms of the technology itself.

Even without ill intent, bias and discrimination often seep into AI systems in several ways. Consider what would happen if a company were to input the resumes and applications of all its highest-performing employees into an algorithm designed to find applicants who matched those profiles. If the company had historically hired mostly men instead of women, then the algorithm could take this data and create a preference for male applicants, when the reality was that women had simply been given fewer opportunities. This was the case for Amazon, which developed an AI hiring algorithm that downgraded female applicants because most of the data used to teach the algorithm came from male applicants. In addition, if all the company's highest-performing employees were older, the algorithm might disfavor younger applicants, disregarding the time it took for those high performers to achieve their status. Reliance upon historical data sets for machine learning exemplifies some of the potential issues surrounding AI today.
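The mechanism described above can be made concrete with a deliberately simplified sketch. The code below is purely illustrative (it does not depict Amazon's or any vendor's actual system): a naive screener scores candidates by how closely they "match the profile" of past top performers. Because the historical data is skewed toward men by past hiring patterns, the match score itself becomes a proxy for gender, even though gender says nothing about job performance.

```python
from collections import Counter

# Hypothetical historical data: 90% of past "top performers" are men,
# reflecting who was hired historically, not who performs best.
historical_top_performers = ["M"] * 90 + ["F"] * 10

def profile_match_score(candidate_trait: str, history: list[str]) -> float:
    """Score a candidate by the share of past top performers
    who share the candidate's trait -- a naive 'match' heuristic."""
    counts = Counter(history)
    return counts[candidate_trait] / len(history)

# The heuristic rewards the overrepresented group automatically:
male_score = profile_match_score("M", historical_top_performers)    # 0.9
female_score = profile_match_score("F", historical_top_performers)  # 0.1
```

No one programmed a preference for men; the disparity falls out of the skewed data alone, which is why the Article's proposed audits focus on training data and outcomes rather than on developer intent.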

While the world of employment illustrates some of the pros and cons of AI, it is only one of numerous industries that experience the benefits and drawbacks of using this technology. As AI usage proliferates across the country, the potential for AI to be used in a way that leads to discrimination also grows. As previously mentioned, AI algorithms force creators to make choices about which data sets to favor in order to achieve desired results. The results of these choices vary across industries and can appear to provide the best candidates for a job, determine the best applicants for housing, or analyze which patients Medicaid should cover. Regardless of the industry, AI can incorporate individual or collective bias into its algorithm, which can lead to discrimination.

This Article demonstrates how AI incorporates unacceptable discrimination. Examples of human and technological discrimination come from analogies to the recent Supreme Court decision in Students for Fair Admissions v. President & Fellows of Harvard College and through current examples of companies using AI to discriminate or AI use resulting in discrimination. Based on the overwhelming potential for discrimination in AI, this Article sets forth regulations on algorithm creation and continual audits to ensure that AI algorithms and systems do not repeat the discrimination mistakes of the past.

This Article proceeds in six parts. Part II explores AI generally and identifies potential ways that discrimination creeps into AI algorithms. Part III exemplifies how simple inputs in decision-making processes can lead to large scale discrimination without the use of technology through a review of the recent Supreme Court decision in Students for Fair Admissions v. President & Fellows of Harvard College. Part IV considers the existing federal anti-discrimination framework and discusses the ways that those laws interact with one another. Part V surveys modern-day examples of AI usage resulting in discrimination across numerous different industries. Part VI explores current legislation at both the state and federal level which could provide guidance on which solutions may be best suited for regulating AI. Finally, Part VII of this Article examines the current state of regulation surrounding AI and recommends legislation that should be introduced to properly regulate this emerging area of law.

[. . .]

As the world continues to move toward a greater reliance on technology, it is important to ask when society should continue to forge ahead and when it should slow down. Technological advancements have continued to fuel innovation and impact lives in a positive way, whether through increasing the efficiency of work, connecting individuals across the globe, or empowering better decision-making. However, as society continues to advance technologically, it is necessary to consider whether these new advancements are helping or hurting society at large.

Whether it is healthcare, employment, or education, no industry is immune from the potential pitfalls of AI and the ways that bias can creep into those systems. In light of this, the federal government must create a regulatory framework to prevent both disparate treatment and disparate impacts on protected classes. Regulating algorithm creation and requiring periodic bias audits are the proper first steps to ensuring that AI is used in a manner that prioritizes people over profit and prevents widespread harm from being embedded in systems used on a daily basis.

When Dr. Martin Luther King Jr. said, “[i]njustice anywhere is a threat to justice everywhere,” he was in no way, shape, or form referring to AI. However, when you consider the impact that a small amount of bias can have on an AI algorithm that is used by millions, Dr. King's words still ring true. A small amount of injustice, combined with a system that can transmit that injustice anywhere, is truly “a threat to justice everywhere.”

J.D. Candidate, University of North Carolina School of Law, 2025.