Opinion: Has AI fueled racist, sexist and ageist corporate hiring practices?
02/12/2026

For millions of job applicants, the biggest hurdle often isn’t a lack of talent — it’s a digital filter that is blind to talent it hasn’t been programmed to “see.”
We have entered a dangerous era in which Applicant Tracking Systems (ATS) act as digital gatekeepers whose hidden biases may silently purge candidates based on age, race, gender or disability before a human ever sees their name.
The extent to which these artificial intelligence platforms are now being used in hiring is staggering:
99% of Fortune 500 companies use AI tools and ATS to screen, filter and rank applicants.
87% of all companies use AI at some point in the recruitment process.
75% use AI to reject candidates automatically without a human ever seeing the application.
Two recent lawsuits pulled back the curtain on how AI-powered hiring tools are failing American workers and companies.
Last May, a federal court allowed a nationwide case to proceed against Workday, one of the world’s largest human resources technology platforms. The plaintiff, Derek Mobley, alleges that Workday’s AI-powered applicant screening system systematically discriminated against job applicants over age 40. Despite being highly qualified, he was rejected for more than 100 jobs he applied to through Workday. Many of the rejections arrived within minutes of submission or in the middle of the night, suggesting that a human never reviewed his applications.
He claims the AI models were trained on historical hiring data that reflects past human bias. For example, if a company historically hired young workers, the system learned that “youth” was a characteristic of a “successful” candidate and therefore disproportionately rejected older applicants. According to the lawsuit, the system estimated age through proxies such as graduation date, years of experience, or proficiency in older software.
Individuals who believe their Workday applications might have been screened out based on their age — any time after Sept. 24, 2020 — may want to join the lawsuit by completing the form at Workdaycase.com; the deadline is March 7.
Employers often think they are safe as long as they don’t “tell” the AI platform to discriminate. They are wrong.
The Equal Employment Opportunity Commission (EEOC), which enforces federal anti-discrimination statutes, said in 2023 that an employer may be held liable for unlawful discrimination even if the selected procedure was designed by an outside vendor.
The employer also may be held liable under Connecticut’s Fair Employment Practices Act (CFEPA). Connecticut is one of just 20 states that extend liability to those who “aid and abet” unlawful discrimination.
The legal battle also is moving to the consumer protection arena. In late January, a group of job applicants filed a class action lawsuit against Eightfold AI, a venture capital-backed hiring platform used by many Fortune 500 companies, alleging that Eightfold created secret profiles and ranked workers without their knowledge, consent or opportunity to correct errors before the reports were used to screen them for jobs. The complaint alleges that Eightfold scrapes data from public sources to build detailed profiles with labels such as “team player” and “introvert.”
The plaintiffs allege that these AI evaluations function as consumer reports under the Fair Credit Reporting Act (FCRA), which would mean applicants have a legal right to access and contest their “score,” just as they have a right to see and contest their credit score. Another possible avenue for challenging algorithmic bias is the Connecticut Unfair Trade Practices Act (CUTPA), one of the most powerful consumer protection laws in the country. It could be used against a software vendor that engaged in deceptive or unfair acts in trade or commerce.
Beyond the threat of litigation, employers can take steps to prevent algorithmic discrimination by conducting regular third-party bias audits, implementing “human-in-the-loop” oversight to catch biased patterns, and cleaning training data to remove hidden “proxies” for age, race, gender and disability.
Meanwhile, the Connecticut legislature must step up to support the efforts of state Sen. James Maroney (D-Milford), who has become one of the most prominent voices in the country regarding state-led AI regulation. He recognized as early as 2023 that without clear guardrails, the “automatic gatekeeper” would inevitably become a tool for discrimination.
Maroney introduced bills in the 2024 and 2025 legislative sessions that were intended to create a comprehensive framework for AI governance in areas such as hiring, housing and lending. Despite passing the state Senate with bipartisan support in both years, the bill failed to become law. Nevertheless, Maroney continues to lead a multi-state working group of legislators attempting to harmonize AI laws across state lines.
Given the Trump Administration’s “hands off” approach to AI regulation, and notwithstanding the administration’s threats to challenge state laws it deems “inconsistent” with federal policies, it is more important than ever that states act now to ensure innovation does not come at the cost of basic civil rights.
When equipped with rigorous safeguards, AI has the potential to reduce unlawful discrimination by masking identifying characteristics such as names, ZIP codes and graduation years that often trigger conscious or unconscious human prejudice or stereotypes. By shifting the focus exclusively to objective, data-driven skill sets, the system could help recruiters identify the true top candidates. That will not happen unless and until employers stop viewing hiring as merely an outsourced function they can blame when discriminatory patterns persist or emerge.
As cases such as Workday and Eightfold AI illustrate, the burden is increasingly shifting to workers to identify and challenge algorithmic bias. For now, the courtroom remains an essential avenue for ensuring “efficiency” is not used as a mask for unlawful discrimination. Understanding one’s rights in the new digital era is the first step toward holding the invisible gatekeepers accountable.
Attorney Gary Phelan is a partner at Hurwitz Sagarin & Slossberg LLC, in Milford. He is co-author of “Disability Discrimination in the Workplace” and teaches Negotiation and Disability Law at Quinnipiac University School of Law.