AI Hiring Tool Faces Lawsuit Over Applicant Screening

Job Applicants Challenge AI-Based Hiring Software

A group of job seekers has filed a landmark lawsuit against Eightfold AI, a California-based company that offers artificial intelligence-driven recruitment tools. The plaintiffs allege that the software unfairly screens out candidates and that the company violates federal consumer protection law, specifically the Fair Credit Reporting Act (FCRA).

As companies increasingly rely on artificial intelligence to streamline hiring processes, concerns are growing about transparency, fairness, and accountability. The plaintiffs argue that AI recruitment tools should be held to the same standards as credit reporting agencies due to their role in determining employment eligibility.

How the AI System Works

Eightfold AI’s platform uses data sources such as LinkedIn to build extensive profiles of job seekers. According to the company, its system incorporates over one million job titles, one million skills, and information on more than one billion individuals worldwide. When a candidate applies for a position, the software assesses their qualifications and assigns them a score from one to five based on the job’s requirements.

Critics argue that this scoring system acts as an algorithmic gatekeeper, filtering out applicants before any human recruiter reviews their résumés. With no feedback provided, candidates are left in the dark about how decisions are made—or even whether mistakes occurred in their evaluation.
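
Eightfold has not published how its scores are computed, but the gatekeeping pattern critics describe can be sketched in a few lines. The Python below is purely illustrative: it assumes a score derived from the overlap between a candidate's listed skills and a job's required skills, bucketed onto the one-to-five scale, with anyone below a cutoff silently dropped before human review. Every function name, formula, and threshold here is a hypothetical stand-in, not Eightfold's actual logic.

    # Hypothetical sketch of an algorithmic hiring gatekeeper.
    # This does NOT reflect Eightfold AI's actual, unpublished scoring method.

    def score_candidate(candidate_skills: set[str], required_skills: set[str]) -> int:
        """Map skill overlap onto a 1-to-5 score (the scale described in the article)."""
        if not required_skills:
            return 5  # nothing to screen on
        overlap = len(candidate_skills & required_skills) / len(required_skills)
        # Bucket the 0.0-1.0 overlap ratio into the five bands, capped at 5.
        return min(5, 1 + int(overlap * 5))

    def screen(applicants: dict[str, set[str]], required_skills: set[str],
               cutoff: int = 3) -> list[str]:
        """Return only applicants scoring at or above the cutoff.

        Everyone below the cutoff is dropped before any recruiter looks,
        and receives no explanation -- the opacity the plaintiffs object to.
        """
        return [name for name, skills in applicants.items()
                if score_candidate(skills, required_skills) >= cutoff]

    applicants = {
        "Ada": {"python", "sql", "ml"},
        "Grace": {"cobol", "fortran"},
    }
    print(screen(applicants, required_skills={"python", "sql", "ml", "cloud"}))
    # ['Ada'] -- Grace is filtered out with no feedback

Under these assumptions, a rejected applicant never learns which requirement the model judged missing, which is precisely the feedback gap the plaintiffs cite.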

The Plaintiffs’ Perspective

Erin Kistler, one of the plaintiffs, holds a computer science degree and has decades of experience in the tech industry. She has submitted thousands of applications over the past year, yet only 0.3 percent led to further communication or an interview. Several of her applications were processed through Eightfold’s software.

“I think I deserve to know what’s being collected about me and shared with employers,” Kistler said in an interview. “They’re not giving me any feedback, so I can’t address the issues.”

The suit, filed in Contra Costa County Superior Court, seeks class-action status and aims to compel Eightfold to disclose what data it collects and how it is used. It also demands financial damages and compliance with consumer reporting laws.

David J. Walton, a Philadelphia-based attorney who advises employers on AI matters, noted that hiring software differs from credit scoring tools in several ways. However, he acknowledged that the legal landscape is still evolving. “These tools are designed to be biased—to identify a certain type of candidate—but not to discriminate unlawfully,” Walton said. “That’s a very fine line.”

The lawsuit is being pursued by law firms Outten & Golden and Towards Justice, with support from former attorneys at the Consumer Financial Protection Bureau and the Equal Employment Opportunity Commission. David Seligman, executive director at Towards Justice, emphasized that existing laws already cover these technologies. “There is no AI exemption to our laws,” he said. “These companies often violate rights under the guise of innovation.”

The FCRA, enacted in 1970, requires any organization collecting personal data for employment decisions to disclose that information to the individual and offer a way to dispute inaccuracies. It defines a “consumer report” broadly, including any data used to evaluate eligibility for employment or financial services.

This case follows a growing trend of legal scrutiny toward AI tools in hiring. In 2023, another lawsuit targeted Workday, alleging that its software discriminated against older applicants, people with disabilities, and Black candidates. A federal judge allowed that case to proceed as a collective action, potentially involving millions of job seekers.

Workday denied the allegations, stating that its AI recruiting tools are not trained to identify or use protected characteristics like race, age, or disability.

Regulatory Uncertainty

Regulatory agencies have also weighed in. In 2024, the Consumer Financial Protection Bureau issued a guidance note asserting that AI-generated hiring scores fall under the FCRA. However, under the Trump administration, the bureau reversed this stance, creating further ambiguity.

Jenny Yang, a former chair of the Equal Employment Opportunity Commission and one of the attorneys representing the plaintiffs, said the commission began examining algorithmic hiring over a decade ago. “People were getting rejected in the middle of the night and nobody knew why,” she explained. “We realized these tools were fundamentally changing how people were hired.”

Legal experts expect more challenges like this as AI continues to influence employment practices. With no clear regulatory framework, companies and applicants alike are navigating uncertain territory.

