Employment Law Update: AI Hiring Under Fire: Algorithmic Screening Enters The Chat


A major class action lawsuit filed in January 2026 is reshaping the legal landscape around AI-powered hiring tools, and notably, algorithmic bias is not the basis of the claim. In Kistler et al. v. Eightfold AI Inc., filed in California’s Contra Costa County Superior Court, the plaintiffs allege that Eightfold AI scraped personal data on over one billion workers, scored applicants on a zero-to-five scale, and discarded low-ranked candidates before any human reviewed their applications.

The lawsuit, brought by former EEOC chair Jenny R. Yang and the nonprofit Towards Justice, does not claim the algorithm was biased; it claims the algorithm existed in secret. The plaintiffs’ theory rests on the Fair Credit Reporting Act (FCRA), which mandates specific procedures, including disclosure, access, and the opportunity to dispute errors, when companies compile “consumer reports” for employment decisions. Because the FCRA theory does not require proving discriminatory outcomes, it offers a more accessible avenue of challenge. With statutory damages of $100 to $1,000 per willful violation applied to a database of a billion profiles, the financial exposure is astronomical.
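To illustrate the scale of that exposure, a back-of-the-envelope calculation (using the statutory range and the profile count alleged in the complaint, not any figure claimed as actual damages) looks like this:

```python
# Illustrative sketch of FCRA statutory-damages exposure.
# The FCRA permits $100 to $1,000 per willful violation; the
# one-billion-profile figure is the allegation in the complaint,
# not an adjudicated fact.
PROFILES = 1_000_000_000

low = 100 * PROFILES     # $100 billion at the statutory floor
high = 1_000 * PROFILES  # $1 trillion at the statutory ceiling

print(f"Potential exposure: ${low:,} to ${high:,}")
```

Even at the statutory floor, the theoretical exposure reaches into the hundreds of billions, which is why FCRA class actions exert such settlement pressure regardless of the merits.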

The Eightfold case gains further significance alongside Mobley v. Workday, in which a federal judge held that Workday acted as an “agent” of the employers using its automated screening tools, triggering direct liability under the Age Discrimination in Employment Act. Together, these cases form what commentators describe as a “pincer movement”: Workday establishes that the vendor is an agent liable for discrimination, while Eightfold frames the vendor as a consumer reporting agency subject to transparency mandates. One attacks outcomes; the other attacks process. Both suggest that AI hiring vendors may no longer shield themselves behind the argument that they provide neutral tools.

For employers, these developments intensify the AI vendor “liability squeeze.” Industry data underscores the risk: 88% of AI vendors cap their own liability, often at the amount of monthly subscription fees, while only 17% warrant regulatory compliance. An employer’s platform may scrape data from unknown sources, score candidates using opaque logic, and filter applicants before any human review, yet vendor agreements typically cap liability, disclaim compliance warranties, and restrict algorithmic audits.

Employers should take concrete steps now to close the gap between contractual protections and actual legal exposure. Vendor contracts should require transparency on data sources, independent audit rights for bias and FCRA compliance, training data indemnities, and carve-outs from standard liability caps for regulatory fines, litigation, and class-action settlements. Organizations should also establish governance infrastructure, including AI hiring oversight spanning HR, legal, IT, and compliance, pre-procurement vendor due diligence, and periodic adverse impact analyses under the EEOC’s four-fifths rule. Equally important is documentation: AI governance and use policies, impact assessments, vendor due diligence files, and human oversight and override logs. These records help establish compliance and show that an organization takes its obligations seriously.
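The four-fifths rule mentioned above is a simple ratio test: a selection procedure may indicate adverse impact when a protected group's selection rate falls below 80% of the rate for the highest-selected group. A minimal sketch of that check (the function name and the applicant counts are hypothetical, chosen only for illustration):

```python
def adverse_impact_ratio(selected_protected, total_protected,
                         selected_reference, total_reference):
    """Compare selection rates under the EEOC four-fifths rule.

    The rule flags potential adverse impact when the protected
    group's selection rate is less than 80% of the selection rate
    for the highest-selected (reference) group.
    """
    rate_protected = selected_protected / total_protected
    rate_reference = selected_reference / total_reference
    ratio = rate_protected / rate_reference
    return ratio, ratio < 0.8  # True flags potential adverse impact

# Hypothetical screening outcomes: 30 of 100 protected-group
# applicants advanced, versus 50 of 100 in the reference group.
ratio, flagged = adverse_impact_ratio(30, 100, 50, 100)
print(f"ratio = {ratio:.2f}, flagged = {flagged}")  # ratio = 0.60, flagged = True
```

A failing ratio is not proof of discrimination, but it is exactly the kind of periodic analysis a court or regulator will expect an employer to have run and documented.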

The legal environment around AI hiring tools is shifting rapidly in a direction that places increasing risk on employers. Courts are treating AI vendors as agents and consumer reporting agencies, state AI employment laws are proliferating, and the gap between contractual protection and legal exposure is widening. Employers who assume their vendor agreements insulate them from this risk may face significant liability. Organizations best positioned to weather this shift are those that can explain how their AI hiring tools work, identify what data feeds them, and demonstrate meaningful oversight.
