Trump’s Disparate Impact Blow Makes AI Bias Claims Even Tougher – Bloomberg Law News

Already scarce enforcement against AI-based workplace discrimination will decrease further as the Trump administration plans to cut back the use of disparate impact theory in bias suits.

By their nature, artificial intelligence tools muddy the questions of who makes employment decisions and how, leaving bias claims likely to fall under the disparate impact, or unintentional bias, doctrine rather than disparate treatment, employment lawyers say. Disparate impact under Title VII of the 1964 Civil Rights Act covers discrimination resulting from criteria or tests that appear neutral but disproportionately exclude people based on traits like race or gender.

The Justice Department and federal agencies’ stated intent to reduce the use of disparate impact could mean decreased risk of penalties for employers increasingly using AI to partly automate recruitment and hiring. For workers, the shift could mean an already tough-to-prevent form of bias becomes harder to police.

With AI tools, “it’s not necessarily that an employer is intending to exclude particular groups, but they’re using a software that does have that effect,” said Shelby Leighton, senior attorney at Public Justice, who represents plaintiffs in bias cases. “Disparate impact liability is a really important tool that we use to ensure that there’s not discriminatory AI tools being used.”

Attorney General Pam Bondi recently instructed Justice Department attorneys to narrow use of disparate impact, building on conservative advocates’ longstanding criticism that it unfairly lowers the bar to sue employers.

The Equal Employment Opportunity Commission removed some website references to AI bias and disparate impact claims while it reviews them in line with a Trump order that revoked Biden-era AI policies.

The now-scrubbed guidance endorsed a four-fifths rule: if a particular group’s selection rate is less than 80% of the rate for the most favored comparable group, an AI tool’s disparate impact may be present.
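As a rough illustration of that four-fifths check, here is a minimal sketch in Python; the group names and applicant counts are hypothetical, not drawn from any case or from the guidance itself:

```python
# Hypothetical illustration of the four-fifths (80%) rule described above.
# Group names and counts are invented for the example.

def selection_rate(selected, applicants):
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

groups = {
    "group_a": {"selected": 48, "applicants": 100},  # rate = 0.48
    "group_b": {"selected": 30, "applicants": 100},  # rate = 0.30
}

rates = {name: selection_rate(**g) for name, g in groups.items()}
best_rate = max(rates.values())  # the most favored group's rate

for name, rate in rates.items():
    ratio = rate / best_rate
    # A ratio under 0.8 suggests possible disparate impact under the rule.
    status = "flag" if ratio < 0.8 else "ok"
    print(f"{name}: rate={rate:.2f}, ratio={ratio:.2f}, {status}")
```

Here the second group’s ratio (0.30 / 0.48 ≈ 0.63) falls below the 0.8 threshold and would be flagged for further scrutiny.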

The EEOC declined to comment, and the DOJ didn’t respond to a request.

AI tools lack the traceable human decision-making process that bias claims, particularly disparate treatment claims, usually turn on. That leaves disparate impact analyses of hiring results as the often more viable route.

Limited transparency around AI decision-making tools “is the number one barrier to enforcement right now,” said Matthew Scherer, senior policy counsel at the Center for Democracy & Technology.

“Most workers have no idea when AI is being used in hiring decisions, or if they do know in some generic way, they have no idea what the AI system is measuring,” he said.

Employer AI

Federal action to combat AI-related bias already has been limited, with almost no concrete enforcement or regulation, despite guidance documents issued under Biden, said Alice H. Wang, an employment attorney with Littler Mendelson PC in San Francisco.

Companies’ use of AI tools in hiring “has really just exploded in the last year or two, and we honestly have not seen a commensurate uptick in litigation or agency charges from the EEOC,” she said.

With the federal government losing the little traction it had, the litigation onus may fall more on state agencies and plaintiffs’ lawyers.

Chief among the few existing suits is one in San Francisco federal court, where Derek Mobley seeks class-action status on claims that employers’ use of Workday Inc. software to evaluate job applicants yielded bias against workers based on race, age, and disability. Mobley says he applied to more than 100 jobs through Workday’s platform and was rejected each time.

The EEOC supported Mobley’s case in April 2024, urging the court to recognize Workday as an employment agency covered by Title VII. It hasn’t publicly updated that position, although its brief referenced its own prior AI guidance, now under review.

A federal judge allowed Mobley’s disparate impact claims to advance in July 2024, but granted Workday’s motion to dismiss the disparate treatment claims.

“We disagreed, but the court found you can’t really point to one decisionmaker or a group of decisionmakers that are making these decisions. And that’s an important component of intent,” said Roderick T. Cooks, a Birmingham, Ala.-based attorney representing Mobley.

Workday denies the allegations, a spokesperson said. The company has argued in court that as a software provider it shouldn’t be held liable for alleged discrimination.

“We’re the canary in the coal mine on this,” Cooks said.

New Tests

A new test case came March 19, when a deaf, Indigenous woman filed bias charges against her employer, Intuit Inc., and automated video interview provider HireVue Inc., whose technology she said disadvantaged her when she applied for a promotion.

She filed charges with the EEOC and the Colorado Civil Rights Division, hoping for a robust investigation from at least the state agency, if not both, said Leighton, one of her attorneys. The charges allege that Intuit’s use of HireVue had a disparate impact based on race and disability.

Depending on the investigation’s outcome, the case could also include disparate treatment claims, Leighton said.

HireVue and Intuit denied the allegations.

The ACLU also filed claims of discriminatory impact from Aon Consulting Inc.’s job candidate evaluation tools with the EEOC and the Federal Trade Commission in 2023 and 2024, saying Aon’s algorithm-based personality test and other assessments are prone to adversely impacting workers of color and those with disabilities. Aon disputed the claims last year, saying its tools follow best practices and legal guidelines to avoid bias.

While disparate impact claims are the logical fit for most AI bias cases, disparate treatment claims can’t be ruled out, Leighton said.

For example, “if you know your software is excluding Black people disproportionately and you continue using it, it sort of looks like you want to exclude Black people,” she said.

Whatever happens with federal enforcement, employers should assess AI tools for bias, as the risk of private litigation and state-agency enforcement remains, said Melanie Ronen, labor practice chair at Stradley Ronon Stevens & Young LLP in Long Beach, Calif.

“You will see people testing it,” she said. “You’re going to see claims start to percolate.”