AI Hiring Tools Risk Discrimination, Watchdog Tells Congress – Bloomberg Law News


One company allegedly programmed its AI-powered software to reject female job candidates over age 55 and male candidates over age 60, the agency that enforces workplace anti-discrimination laws told Congress.

Another company allowed customers to systematically prevent American citizens from applying for open jobs through its website, the agency alleged.

While those cases were resolved without admitted wrongdoing, the Equal Employment Opportunity Commission told federal lawmakers they show how companies could violate antidiscrimination laws by relying on AI and algorithms.

In a report obtained by Bloomberg Government through a Freedom of Information Act request, the commission also warned US lawmakers that it will need to build up more resources to educate employers and conduct investigations.

A growing dependence on “AI to manage the workplace has the potential to outpace our nation’s capacity to ensure that they are deployed in a manner that comports with federal anti-discrimination laws,” the EEOC report said.

The six-page report, which proposes enhancing the digital and technological capacity of the agency, was sent to the House and Senate Appropriations Committees in June at lawmakers’ request.

The EEOC did not specifically request more money from Congress. But its warning comes as AI technologies and the hype around them grow exponentially, making the monitoring of these tools increasingly difficult. AI experts say that some of these tools replicate existing biases and turbocharge them.

The threat of automated discrimination could add to an overburdened EEOC. The agency has seen a long-term drop in its workforce. Its 2,173-person staffing level in fiscal 2023 is down 36% from its 1980 workforce of 3,390.

The agency was flat-funded at $455 million last year, prompting officials to warn of a possible furlough, though they managed to avoid one. The agency asked Congress for a 7% raise in fiscal 2025, but the Senate proposed another flat budget and the House proposed a nearly 8% cut.

The Biden administration issued an executive order on AI last year and federal agencies are talking about how to regulate the new technology. But Congress hasn’t passed major AI legislation, unlike the European Union, which already has a law governing AI use.

Concerned Lawmakers

The Covid-19 pandemic accelerated interest in AI’s business applications, as companies looked to manage their workforce more efficiently, the EEOC report said.

In “the post-pandemic era,” there’s an increase in AI and machine learning used for “monitoring employee activities and performance for promotion or termination, assessing productivity, or setting wages,” the report to Congress says. While the technology can make a business more efficient, it also “may violate workplace nondiscrimination laws,” it warns.

In one case settled by the EEOC and included in its June report to House and Senate appropriators, iTutorGroup Inc., an English-language tutoring company, allegedly programmed its application software in 2020 to automatically reject female applicants over 55 and male applicants over 60 for online tutoring of students in China. More than 200 qualified tutor applicants in the US weren’t hired because of their ages, the EEOC charged, and a consent decree provided for $365,000 in damages.

Another company allegedly operated a tech job search website called Dice.com that allowed customers to post positions that banned applicants of US national origin, according to the memo. The company, DHI Group Inc., agreed to compensate the complainant and “rewrite its programming to ‘scrape’ for potentially discriminatory keywords.”
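The settlement's keyword "scraping" remedy can be pictured as a simple text screen run over posting text before publication. A minimal sketch, assuming a pattern list and a `flag_posting` helper that are purely illustrative and not DHI Group's actual implementation:

```python
# Hypothetical sketch of scanning job-posting text for phrases that may
# signal national-origin discrimination. The pattern list and flagging
# logic are illustrative assumptions only; a real screen would be far
# broader and reviewed by counsel.
import re

FLAGGED_PATTERNS = [
    r"\bno\s+us\s+citizens?\b",
    r"\bcitizens?\s+only\b",
    r"\bmust\s+be\s+born\s+in\b",
    r"\bh-?1b\s+only\b",
]

def flag_posting(text: str) -> list[str]:
    """Return the patterns that match a posting, for human review."""
    lowered = text.lower()
    return [p for p in FLAGGED_PATTERNS if re.search(p, lowered)]

hits = flag_posting("Senior developer wanted. H1B only, no US citizens.")
print(hits)  # two patterns match this posting
```

Flagged postings would still need human review; keyword matching alone cannot distinguish a discriminatory restriction from, say, a quoted legal disclaimer.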

“AI is subject to existing federal laws that protect workers and consumers from civil rights violations and other harms,” EEOC spokesman Victor Chen said in an email, but the commission declined to make any official available to talk on the record.

In March, a Bloomberg investigation found that OpenAI’s GPT-3.5 favored names from some demographics more often than others when sifting through fictitious job applications.
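Audits of this kind typically hold the résumé fixed and vary only the applicant's name, so any score gap is attributable to the name alone. A minimal sketch of that setup, where `score_resume` is a stand-in for the model under test (not Bloomberg's actual methodology) and the names are illustrative:

```python
# Hypothetical name-substitution audit: score the same résumé under
# different names. `score_resume` is a placeholder for a real model
# call (e.g., an LLM ranking API); this dummy ignores the name, which
# is what a fair scorer should effectively do.
RESUME_BODY = "5 years of Python; BS in Computer Science; shipped 3 products."

def score_resume(name: str, body: str) -> float:
    """Placeholder scorer; a real audit would query the model under test."""
    return 0.75  # output should not depend on `name`

def audit(names: list[str], body: str) -> dict[str, float]:
    """Score one résumé body under each candidate name."""
    return {name: score_resume(name, body) for name in names}

scores = audit(["Emily Walsh", "Lakisha Washington", "Jamal Jones"], RESUME_BODY)
gap = max(scores.values()) - min(scores.values())
print(f"max score gap across names: {gap:.2f}")  # a nonzero gap flags name bias
```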

“We are behind the ball when it comes to understanding the complexities of rapidly growing technology; without action, we’re at considerable risk of entrenching the inequities of the past in the technology of the future,” said Rep. Yvette Clarke (D-N.Y.) in a statement. Clarke, a vice chair of the Congressional Black Caucus, has called AI bias the civil rights issue of our time.

Clarke was one of the early Capitol Hill movers on AI, introducing a bill (H.R. 5628) in 2023 to require companies to conduct impact assessments when they use artificial intelligence to make decisions regarding hiring, housing, credit, and education. It would also add Federal Trade Commission staff to enforce the law. A similar bill (S. 2892) was introduced in the Senate by Sens. Ron Wyden (D-Ore.) and Cory Booker (D-N.J.).

Expertise and Money

AI tools that help streamline hiring are sometimes trained on the past decisions of hiring managers, said Alberto Rossi, a Georgetown University professor who directs the university’s AI, Analytics, and Future of Work Initiative. So if some hiring managers—for instance—”really disliked people with glasses” based on a photo in their application, the algorithm will follow suit and “discard all those people.”

These kinds of personal biases are ordinarily scattered across individual decision-makers, he said, but AI tools can make them far more pervasive. “Now, what happens when you put it and code it up into an algorithm?” Rossi said. It raises the risk that many more companies will make systemically biased decisions, he said.
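The mechanism Rossi describes can be sketched in a few lines: a model trained by simple counting on biased past decisions learns the bias itself. The glasses feature is Rossi's hypothetical, and the data and model below are illustrative assumptions:

```python
# Minimal sketch of bias replication. In this made-up hiring history,
# managers rejected every glasses-wearer regardless of qualification,
# so a frequency-based model learns to do the same.
from collections import defaultdict

# ((wears_glasses, qualified), hired) records from past decisions
history = [
    ((True, True), False), ((True, True), False),
    ((True, False), False), ((False, True), True),
    ((False, True), True), ((False, False), False),
]

def train(data):
    """Estimate P(hired | wears_glasses) by counting past outcomes."""
    counts = defaultdict(lambda: [0, 0])  # glasses -> [hired, total]
    for (glasses, _qualified), hired in data:
        counts[glasses][0] += int(hired)
        counts[glasses][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

model = train(history)
# The model assigns zero hire probability to glasses-wearers, even
# qualified ones: the managers' past bias is now codified.
print(model)
```

A production system trained on richer features behaves the same way in principle; it just hides the learned bias behind many more variables.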

Hiring people with expertise in AI could help the agency proactively address problematic practices, said Matt Scherer, senior policy counsel for workers’ rights and technology at the Center for Democracy and Technology.

Technical experts would be able to flag problems easily, Scherer said. “And they will put things on the radar of what cases to investigate and enforce that might otherwise get overlooked because the people who have been there for a long time are not naturally kind of wired to look for those discrimination cases.”

The EEOC sent the congressionally requested report to members of the House and Senate Appropriations Commerce-Justice-Science Subcommittees in June. The leaders of those panels—Sens. Jeanne Shaheen (D-N.H.) and Jerry Moran (R-Kan.), and Reps. Hal Rogers (R-Ky.) and Matt Cartwright (D-Pa.)—didn’t respond to requests for comment.

In a letter accompanying the report to Congress, Jacinta Ma, the head of the EEOC’s Office of Communications and Legislative Affairs, wrote: “With the preponderance of algorithmic technology in employment decisions, it is critical that the Commission have the resources to keep pace with the use of these increasingly sophisticated tools and their potential impact on equal employment opportunity.”

The agency has not announced plans to build up additional resources, though it is consulting experts about how to investigate bias, the report said.

Former Republican EEOC Commissioner Keith Sonderling, who left the agency this summer after his term ended, said a strong push to mitigate hiring discrimination through AI has to start at the EEOC with the resources available—regardless of what is approved by Congress.

“The agency can’t rely on or wait for Congress to give it more money to hire technologists to investigate the technology,” Sonderling said.