Advising Employers as AI Meets DEI and Discrimination

Published in Law360 on November 21, 2024. © Copyright 2024, Portfolio Media, Inc., publisher of Law360. Reprinted here with permission.

This article provides guidance and best practices for counseling employers on key employment discrimination and diversity, equity and inclusion-related legal issues associated with using artificial intelligence tools.

Specifically, this article addresses the potential antidiscrimination and DEI-related benefits and risks of AI.

Potential Antidiscrimination and DEI-Related Benefits of AI

Many employers have increasingly turned their focus to enhancing their DEI and antidiscrimination efforts, striving to improve their ability to attract, retain, support and promote a diverse workforce. AI tools can help employers achieve their DEI-related and antidiscrimination goals in several ways.

One potential benefit of AI tools as opposed to traditional human review is that — at least in theory — they should be without bias. After all, algorithms do not have the same experiences that humans do that may cause them, explicitly or implicitly, to value candidates or employees of certain identities over others. Humans may exhibit unconscious bias toward certain individuals or classes of individuals who they feel are similar to themselves.

Employers can use AI tools to offer training, career path guides or other employee-specific tools to help employees of all identities advance their careers, improve skills or be matched with appropriate job opportunities.

Potential Discrimination and DEI-Related Risks of AI

Though AI, when used thoughtfully, can help employers improve their antidiscrimination and DEI efforts for the reasons noted above, it also presents many potential risks. Perhaps the biggest DEI-related risk associated with the use of AI tools is potential employment discrimination.

Federal and state discrimination laws apply regardless of whether an employer’s employment decisions are made solely by humans, entirely by AI, or with the assistance of AI technology. Thus, it is critical that you advise employers using AI tools that they must always maintain a solid understanding of their obligations under applicable discrimination laws, and that these obligations do not disappear when deploying an AI tool.

Although a full overview of applicable employment discrimination laws is beyond the scope of this article, there are a variety of federal and state laws that govern employers in this space. For example, under federal law:

  • Title VII of the Civil Rights Act of 1964 prohibits employment discrimination based on race, color, religion, sex — including pregnancy, sexual orientation and gender identity — or national origin.

  • The Age Discrimination in Employment Act prohibits discrimination against individuals 40 or older.

  • The Americans with Disabilities Act prohibits discrimination based on mental or physical disability.

  • The Genetic Information Nondiscrimination Act prohibits discrimination based on genetic information.

These and other discrimination laws may prohibit two types of discrimination. The first is disparate treatment, occurring when there is intentional discrimination against one individual because of their membership in a class protected by law. The second is disparate impact, occurring when a facially neutral policy or practice — including selection procedures or tests — unduly disadvantages individuals based on their membership in a protected class.
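
To make the disparate impact concept concrete, the sketch below applies the four-fifths (80 percent) rule from the EEOC's Uniform Guidelines on Employee Selection Procedures, a common screening heuristic rather than a dispositive legal test, to invented selection numbers; the group labels, counts and function names are purely illustrative.

```python
# A minimal sketch of the EEOC's four-fifths (80%) rule, a common
# heuristic for flagging potential disparate impact. It is a screening
# device, not a legal test. All group labels and counts are invented.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def four_fifths_flags(rates: dict[str, float]) -> list[str]:
    """Flag groups selected at less than 80% of the highest group's rate."""
    highest = max(rates.values())
    return [group for group, rate in rates.items() if rate < 0.8 * highest]

# Hypothetical outcomes from an AI-assisted screening tool:
rates = {
    "Group A": selection_rate(selected=60, applicants=100),  # 0.60
    "Group B": selection_rate(selected=30, applicants=100),  # 0.30
}
print(four_fifths_flags(rates))  # ['Group B'] -- 0.30 < 0.8 * 0.60
```

Failing this heuristic does not itself establish unlawful disparate impact, but it is the kind of statistical pattern that plaintiffs or regulators may point to, and that employers can monitor for proactively.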

Though the use of AI could, in theory, implicate either type of discrimination, it is this second category that creates the biggest trap for the unwary employer. Because it is often difficult to understand why or how an algorithm made a particular decision, it may be more difficult for employees or candidates to show that an employer intentionally discriminated via the algorithm. But that same point may likewise make it harder for an employer to offer a solid nondiscriminatory reason for the decision.

How AI Tools Might Increase the Risk of Discrimination

An employer acting without discriminatory intent in employing an AI tool — even an employer using such a tool with the hope of increasing diversity — can still put itself at risk of discrimination claims due to the nature of the technology and the contexts in which it may be used.

AI systems are only as good as their inputs. If an AI system is trained on biased or unrepresentative data, it runs the risk of replicating that bias. Existing data sources may reflect prior or existing bias — or even just historical underrepresentation of certain groups.

If an AI-powered tool ingests such data as its training source, it may inadvertently amplify, rather than mitigate, that bias. In other words, as an AI tool’s algorithm learns, there is a risk that the model will perpetuate the underrepresentation of certain groups or continue to favor historically represented groups.
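
A minimal, hypothetical sketch of this dynamic appears below: a naive screener that "learns" from invented historical hiring outcomes simply reproduces the skew in those outcomes. Every label and figure is made up for illustration.

```python
# A minimal, hypothetical illustration of biased training data being
# replicated: a naive screener that estimates "hireability" from past
# outcomes simply replays the skew in those outcomes. All data invented.

from collections import Counter

# Historical records of (candidate_background, was_hired). Suppose past
# hiring skewed toward "background_x" for reasons unrelated to merit.
history = [
    ("background_x", True), ("background_x", True), ("background_x", True),
    ("background_y", False), ("background_y", False), ("background_y", True),
]

hires = Counter(bg for bg, hired in history if hired)
totals = Counter(bg for bg, _ in history)
learned_score = {bg: hires[bg] / totals[bg] for bg in totals}

print(learned_score)
# {'background_x': 1.0, 'background_y': 0.333...} -- a tool ranking new
# candidates by this score has learned the bias, not job-related merit.
```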

AI systems are also only as good as the humans who create them. Thus, AI bias may also arise from programming errors, wherein a developer may place emphasis on certain factors, either mistakenly or due to their own biases.

For example, a resume-screening tool might be programmed to automatically reject candidates with employment gaps longer than a certain length. While a well-intentioned programmer might believe this would filter out unreliable candidates, it could also easily result in inadvertently filtering out individuals who had to take time out of the workforce due to medical conditions, disabilities or childbirth.
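
A hypothetical version of such a filter is sketched below; the threshold, dates and function names are invented, but they show how a facially neutral rule can mechanically exclude candidates whose gaps stem from protected circumstances.

```python
# A hypothetical employment-gap filter of the kind described above.
# The rule looks neutral, but it mechanically screens out anyone whose
# gap stems from medical leave, disability or childbirth. The threshold
# and dates are invented for illustration.

from datetime import date

MAX_GAP_DAYS = 180  # illustrative cutoff chosen by a developer

def has_long_gap(employment_periods: list[tuple[date, date]]) -> bool:
    """Return True if any gap between consecutive jobs exceeds the cutoff."""
    periods = sorted(employment_periods)  # sort by start date
    return any(
        (next_start - prev_end).days > MAX_GAP_DAYS
        for (_, prev_end), (next_start, _) in zip(periods, periods[1:])
    )

# A candidate who took roughly nine months off for a medical condition:
candidate = [
    (date(2018, 1, 1), date(2021, 3, 31)),
    (date(2022, 1, 10), date(2024, 6, 1)),
]
print(has_long_gap(candidate))  # True -- auto-rejected by the "neutral" rule
```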

Or consider an algorithm that recruits new candidates based on location. If the algorithm is programmed to favor certain ZIP codes over others, prioritizing historically white or affluent neighborhoods may lead to inadvertent discrimination.
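
The hypothetical sketch below shows how such location weighting might operate; the ZIP codes and weights are invented, but because residential patterns often correlate with race and wealth, a weighting like this can function as a proxy for protected characteristics.

```python
# A hypothetical location-weighted sourcing score. The ZIP codes and
# weights are invented; the point is that weighting by geography can
# act as a proxy for race or wealth given residential patterns.

ZIP_WEIGHTS = {"90001": 0.4, "90210": 1.0}  # developer-chosen priorities
DEFAULT_WEIGHT = 0.5

def sourcing_score(base_score: float, zip_code: str) -> float:
    """Scale an otherwise identical qualification score by location."""
    return base_score * ZIP_WEIGHTS.get(zip_code, DEFAULT_WEIGHT)

# Two equally qualified candidates are ranked very differently:
print(sourcing_score(0.9, "90210"))  # 0.9
print(sourcing_score(0.9, "90001"))  # ~0.36
```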

AI tools, especially generative AI tools, are trained on large volumes of text. This may include publicly available text from sources such as social media or government websites that may contain information about employees or candidates that employers traditionally should not consider or cannot legally ask about, such as age, sexual orientation, medical conditions or genetic information.

If an employer uses an AI-powered tool to analyze or assess its existing employee population or data, it’s vital that you counsel them to think critically about what they might learn when the analysis is complete.

For instance, a pay analysis tool might reveal that an employer’s pay for a given role is consistently below market. This may be helpful information ultimately, but it can also create employee dissatisfaction. Or such a tool might reveal inadvertent differential treatment among protected classes.
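
As a simple, hypothetical illustration, the sketch below compares invented internal salaries for a role to an invented market benchmark; once such an analysis is run, its output exists and may be discoverable, which is why counseling on how results will be handled should come first.

```python
# A minimal, hypothetical pay-analysis pass. All figures are invented.
# Note that once this runs, its output exists and may be discoverable.

from statistics import median

role_salaries = [82_000, 85_000, 79_000, 88_000]  # employer's internal data
market_median = 97_000                            # benchmark figure

ratio = median(role_salaries) / market_median
print(f"Pay for this role is {ratio:.0%} of the market median")  # ~86%
```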

Note, too, that even if AI tools used by a given employer have been vetted and tested to ensure they do not contain inadvertent bias, headlines about programs that do contain such bias, and laws and guidance designed to mitigate it, may lend credence to the perception of bias. This may make employees or candidates, particularly those in historically underrepresented groups, wary.

Many new tools are coming to market quickly, which may further the perception that they are unvetted or may be used unfairly or in a discriminatory manner. To the extent that this makes historically underrepresented groups feel further isolated or marginalized, employers looking to increase their DEI efforts should tread cautiously.

Risks of Discrimination Claims and Lawsuits

Although this is a fast-developing area of the law, you should counsel employers that the novelty of the technologies underlying such claims does not make the risk of those claims merely theoretical.

As more and more employers use more and more types of AI-assisted technology in various parts of the employment relationship, these types of lawsuits may only continue to proliferate. While all employment-related litigation presents risk, there are certain risks particular to claims associated with the use of AI, including the following.

Many Potential Plaintiffs

The use of software affects many employees quickly. Instead of just one individual hiring manager who might make unlawful decisions from time to time, or even one bad apple who intentionally makes such decisions individually, hundreds of employees may be affected at once, and repeatedly, by decisions made by an algorithm.

Class Action Risk

This also means that there is a higher risk of class action claims. Given their nature, disparate impact claims are more commonly brought as class actions.

Class action litigation presents a host of risks for employers, as everything from discovery to potential settlement becomes more complicated and thus more expensive to manage.

Discovery Challenges

It is not always clear how AI tools make their decisions; as noted, algorithms can be a black box, so attempting to unravel their decision-making can be anything but straightforward. This means that audits or records supporting or explaining how an AI tool reached its decision may be difficult — or even impossible — to collect, review, preserve or produce.

This difficulty may be compounded by the fact that such tools are often third-party programs that are licensed by an employer. Unlike an individual hiring manager whose notes or records would be more likely to be readily available to their employer in the event of a lawsuit, third-party records may not be as readily available. Thus, even if electronic records exist for a given AI platform, an employer may face difficulty in securing data relevant to a claim if it is not in the employer’s possession.

All of this may add complexity to any AI-related claim and, likewise, increase the cost of defense.

Additional Considerations

Beyond compliance with antidiscrimination laws and guidance, employers should be aware that there are other traps for the unwary in the use of AI.

As noted, AI tools aggregate huge amounts of data to make decisions. Sometimes, more data does not mean better decision-making. Some of this data should be kept confidential or may just be sensitive in nature. This could include information regarding medical conditions or treatment, employee leave, or performance or pay information that employees may not want shared, that should not be widely disseminated or that employers may not have traditionally relied on.

You should also consider the reliability of decisions made by an AI-powered tool’s algorithm. As noted above, such programs are only as good as the data they are trained on and the human engineers who create them. Thus, even if not unlawful, there exists the possibility of unfair results, results that cannot be satisfactorily unwound or explained, or even clear errors.

Surveillance

With respect to AI tools that monitor employees, you should counsel employers to consider the ethical issues raised by employee surveillance. Even if such monitoring does not violate any applicable law, which should be confirmed, it is important to consider how employees may feel about constant surveillance as a cultural matter, and how that might translate into the workplace environment.

Employers striving to create an inclusive culture that fosters belonging may find such efforts stymied by tools that make employees feel their every move is being watched and that they are not trusted.