In this episode, host Megan Monson talks with Amy C. Schwind from Lowenstein’s Executive Compensation, Employment & Benefits practice group about AI’s growing role in human resources processes and employment decisions. They discuss what is permitted under applicable laws, potential discriminatory impact, and recommendations for employers that are using AI for employment decisions.
Speakers:
Megan Monson, Partner, Executive Compensation and Employee Benefits
Amy C. Schwind, Counsel, Employment
Megan Monson: Welcome to the Lowenstein Sandler Podcast Series. Before we begin, please take a moment to subscribe to our podcast series at lowenstein.com/podcasts or find us on Amazon Music, Apple Podcasts, Audible, iHeartRadio, Spotify, SoundCloud or YouTube. Now, let’s take a listen.
Welcome to the latest episode of Just Compensation. I’m Megan Monson, a partner in Lowenstein Sandler’s Executive Compensation, Employment & Benefits Practice Group, and I’m joined by one of my colleagues today, Amy Schwind, who is counsel in the same practice group.
Amy Schwind: Hi. It’s a pleasure to be here today.
Megan Monson: Today’s discussion will focus on artificial intelligence in the workplace. It is becoming increasingly common for employers to use AI tools in their human resource processes, whether it’s in recruiting, vetting, or hiring decisions. While many organizations see the value and potential of AI to help enhance HR processes, there are aspects to consider from a legal perspective. In this podcast episode, we’ll explore best practices and legal considerations for using AI in the workplace, particularly as a tool for recruiting and other employment decisions. As always, this is not intended to be an exhaustive discussion, so if you have questions related to particular circumstances in your workforce or regarding specific legal issues, we encourage you to consult with your legal counsel. Jumping right in, the topic of AI is everywhere in the media today. What are we talking about, big picture, when we talk about using AI as a tool with respect to HR and employment decisions?
Amy Schwind: So, in the employment context, using AI has typically meant that the tool developer relies partly on the computer’s own analysis of data to determine which criteria to use when making employment decisions. Employers may rely on different types of software that incorporate algorithmic decision-making at several stages of the employment process. For example, there are resume scanners that prioritize applications using certain keywords. There’s employee monitoring software that rates employees on the basis of their keystrokes or other factors. There are virtual assistants or chatbots that ask job candidates about their qualifications and reject those who don’t meet predefined requirements. We’re also seeing emerging video interviewing software that evaluates candidates based on their facial expressions and speech patterns. Another example would be testing software that provides job fit scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived cultural fit based on their performance on a game or a more traditional test.
Megan Monson: So, it sounds like, similar to a lot of other technology, there are a lot of benefits to it, but what are some risks associated with using AI in the employment context?
Amy Schwind: Sure. An overarching risk is that biased algorithms could create disparate impact on certain groups of candidates or employees, even without an employer’s intent to discriminate. At the end of the day, AI systems are only as unbiased as the data that they’re trained on, so if historical data used to train AI algorithms reflects patterns of discrimination, the AI system may replicate or amplify these biases in its decision-making process. For example, if an AI system is trained on data from a company that has historically favored younger candidates for entry-level roles, the system may inadvertently favor younger candidates over older when assessing future candidates.
Or, if the AI system is trained on data from a company that has historically favored males for leadership positions, the system may inadvertently favor males when assessing those candidates. This presents the potential for conflict with federal, state, and local anti-discrimination laws, like Title VII, the Age Discrimination in Employment Act, the Americans with Disabilities Act, and also state and local human rights laws. There are also issues presented if the AI tool is screening out candidates based on data, such as credit checks or background reports. Employers need to ensure compliance with the Fair Credit Reporting Act and also state and local criminal history check laws.
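To make the disparate impact idea concrete, here is a minimal sketch, with invented numbers, of the “four-fifths” rule of thumb that the EEOC’s Title VII guidance references for comparing selection rates across groups. It is an illustration only, not a compliance tool: the rule is a heuristic, not a legal bright line, and real analyses should involve counsel.

```python
# Hypothetical illustration of the EEOC's "four-fifths" rule of thumb: a
# group's selection rate below 80% of the highest group's rate can signal
# potential adverse impact. All numbers are invented.

groups = {
    "under_40": {"applicants": 200, "advanced": 90},
    "40_plus": {"applicants": 150, "advanced": 40},
}

# Selection rate = number advanced / number of applicants, per group.
rates = {name: g["advanced"] / g["applicants"] for name, g in groups.items()}
highest = max(rates.values())

for name, rate in rates.items():
    ratio = rate / highest  # compare each group to the most-selected group
    flag = "potential adverse impact" if ratio < 0.8 else "within four-fifths"
    print(f"{name}: selection rate {rate:.2%}, ratio {ratio:.2f} -> {flag}")
```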
Megan Monson: One jurisdiction that was at the forefront of creating legislation in this regard is New York City, not surprisingly. Can you discuss NYC Local Law 144 and how it impacts employers?
Amy Schwind: Enforcement of this ordinance began on July 5th, 2023, and I do believe there has been actual enforcement. The ordinance prohibits employers and employment agencies from using an automated employment decision tool, what is termed an AEDT, to make an employment decision unless the employer conducts a bias audit on the tool, publishes an audit summary, and provides certain notice. An AEDT is defined as any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision-making for making employment decisions that impact natural persons, so that’s the technical definition.
The focus of the ordinance is on using an AEDT to screen candidates for hire or employees for promotion by determining whether they should be selected or advanced in the process. What’s not covered generally are compensation, retention, and termination decisions. Also, the ordinance only covers candidates who have actually applied for a specific job. It doesn’t apply to tools used to identify potential candidates who have not yet applied for a position. Another thing to note is that the law applies to employers and employment agencies that use an AEDT “in the city,” and that has a specific meaning depending on where the job is located; also, if the job is remote, it might be covered if the location is associated with a New York City office.
Megan Monson: So, if you are in New York City and want to use an AEDT tool, you mentioned needing a bias audit. What is that?
Amy Schwind: A bias audit means an impartial evaluation by an independent auditor. The bias audit and data requirements are very specific. The bias audit will calculate the impact of the AEDT on sex categories, race/ethnicity categories, and intersectional categories. Again, it’s really complicated, but it has to be done by an independent auditor, so they know the requirements. An employer can’t use an AEDT if more than one year has passed since the bias audit. There are then specific requirements about publication of the bias audit on the employer’s website, how long it has to be posted, and also required notice to candidates and employees before the AEDT is used.
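For a sense of the arithmetic at the core of such an audit, here is a minimal sketch, with hypothetical counts, of the selection rate and impact ratio calculations Local Law 144 contemplates, including an intersectional breakdown. An actual bias audit must be performed by an independent auditor on real historical data and published per the law’s requirements.

```python
# Minimal sketch of the impact-ratio math behind an NYC Local Law 144 bias
# audit for a selection-type AEDT. Counts are hypothetical; the law requires
# an independent auditor and specific publication of the results.

# Applicants and selections broken out by sex and race/ethnicity, which
# also yields the intersectional categories the audit must cover.
data = {
    ("Male", "White"): {"applicants": 220, "selected": 80},
    ("Male", "Black"): {"applicants": 180, "selected": 50},
    ("Female", "White"): {"applicants": 200, "selected": 55},
    ("Female", "Black"): {"applicants": 160, "selected": 35},
}

# Selection rate per category, then impact ratio = category rate divided by
# the rate of the most-selected category.
rates = {cat: d["selected"] / d["applicants"] for cat, d in data.items()}
best = max(rates.values())

for cat, rate in sorted(rates.items()):
    print(f"{'/'.join(cat)}: selection rate {rate:.2%}, impact ratio {rate / best:.2f}")
```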
Megan Monson: Have other jurisdictions started to follow suit with this type of AI legislation?
Amy Schwind: Yeah. A lot of states have introduced bills relating to AI in the workplace, and others have already passed certain measures. Illinois, for example, has what’s called the Artificial Intelligence Video Interview Act. So, before AI is used to analyze a video interview, employers have to inform candidates that AI is being used to assess their suitability, how the AI works, and which characteristics it will be used to evaluate, and the employer has to obtain consent from the candidate prior to the interview.
Also, effective January 1st, 2026, Illinois employers will be required to notify employees when they use AI for employment decisions, which include recruitment, hiring, promotion, renewal of employment, selection for training, discharge, discipline, tenure, or the terms, privileges, and conditions of employment. So that is pretty broad. Maryland also has a similar interviewing law: employers need consent to use facial recognition services during pre-employment interviews. In Colorado, effective February 1st, 2026, to protect against what the law calls algorithmic discrimination, employers and tool developers have certain obligations when AI gets involved in the decision-making process affecting personnel. There are also many proposed bills in places like New Jersey, New York, California, Maryland, and Massachusetts, to name a few.
Megan Monson: Has there been any action on the agency front to address the use of AI in employment?
Amy Schwind: There has. In 2021, the EEOC launched an initiative to examine the use of AI. Then, in May 2022, they published their first technical assistance guidance, dealing with AI and the Americans with Disabilities Act. They followed that in May 2023 with an additional technical assistance guidance document focusing on Title VII and AI. In September 2023, they announced the settlement of the agency’s first lawsuit involving the alleged discriminatory use of AI in the workplace. The EEOC had filed a complaint in the Eastern District of New York against a company that hires remote English tutors for students in China. The EEOC alleged that the company violated the Age Discrimination in Employment Act by implementing a software hiring program that discriminated against older applicants because of their age by automatically rejecting female applicants aged 55 or older and male applicants aged 60 or older.
This was allegedly discovered when an applicant submitted two applications that were identical in all but birthdate. In connection with the settlement, the company agreed to pay $365,000 to a group of applicants. The Department of Labor has also focused on AI in the context of the FLSA. Reliance on automated timekeeping and monitoring systems without proper human oversight can, according to them, create potential compliance challenges under the FLSA. Also, in October 2024, the US Department of Labor published non-binding “Artificial Intelligence and Worker Well-Being” principles and best practices for developers and employers. In October 2022, the National Labor Relations Board General Counsel also issued a memorandum warning employers about the use of electronic surveillance, automated management technologies, and the potential for infringement on employees’ protected activity.
Megan Monson: Do you think we’re likely to see changes in federal agency activity with the change in presidential administration?
Amy Schwind: I do. In October 2023, President Biden issued an executive order calling for a coordinated US government approach to the development and use of AI; with the new administration, it is likely that activity with respect to AI will be scaled back in certain ways. Following the change in administration, the composition of the agencies will change, and it is likely that the subjects they focus on will also change.
Megan Monson: Bringing it all back together, what are some things to think about in terms of workplace AI policies?
Amy Schwind: Yeah, so a policy can serve as a guideline for an organization’s development, use, and monitoring of AI in the workplace. It’s definitely something to think about, particularly for organizations that are implementing AI. A policy can address organization-provided AI products and the use of any third-party or publicly available tools, like ChatGPT, which is one of the most commonly known, as well as Bard, Bing, and Midjourney. Also, a policy can remind employees that they are still bound by any existing confidentiality obligations. If you’re an employer in the legal field with attorneys, you’ll want to consider how this interacts with confidential client information. Employers should also consider the potential for a data breach and protections against it, the idea being that information provided to third-party AI platforms could be leaked. The policy can also address the human aspect: information received from AI generally still needs to be reviewed by a human, whether it’s a human resources professional or otherwise. And it can address accuracy, meaning that results should still be checked for accuracy.
There are data, privacy, and security considerations. The policy can address data collection, storage, and sharing. If AI tools will be used to make or help make employment decisions, there should be transparency, so a policy can address that as well. It can also address a commitment to adherence to anti-discrimination standards. There are intellectual property aspects to consider, too. Employers should think about IP protection of work product created by AI and also about avoiding infringement of third-party intellectual property rights. A policy should also include a carve-out for behavior protected by the National Labor Relations Act. It can, and should, address consequences for violations. In implementing one of these types of policies, an employer should definitely consider input from various stakeholders, like legal, human resources, IT, and compliance and regulatory departments, if they exist. The employer can also consider incorporating guidelines for the selection of AI vendors. It’s really important that systems are not biased and that they can be audited. The policy can also address reasonable accommodations in recruiting, hiring, and other employment contexts with the use of AI.
Megan Monson: So, it sounds like really adopting a policy that governs use of AI in the workplace can be extremely helpful for employers who are going to utilize AI so that they’ve actually thought through all of these various aspects.
Amy Schwind: It can, for sure, and it should definitely be tailored to the specific employer.
Megan Monson: Well, thank you so much, Amy. This was a really useful discussion, highlighting some legal considerations and developments as the use of AI to make employment decisions becomes more commonplace. We encourage you to consult with counsel on specific questions regarding AI use in the employment context or AI workplace policies. Thank you for joining us today. We look forward to having you back for our next episode of Just Compensation.
Thank you for listening to today’s episode. Please subscribe to our podcast series at lowenstein.com/podcasts, or find us on Amazon Music, Apple Podcasts, Audible, iHeartRadio, Spotify, SoundCloud or YouTube. The Lowenstein Sandler podcast series is presented by Lowenstein Sandler and cannot be copied or rebroadcast without consent. The information provided is intended for a general audience and is not legal advice or a substitute for the advice of counsel. Prior results do not guarantee a similar outcome. Content reflects the personal views and opinions of the participants. No attorney-client relationship is being created by this podcast, and all rights are reserved.