AI and Workplace Discrimination: What Employers Need to Know After the EEOC and DOL Rollbacks


Recent developments in federal AI policy, including the effective rescission of Equal Employment Opportunity Commission (EEOC) and Department of Labor (DOL) guidance on AI and workplace discrimination, have raised questions for employers. The rollback follows President Trump’s recent executive order aimed at reducing government oversight of AI and promoting U.S. leadership in the field.

Despite these changes, employers must remain vigilant. The elimination of government guidance does not alter fundamental anti-discrimination laws such as Title VII of the Civil Rights Act and the Americans with Disabilities Act (ADA) or their state counterparts. Employees and job applicants can still bring lawsuits over an employer’s use of AI (and have done so), and state and local governments are stepping in with their own AI regulations.

This legal update outlines key takeaways and best practices for employers navigating the evolving landscape of AI in employment.

  1. Federal anti-discrimination laws still apply to AI tools

Although the EEOC and DOL have withdrawn their AI guidance or identified such guidance as potentially “out of date,” existing federal employment laws remain fully applicable to AI-driven hiring and workplace decision-making. Employers remain liable for:

  • Disparate impact discrimination: AI tools that disproportionately exclude or disadvantage protected groups may violate Title VII, even if the bias is unintentional.
  • Disability discrimination: AI systems that screen out candidates based on disability-related characteristics may trigger liability under the ADA. Employers are responsible for providing reasonable accommodations when utilizing AI.
  • Vendor liability: Employers can still be held responsible for AI-related discrimination, even if the tool was developed and implemented by a third-party vendor.

AI systems must be monitored regularly to ensure compliance with anti-discrimination laws. Employers utilizing AI systems should therefore conduct internal audits of those systems, require vendors to provide transparency into their AI algorithms, and carefully review any AI-liability provisions in their vendor agreements.
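
As one illustration of what such an internal audit can look like in practice, the sketch below computes selection rates and adverse impact ratios by demographic group, following the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures. The data shape, group labels, and the `audit_selection_rates` function are hypothetical; this is a minimal statistical check under stated assumptions, not a complete bias audit or legal advice.

```python
from collections import Counter

def audit_selection_rates(records):
    """Compute selection rates and adverse impact ratios by group.

    `records` is a list of (group, selected) pairs, where `selected`
    is True if the AI tool advanced the candidate. Hypothetical data
    shape; adapt to your own applicant-tracking exports.
    """
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1

    rates = {g: selected[g] / totals[g] for g in totals}
    benchmark = max(rates.values())  # highest-selected group's rate

    report = {}
    for group, rate in rates.items():
        ratio = rate / benchmark if benchmark else 0.0
        # Under the four-fifths rule, a ratio below 0.8 is commonly
        # treated as preliminary evidence of adverse impact.
        report[group] = {"rate": round(rate, 3),
                         "impact_ratio": round(ratio, 3),
                         "flag": ratio < 0.8}
    return report

# Example with made-up numbers: group B's impact ratio falls below 0.8.
records = [("A", True)] * 60 + [("A", False)] * 40 \
        + [("B", True)] * 35 + [("B", False)] * 65
for group, row in audit_selection_rates(records).items():
    print(group, row)
```

A run of this kind, documented and repeated on a regular cadence, is the sort of evidence employers may want on file if a tool's outcomes are later challenged.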

  2. The risk of increased state and local AI regulation

As federal agencies step back, state and local governments are moving forward with AI regulations. Key developments include:

  • Colorado AI Act (effective February 1, 2026): Regulates the use of AI systems that make or are a substantial factor in making “consequential decisions” in areas such as employment. The law will require AI deployers (i.e., employers using AI) to use reasonable care to avoid algorithmic discrimination, implement risk management policies, complete annual impact assessments, provide notice when certain AI systems are used, and provide employees an opportunity to appeal adverse consequential decisions resulting from AI (among other requirements).
  • Illinois HB 3773 (effective January 1, 2026): Prohibits employers from using AI in a way that results in employee discrimination on the basis of protected classes under the Illinois Human Rights Act. Requires employers to notify workers when AI is used with respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment. The law further prohibits AI systems that use zip codes as a proxy for protected classes.
  • Illinois AI Video Interview Act (went into effect January 2020): Governs the use of AI to analyze recorded video interviews of job applicants by requiring disclosure, consent, deletion rights, and government agency reporting.
  • New York City Local Law 144 (went into effect July 2023): Regulates the use of automated employment decision tools for hiring or promotion decisions by requiring employers to provide advance notice of such tools, conduct independent bias audits annually, and publish the results of such audits.

Employers should track and comply with state and local AI laws. Multi-state employers must be prepared for a patchwork of AI regulations governing hiring, promotion, and termination decisions. Less than two months into 2025, state legislatures have already introduced 27 bills that would specifically regulate the use of AI in the employment setting.

  3. AI best practices for employers moving forward

Even in the absence of federal guidance, employers should proactively address AI-related discrimination risks. Consider these best practices:

  • Conduct AI audits: Regularly evaluate AI tools for potential biases against protected classes and document results.
  • Review vendor agreements: Ensure AI vendors provide transparency regarding how their algorithms function, confirm compliance with anti-discrimination laws, and carefully scrutinize warranty, disclaimer, and indemnity provisions.
  • Implement human oversight: AI should be used as a tool, not as the sole or a substantial decision-maker, for hiring, promotions, and terminations. Human review is crucial, and internal policy controls should be implemented to ensure appropriate human involvement (see the sketch after this list).
  • Provide employee transparency: Inform employees and job candidates when AI is used in significant decision-making processes, and offer alternatives when appropriate or legally necessary as an accommodation.
  • Monitor legal developments: Stay ahead of evolving federal and state AI regulations. Proactively adapting to new legal requirements can mitigate risk.
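
To make the human-oversight point concrete, here is a minimal sketch of a policy gate that keeps AI as an input rather than the decision-maker: adverse or low-confidence recommendations are routed to a human reviewer before any action is taken. The names (`Recommendation`, `requires_human_review`) and thresholds are hypothetical illustrations, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    candidate_id: str
    action: str        # e.g., "advance" or "reject"
    confidence: float  # model score in [0, 1]

# Hypothetical policy thresholds; an employer would set these
# with counsel and document the rationale.
CONFIDENCE_FLOOR = 0.90
ADVERSE_ACTIONS = {"reject", "terminate", "demote"}

def requires_human_review(rec: Recommendation) -> bool:
    """Route adverse or low-confidence AI recommendations to a human.

    The AI output is treated as one input; no adverse action is
    taken without documented human sign-off.
    """
    return rec.action in ADVERSE_ACTIONS or rec.confidence < CONFIDENCE_FLOOR

if __name__ == "__main__":
    rec = Recommendation("cand-042", "reject", 0.97)
    if requires_human_review(rec):
        print(f"{rec.candidate_id}: queued for human review before action")
    else:
        print(f"{rec.candidate_id}: proceed, log AI involvement for notice")
```

The design choice worth noting is that the gate is one-way: the model can never finalize an adverse action on its own, which also creates the audit trail that laws like the Colorado AI Act and NYC Local Law 144 contemplate.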

What this means to you

While the rescission of federal AI guidance removes certain government-endorsed best practices, it does not alter the fundamental legal risks associated with AI in employment.

Employers must ensure that AI-driven decision-making complies with anti-discrimination laws and proactively address bias concerns.

Contact us

For more information or to discuss how these changes may impact your organization, please contact our employment law team.