The use of artificial intelligence in hiring and recruiting has grown steadily and is now ubiquitous. Reports indicate that approximately 90% of employers use AI to review and evaluate resumes, and more than 30% use AI to automate candidate searches and customize job postings.
The legal landscape governing AI in hiring, however, has been far less stable—much like many other areas of employment law in recent years. Following what has become a well-established trend, the past year has brought federal deregulation and state efforts to fill the void. This article provides an overview of the new landscape and practical guidance for employers looking to stay ahead of the curve.
The federal retreat
Under the Biden administration, the Equal Employment Opportunity Commission announced in 2021 an “Initiative on Artificial Intelligence and Algorithmic Fairness,” promising to scrutinize the impact and fairness of AI and algorithmic decision-making in employment decisions. EEOC investigations and enforcement actions followed. President Biden signed an executive order establishing a framework for responsible AI development, and the EEOC issued guidance for employers on AI use, focusing on the potential disparate impact AI may have on protected groups.
That approach shifted dramatically on April 23, 2025, when President Trump signed an executive order disavowing federal reliance on disparate impact liability “in all contexts to the maximum degree possible.” Executive Order 14281, “Restoring Equality of Opportunity and Meritocracy,” directs federal agencies like the EEOC to deprioritize enforcement of statutes and regulations creating disparate impact liability. The Biden-era EEOC guidance for employers using AI has since been withdrawn.
The administration went further on December 11, 2025, when President Trump signed an executive order establishing a federal policy to limit AI regulation while prioritizing “the United States’ global AI dominance.” Executive Order 14365, “Ensuring a National Policy Framework for Artificial Intelligence,” announces the federal government’s goal of a uniform, minimally burdensome AI policy—and its intent to challenge state AI regulations that conflict with that goal.
State regulations accelerate
As federal enforcement recedes, states have moved to expand existing employee-friendly anti-discrimination laws to expressly address AI in employment decisions.
In California, the Civil Rights Council enacted regulations in October 2025 making it unlawful to use automated decision systems or selection criteria that discriminate against an applicant or employee—“or a class of applicants or employees”—based on a protected characteristic under the California Fair Employment and Housing Act.
In New Jersey, the Division on Civil Rights added a chapter to the state administrative code in December 2025 to implement the New Jersey Law Against Discrimination “as it pertains specifically to disparate impact liability.” Under these regulations, employers may face liability for “algorithmic discrimination” even when relying on third-party developers or using AI tools without discriminatory intent. The regulations prohibit the use of AI tools in recruiting, screening, hiring, and other employment practices if the AI tools have a disparate impact on applicants and employees based on their protected characteristics. The regulations provide multiple examples of potential algorithmic discrimination, including AI tools that use the company’s current employee population as a baseline for candidate searches or that screen candidates based on availability.
In Illinois, the Human Rights Act itself—not implementing regulations, as in California and New Jersey—was amended effective January 1, 2026, to codify as a civil rights violation the use of AI in changing the terms or conditions of employment if doing so subjects a person to discrimination based on a protected characteristic. The Act also prohibits using zip codes as a proxy for protected characteristics.
For now, Connecticut has taken an approach more in line with California’s and New Jersey’s, relying on existing law rather than new legislation. A February 25, 2026, memo from the state Attorney General’s Office invokes existing Connecticut laws as applicable to AI and as protective of state residents. The memo specifically cites Connecticut’s “strong antidiscrimination laws,” noting they “prohibit discrimination in a wide range of scenarios in which AI may be employed, including, but not limited to, in hiring and employment.”
Connecticut Senate Bill 435, however, directly addresses “Automated Decision Systems Protections for Employees.” The pending legislation would, among other things, impose significant disclosure and notice requirements, require bias audits by government-approved third parties, and mandate human review of automated employment-related decisions.
For employers, changes like these—particularly involving state laws with private rights of action, statutory damages, and attorneys’ fees—should command attention. Although federal preemption arguments may provide a defense, the ongoing state-federal conflict over AI regulation means heightened scrutiny and uncertainty for any company caught in the crosshairs.
Private enforcement actions continue to expand
As the regulatory landscape shifts, liability theories in private litigation are also evolving. While most lawsuits in this space allege discrimination based on a protected characteristic such as race or disability, a new lawsuit filed in January 2026 advances a different theory: background check law violations.
A nationally recognized plaintiffs’ employment class action firm filed a putative class action against a technology company focused on AI recruiting and “talent intelligence” earlier this year. The lawsuit alleges the defendant assembles public information about candidates from across the web to create a proprietary database, then sells prospective employers reports intended to help them evaluate candidates—in effect creating and selling “consumer reports” without complying with the laws designed to regulate them. It alleges violations of the federal Fair Credit Reporting Act, the California Investigative Consumer Reporting Agencies Act, and California’s Unfair Competition Law. This lawsuit represents an entirely new theory for challenging pre-employment AI tools and serves as another reminder that employers should take action to mitigate AI hiring risks.
Assessing and mitigating AI hiring risks
Forward-thinking companies will treat last year’s federal deregulation not as a green light for casual AI deployment, but as a mandate to implement or strengthen governance frameworks, bias mitigation measures, and compliance infrastructure. The following steps can help reduce risk:
- Conduct vendor due diligence. Organizations that give legal a seat at the table before AI tools are procured—not after litigation is threatened—will come out ahead. HR and legal should work closely with procurement to ensure contracts with AI vendors include representations regarding bias testing and methodologies, indemnification provisions, and audit rights.
- Build a multi-jurisdictional compliance framework. HR leadership and legal should map current AI tool usage against existing regulations, identifying compliance obligations and addressing gaps. This is not a one-time project; it requires regular review and updates as the regulatory landscape evolves.
- Consider implementing bias audit programs. Even absent legal mandates, conducting proactive bias audits can provide valuable compliance evidence and litigation defenses. HR, legal, and AI vendors should coordinate to establish regular audit cadences and maintain appropriate documentation. (A simple illustration of the arithmetic underlying such audits appears after this list.)
- Engage leadership on risks. The potential for legal exposure and reputational harm makes AI in hiring and recruitment a board-level issue. Ensure appropriate stakeholders understand the liability landscape and are aligned on company risk tolerance and compliance investment.
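To make the bias audit point concrete: one widely used starting point for adverse impact analysis is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures, under which a group’s selection rate below 80% of the highest group’s rate is generally treated as initial evidence of adverse impact. The Python sketch below illustrates that arithmetic only; the group names and counts are hypothetical, and a real audit would require statistically rigorous methods designed with counsel.

```python
# Minimal adverse impact screen based on the four-fifths rule:
# a group's selection rate below 80% of the highest group's rate
# is commonly treated as initial evidence of adverse impact.
# All group names and counts here are hypothetical placeholders.

from typing import Dict, Tuple


def selection_rates(outcomes: Dict[str, Tuple[int, int]]) -> Dict[str, float]:
    """Map each group to its selection rate (selected / applicants)."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}


def four_fifths_check(outcomes: Dict[str, Tuple[int, int]]) -> Dict[str, float]:
    """Return each group's impact ratio relative to the highest-rate group.

    Ratios below 0.8 flag potential adverse impact warranting closer review.
    """
    rates = selection_rates(outcomes)
    benchmark = max(rates.values())
    return {group: rate / benchmark for group, rate in rates.items()}


if __name__ == "__main__":
    # Hypothetical screening results: (candidates advanced, candidates screened).
    audit_data = {"group_a": (48, 120), "group_b": (30, 110)}
    for group, ratio in four_fifths_check(audit_data).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Even a simple screen like this, run at each stage of the hiring funnel on a regular cadence, can help generate the kind of contemporaneous documentation that supports a compliance defense.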
Conclusion
Federal deregulation should not lull companies into complacency regarding AI in recruitment and hiring. State regulation and private litigation continue to evolve. The organizations best positioned to navigate this dynamic environment will be those that invest in addressing it proactively—treating risk mitigation not as a burden, but as a strategic priority.