Top 5 Things Employers Need to Know About the EU’s Latest AI Guidance | Fisher Phillips


The European Union just provided much-needed clarity on what’s prohibited under its landmark EU AI Act, releasing guidance earlier this week on how businesses can comply with the law right as it begins to take effect across the continent. For employers using AI-driven tools in hiring, employee monitoring, and workforce management, this February 4 guidance offers a key reminder about your significant compliance responsibilities – and the steep penalties you could face for missteps. Here are the top five things you need to know about this new development.

A Quick Refresher on the EU AI Act

Before we dive into the new guidance, here’s a quick primer about the EU AI Act. Passed in March 2024, it’s the world’s first comprehensive framework regulating artificial intelligence – and it has significant implications for employers operating in the EU. It classifies AI systems by risk and imposes strict restrictions on high-risk applications (which include most employment-related activities like hiring, HR, and worker management), while requiring transparency and accountability for others. Its requirements unfold in stages through 2026, with the first deadline having just taken effect this past Sunday. Companies operating in or serving the EU must comply or risk fines of up to 7% of global revenue. You can read our full summary of the EU AI Act here.

1. New Guidelines Clarify What’s Prohibited

The EU Commission’s February 4 non-binding guidance details AI practices that are explicitly banned under the Act. These include:

  • Emotion Recognition in the Workplace: Employers cannot use AI to analyze employees’ emotions via webcams, voice analysis, or other tools.
  • AI-Powered Social Scoring: AI cannot assess employees or job candidates based on unrelated personal characteristics like socio-economic status.
  • Predictive Policing Tools: AI systems cannot be used to assess a worker’s potential risk of committing misconduct based on biometric data.
  • Dark Pattern AI: AI can’t be used to manipulate employees into actions they wouldn’t otherwise take.

2. Employers Could Be Liable for How They Use AI – Even if They Didn’t Build It

One key clarification in the guidance is that employers that deploy AI systems remain responsible for ensuring compliance – even if they didn’t develop the technology themselves. Companies cannot simply rely on AI vendors’ assurances, and instead must conduct due diligence to prevent misuse of AI in employment decisions. Businesses should follow our guide about the essential questions to ask your AI vendors before deploying new systems to minimize your legal risk.

3. Enforcement Has Just Begun, With Fines Starting in August

While the EU AI Act won’t be fully in effect until August 2026, enforcement has already begun, as the first compliance deadline just passed on February 2. The guidance reminds businesses that one of your next key deadlines arrives in August 2025 – when companies must designate compliance officers and conduct AI audits. The guidance notes that those violating the prohibited practices requirement could face fines ranging from 1.5% to 7% of their global revenue.

4. Expect More Scrutiny on AI in Hiring and Workplace Surveillance

Regulators made clear in the guidance that AI-driven hiring assessments and employee monitoring tools will be under a microscope across the EU. Employers that operate in the EU must ensure transparency in how AI is used, offer alternatives where necessary, and avoid AI systems that could disproportionately impact protected groups.

5. The EU’s Approach Could Set a Global Standard

While the EU’s AI Act is far stricter than regulations in the U.S. and other jurisdictions, its influence is expected to reach beyond Europe. We don’t expect the U.S. federal government to take any steps towards AI regulation given its new light-touch stance under the Trump administration, but states across the country might look to Europe for guidance as they contemplate regulation. And of course, multinational employers will need to harmonize compliance efforts across multiple regions, especially as other governments consider and deploy similar AI oversight.

What Employers Should Do Next

✅ Conduct an AI Audit: Identify any AI systems used in hiring, employee management, and monitoring so you can be in the best position to restrict or adjust them to comply with unfolding regulations.

✅ Update Your AI Policy: Create, update, and clearly communicate your company’s AI policy – this is one of the key steps in any AI Governance program you implement.

✅ Review AI Vendor Agreements: Ensure AI tools comply with EU regulations and include appropriate safeguards – and follow our guidance when negotiating with AI vendors before deploying their systems at your business.

✅ Train HR and Compliance Teams: Educate personnel on the risks and requirements associated with AI-driven employment decisions.

✅ Monitor Future Updates: As enforcement ramps up, additional guidance and case law will shape how these rules apply in practice. Make sure you are subscribed to Fisher Phillips’ Insight System to stay on top of the key developments.

Conclusion

We will continue to provide the most up-to-date information on AI-related developments, so make sure you are subscribed to Fisher Phillips’ Insight System. If you have questions, contact your Fisher Phillips attorney, the authors of this Insight, or any attorney in our AI, Data, and Analytics Practice Group or International Practice Group.