Businesses are incorporating artificial intelligence across the human resources stack: tools that rank resumes, match candidates with open jobs, conduct initial interviews, generate interview questions, summarize interviews, and produce interview scores and red-flag reports for HR teams.
With the introduction of new AI regulations in Canada and the United States, such as Ontario’s Working for Workers Four Act, 2024, which came into force 1 Jan. 2026, and the California Consumer Privacy Act’s automated decision-making technology regulations, businesses are asking whether the use of AI for HR purposes is allowed and, if so: what disclosure obligations apply; how the risk of discrimination can be mitigated; what privacy obligations govern the collection, use and disclosure of personal information; and what governance policies can further mitigate HR risks when using AI tools.
A threshold question in Canada: Is your business or organization federally regulated?
In Canada, which laws and regulations govern the use of AI tools for HR purposes generally depends on whether a business or organization falls under federal or provincial jurisdiction. Employers subject to federal oversight, such as banks, telecommunications companies, airports or other employers that are not regulated entirely at the provincial level, may be required to comply with existing federal legislation addressing discrimination, workforce equity, pay equity and accessibility: respectively, the Canadian Human Rights Act, the Employment Equity Act, the Pay Equity Act and the Accessible Canada Act.
They may also be subject to federal privacy laws, such as the Personal Information Protection and Electronic Documents Act for private sector businesses and organizations, or the Privacy Act for Canadian federal agencies and institutions.
Provincial employers, on the other hand, may need to comply with their province’s employment standards, human rights and accessibility legislation. In the case of Ontario, this includes the Employment Standards Act, 2000, the Human Rights Code and the Accessibility for Ontarians with Disabilities Act, 2005, as well as, more recently, the Working for Workers Four Act, 2024, which amended the Employment Standards Act, effective 1 Jan. 2026, to introduce specific AI disclosure requirements for job postings in Ontario.
They may also need to comply with provincial private sector privacy laws in Alberta, British Columbia and Quebec, as well as freedom of information laws that govern the collection of personal information by public institutions.
None of these instruments, federal or provincial, outright prohibits employers from using AI for HR purposes; however, they may regulate how AI tools can be used and the governance frameworks around them.
Ontario’s new AI disclosure requirements in job postings
Effective 1 Jan. 2026, under the Working for Workers Four Act, 2024, Ontario employers with 25 or more employees that publicly advertise a job posting and use AI to “screen, assess or select applicants for the position” must “include in the posting a statement disclosing the use of the artificial intelligence.” This requirement does not prohibit Ontario employers from using AI; rather, it is a “tell people what’s happening” rule: employers subject to it must disclose their use of AI in job postings.
Ontario is currently the only province in Canada that, through AI-specific legislation, expressly requires employers to communicate their use of AI in job postings, and Canada has no single, generally applicable stand-alone federal AI law governing private sector hiring.
However, other areas of law remain relevant to employers.
Human rights: ensuring AI tools do not discriminate against applicants on prohibited grounds of discrimination, such as race, sex, gender, ethnicity, creed or sexual orientation.
Privacy laws: ensuring necessary consent is obtained from job candidates and employees for the collection, use and disclosure of their personal information through AI tools, and that contractual protections are imposed on service providers processing that information.
Accessibility requirements: ensuring AI-enabled hiring processes, such as AI-led interviews, are accessible to individuals with disabilities, including those with hearing or visual impairments.
If you are a public sector employer in Ontario, the province has also introduced AI oversight through Bill 194, the Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024, which sets out new expectations for public sector institutions, such as ministries, universities, school boards and children’s aid societies, around the use of AI tools, internal governance, transparency requirements and prescribed technical standards for both cybersecurity and AI governance.
Using AI for HR purposes in the U.S.
There is currently no comprehensive federal AI statute governing the use of AI in the employment context in the U.S. On 20 March 2026, the White House published a “National AI Legislative Framework” outlining policy recommendations for Congress to develop a unified federal approach to AI legislation and regulation. The framework does not impose new obligations on employers, nor does it include draft legislation or an executive order directing federal agencies. Instead, it sets out legislative recommendations for Congress, reflecting the administration’s vision for a comprehensive federal AI statute.
Unless and until Congress enacts federal legislation with preemptive effect, state and local AI laws remain in force. A growing number of state and local jurisdictions — including California, Illinois, New York City and Texas — already explicitly regulate how employers use AI in hiring, promotion, performance management and other employment decisions. These statutes typically seek to regulate AI tools that facilitate material employment decisions such as screening resumes, ranking candidates, assessing “fit,” conducting interviews or scoring applicants. Whether a tool triggers additional legal obligations often turns on how much weight the employer gives to the AI output and whether a human retains the ultimate decision-making authority.
However, the exact threshold for human involvement varies by statute. Multiple jurisdictions now also require advance notice to applicants or employees when AI is used in covered employment decisions, and some grant certain access rights.
Class-action litigation challenging employers’ use of AI tools in employment under existing antidiscrimination and background check legislation, such as the Fair Credit Reporting Act, has been on the rise. While some states, such as California, Illinois and New Jersey, have amended their omnibus antidiscrimination statutes to explicitly prohibit discriminatory AI use, existing federal and state antidiscrimination laws, such as Title VII of the Civil Rights Act, the Americans with Disabilities Act and state civil rights statutes, apply fully to AI-driven decisions.
Some jurisdictions, such as New York City under Local Law 144, explicitly require bias audits where the AI tool is an automated employment decision tool, while others strongly incentivize them through their enforcement posture. Even where not mandated, documented bias testing and risk assessments are critical mitigation tools if AI-driven decisions are challenged as discriminatory.
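By way of illustration only, a bias audit of this kind typically compares selection rates across demographic groups. The following sketch, using hypothetical column names and data, computes per-group impact ratios of the sort contemplated by Local Law 144; it illustrates the arithmetic, not a compliance methodology.

```python
# Minimal sketch of an impact-ratio calculation used in bias audits.
# Column names and data are hypothetical.
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.DataFrame:
    """Selection rate per group, divided by the highest group's rate."""
    rates = df.groupby(group_col)[selected_col].mean()
    return pd.DataFrame({"selection_rate": rates, "impact_ratio": rates / rates.max()})

# Hypothetical screening outcomes: 1 = advanced by the AI tool, 0 = not advanced.
candidates = pd.DataFrame({
    "sex": ["F", "F", "F", "F", "M", "M", "M", "M"],
    "selected": [1, 0, 0, 1, 1, 1, 1, 0],
})
print(impact_ratios(candidates, "sex", "selected"))
# An impact ratio below 0.8 (the EEOC "four-fifths" rule of thumb) is a
# common flag for potential disparate impact warranting closer review.
```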
AI tools also often process sensitive personal data and can trigger state privacy obligations alongside employment laws. Risk assessment obligations have been in effect since 1 Jan. under the CCPA regulations for any processing of sensitive personal information and, separately, where automated decision-making technology (ADMT) is used for a significant decision, such as the provision or denial of employment, or where automated processing is used to infer characteristics or information about a job applicant, employee or independent contractor.
Under the new CCPA regulations on ADMT, employers must also determine whether any use of AI in HR amounts to regulated ADMT or whether there is sufficient human involvement in each use case. Beyond the privacy notice at collection, regulated ADMT triggers a new “pre-use” notice and an access right. Opt-out rights should have limited relevance under the CCPA ADMT regulations for AI in HR because, under the regulations, not unlawfully discriminating in hiring processes is a basis for not offering opt-outs.
Best practices to mitigate risks
As can be seen, there is no unified global framework governing AI in HR. Outside of the U.S. and Canada, the European Union has taken the most comprehensive regulatory approach to AI. Under the EU AI Act, many AI systems used in employment — such as those involved in recruitment, performance evaluation, promotion, termination and workforce monitoring — are classified as “high‑risk.”
This designation triggers extensive obligations, including risk assessments and mitigation measures; data governance and bias controls; human oversight requirements; transparency toward affected individuals; and record-keeping and post-deployment monitoring.
Although employers must navigate a growing patchwork of jurisdiction-specific laws and regulatory guidance, these regimes increasingly converge around a common set of principles. Regulators are consistently focused on transparency in the use of AI, preventing discrimination and biased outcomes, preserving human oversight over automated decision-making, and ensuring accountability for outcomes rather than intent alone. In addition, there is a clear expectation that employers will maintain enhanced internal governance structures and documentation to support responsible AI use.
Against this backdrop, employers can meaningfully reduce legal and compliance risk by adopting a set of core best practices that cut across jurisdictions.
A critical first step is to inventory and classify AI use cases across the employment life cycle. Employers should maintain a clear and up‑to‑date record of AI tools used in areas such as recruiting, performance management, employee monitoring and workforce planning. Each use case should be evaluated considering its legal risk profile, jurisdictional exposure and whether it may qualify as “high risk” under applicable laws.
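As a purely illustrative sketch, an inventory entry might capture fields like those below; the structure, field names and example values are assumptions for illustration, not a prescribed standard.

```python
# Minimal sketch of an AI use-case inventory record.
# All fields, tools and vendors here are hypothetical.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    tool_name: str
    vendor: str
    lifecycle_stage: str      # e.g., "recruiting", "performance management"
    jurisdictions: list[str]  # where affected candidates/employees are located
    decision_weight: str      # "advisory" vs. "determinative"
    high_risk: bool           # e.g., would it be "high-risk" under the EU AI Act?
    last_reviewed: str        # ISO date of the last compliance review
    notes: str = ""

inventory = [
    AIUseCase(
        tool_name="ResumeRanker",     # hypothetical tool
        vendor="ExampleVendor Inc.",  # hypothetical vendor
        lifecycle_stage="recruiting",
        jurisdictions=["Ontario", "New York City", "California"],
        decision_weight="advisory",
        high_risk=True,
        last_reviewed="2026-01-15",
        notes="Annual bias audit due; Ontario job-posting disclosure required.",
    ),
]
```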
Before deploying any AI tool, employers should also conduct pre‑deployment risk assessments. This assessment should consider the potential for bias or disparate impact, the nature and quality of the data sources and training data used by the system and any limitations around explainability or transparency. Employers should further evaluate whether human review of AI‑assisted decisions is meaningful in practice or merely formal, as superficial oversight is unlikely to satisfy regulatory expectations.
Preserving genuine human oversight in employment decision-making is another cornerstone of compliant AI use. AI systems should be used to inform decisions, not replace them. Employers should ensure that human decision-makers retain authority to override automated outputs and “own” all final decisions. Those decision-makers must understand the tool’s capabilities and limitations, and there should be documented escalation and review mechanisms in place. Increasingly, regulators view a true “human in the loop” not as a design preference, but as a legal safeguard.
Vendor management also plays a central role in risk mitigation. Employers remain responsible for employment outcomes even when AI tools are provided by third parties. As a result, vendor due diligence and contracting should address issues such as bias testing and audit rights, data protection and security obligations, transparency and documentation support, and the allocation of liability. Agreements should also require vendor cooperation in the event of regulatory inquiries. Reliance on vendor assurances alone, without verification, is unlikely to withstand regulatory scrutiny.
Effective AI governance further requires alignment across internal policies, notices and training. Employers should review and update employee handbooks, codes of conduct, privacy notices, candidate disclosures and internal AI governance policies to ensure consistency with actual AI practices. Training is equally important. Human resources professionals, legal teams and business users must understand how AI tools operate, where legal risks may arise, and when potential issues should be escalated.
Finally, employers should recognize that AI compliance is not a one‑time exercise. AI systems evolve over time, and so do the legal frameworks governing their use. Ongoing monitoring is essential to detect model drift, emerging bias or changes in applicable legal requirements. A system that was compliant at launch may no longer be compliant six or 12 months later, underscoring the need for continuous reassessment.
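By way of illustration, ongoing monitoring can be as simple as periodically comparing per-group selection rates against a baseline audit period. The sketch below uses hypothetical data and an assumed threshold; it shows the idea, not a regulatory standard.

```python
# Minimal sketch of drift monitoring for AI-assisted selection outcomes.
# The 0.10 threshold and the rates are illustrative assumptions.
import pandas as pd

def selection_rate_drift(baseline: pd.Series, recent: pd.Series,
                         threshold: float = 0.10) -> dict:
    """Flag groups whose selection rate moved more than `threshold`
    (in absolute terms) between the baseline and recent periods."""
    delta = (recent - baseline).abs()
    return {
        "drifted_groups": delta[delta > threshold].index.tolist(),
        "max_delta": float(delta.max()),
    }

# Hypothetical per-group selection rates from two audit periods.
baseline = pd.Series({"F": 0.42, "M": 0.45})
recent = pd.Series({"F": 0.28, "M": 0.46})
print(selection_rate_drift(baseline, recent))
# Expect "F" to be flagged for review: its rate shifted by 0.14.
```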