While AI-driven systems hold the potential to streamline hiring processes, hiring discrimination has emerged as a pressing global concern as AI-automated recruitment tools gain widespread adoption. For instance, in August 2023, the US Equal Employment Opportunity Commission (EEOC) reached a landmark settlement with iTutorGroup, a Chinese education technology company, in the first US case to address AI-driven hiring bias involving a foreign company. iTutorGroup was accused of rejecting over 200 candidates solely on the basis of age, a protected status in the US, highlighting the serious ethical risks that AI-driven hiring processes can pose.
As automated tools for job posting, resume screening, and video interviews become more prevalent worldwide, they increasingly influence employment opportunities, often affecting marginalized groups such as women, ethnic minorities, and individuals with disabilities. Addressing bias in these systems demands a collaborative, cross-border effort to design and deploy ethical frameworks, regulatory priorities, and technological innovations to establish a global standard.
Comparative Analysis of AI-Bias Research: US, EU, and China
To better understand the scope of this challenge, we conducted a comprehensive and focused literature review, performing a systematic search of both Chinese-language and English-language academic papers. After collecting 265 relevant papers, we identified the trends, gaps, and overlaps across Chinese-language and English-language studies from the US, EU, and China in the domain of AI hiring bias. We found that researchers from the three research communities approach AI hiring bias research from different perspectives, largely shaped by distinct AI ethics normative frames, regulatory rationale, and policy priorities.
On the topic of discrimination, Chinese research focuses predominantly on gender, age, and disability, whereas US and EU researchers extend their focus to include racial discrimination. Regarding the AI tools examined, Chinese research highlights gig platforms, emphasizing the ethical concerns tied to the nation’s expansive gig economy and its distinctive labor dynamics. Research in the US and EU, by contrast, concentrates more on cutting-edge technologies such as large language model (LLM)-based hiring tools, reflecting a prioritization of innovation in recruitment systems and advanced AI applications. Despite these regional differences, all three communities converge in their interest in general AI-driven human resource management (HRM) and AI video interviews, indicating shared global concerns about these hiring tools.
Among the Chinese-language papers, researchers prioritized the development of comprehensive nationwide policies and legal frameworks, an apparent response to the central government’s directive to strengthen AI ethics regulation and a reflection of China’s state-led governance model. Notably, all 43 Chinese-language papers on AI hiring bias focus exclusively on legal analysis or public policy research, reflecting a centralized and formalized approach to China’s efforts in this domain. For instance, while China’s Labor Law and Personal Information Protection Law include general anti-discrimination provisions, neither explicitly addresses AI-driven hiring processes. As a result, the legal scholars in the papers we reviewed concentrated on proposing improvements to these laws within their existing frameworks, while sociology researchers offered strategic recommendations for governance models tailored to different levels of government, aiming to bridge the gap between policy design and practical implementation.
Among the 91 papers authored exclusively by researchers affiliated with institutions in the US, 21 (23%) focused on law and public policy. Their policy research centers on the application and implications of key legislation and guidance such as the Civil Rights Act, the Americans with Disabilities Act (ADA), and the Equal Employment Opportunity Commission (EEOC) guidelines. Among the 55 papers authored by researchers affiliated entirely with EU countries, 21 (38%) examined law and public policy, reflecting a relatively stronger emphasis on these areas within the EU research community compared to the US. The EU researchers’ policy recommendations reflect the EU’s approach of establishing centralized regulatory bodies. Most of the EU policy and legal research on AI hiring bias we reviewed centers on the General Data Protection Regulation (GDPR). Building on the GDPR’s foundation, the Artificial Intelligence Act (AI Act), adopted in 2024, represents significant progress in EU regulatory efforts. This landmark legislation establishes a risk-based framework for AI systems, explicitly addressing the unique challenges posed by their deployment in high-stakes domains like recruitment. Under the AI Act, AI systems used in hiring are classified as “high-risk,” mandating strict adherence to rigorous standards of transparency, accountability, and non-discrimination.
In addition, because job seekers are the group most directly affected by AI hiring bias, research on their experiences is critical for understanding and addressing the real-world implications of AI systems. Twelve of the 55 papers (22%) authored by EU-based-only researchers investigate job seekers’ experiences, reflecting a moderate focus on the human impact of AI-driven hiring processes. By comparison, only 14 of the 91 papers (15%) authored by US-based-only researchers do so. Chinese research, however, includes no empirical studies focused on job seekers at all, highlighting a significant gap in exploring the direct impact of AI hiring bias on individuals. Despite these efforts, the overall volume of research addressing job seekers’ experiences remains insufficient.
In addition to studying the content of the papers in our dataset, we also examined authorship patterns, especially international collaborations. Analyzing the number of collaborative research studies reveals distinct patterns for the US, EU, and China. Despite its lack of domestic regulation on AI ethics, the US is the most active in leading international collaboration, accounting for 38.7% of total partnerships in our corpus. Its collaborations span both high-income economies, such as EU member states, and emerging markets like China and India. The EU, with 23.6% of the total collaborations, demonstrates strong intra-regional collaboration while also maintaining significant cross-regional partnerships with Asia-Pacific countries such as Singapore and Australia. China, contributing 4.1% of total collaborations, shows a limited scope compared to the other two AI communities studied, with its partnerships concentrated primarily within the Asia-Pacific region. Collaboration among the three largest AI communities on AI policy-making remains limited. For example, the US and China have collaborated on three studies, all of which focus on enterprise management topics. Similarly, researchers in the US and the EU collaborated on only two studies in our consideration set, also centered on management and business. Notably, there is no collaboration between researchers from China and the EU in our corpus, even though political trust between China and EU countries surpasses that between China and the US. Strengthening the connections among the three could significantly enhance global AI development by addressing shared challenges and fostering mutual trust.
Conclusion: Call for Multilateral Collaboration
In 2021, UNESCO’s adoption of the Recommendation on the Ethics of AI by 194 member states marked a significant milestone in establishing a global framework for AI ethics, emphasizing fairness, inclusivity, and accountability. Despite this progress, effective multilateral policymaking on AI remains incomplete, hindered by persistent mistrust between major AI powers like China and the US. This lack of collaboration poses serious risks, including the fragmentation of ethical standards, which creates regulatory inconsistencies, complicates compliance for international companies, and stifles innovation through duplicated efforts. Worse, it could trigger an “AI Ethics Arms Race,” where ethical frameworks become tools for geopolitical influence, escalating conflicts and undermining trust. Such fragmentation threatens the protection of human rights and privacy, particularly in areas like data security, bias mitigation, and surveillance, eroding confidence in AI technologies and fostering inequality. Without a unified global approach, AI’s potential to address transnational challenges and maintain public trust in critical sectors remains critically compromised.
The urgency for multilateral collaboration in AI ethics cannot be overstated: nations must move beyond competition and embrace their shared responsibility to develop robust ethical frameworks. Crucially, cooperation does not require complete alignment in ethical or political philosophies, and pragmatic collaborations can still flourish amid tensions. Addressing shared issues, such as bias in AI-driven hiring processes, offers a practical starting point.
Beyond governments, multilateral efforts should include corporations and academia, combining diverse expertise for a holistic and innovative approach to AI ethics. Initiatives like the International Telecommunication Union’s (ITU) “AI for Good” conference and programs demonstrate the potential of inclusive partnerships by uniting policymakers, industry leaders, and researchers to tackle global challenges such as bias mitigation and sustainable development while upholding ethical principles. These efforts not only elevate ethical standards but also foster mutual trust and shared values in advanced technology development.
As AI ethics initiatives evolve, we must consider whether they will act as bridges fostering global cooperation or as dividing lines deepening geopolitical tensions, with this choice inevitably shaping the trajectory of AI development in the years to come.