AI In Hiring: Interview room that never had a door, HR News, ETHRWorld

While AI promises efficiency and neutrality in hiring, real-world applications have shown that it often reinforces systemic biases rather than eliminating them.

  • Updated On Feb 16, 2025 at 01:48 PM IST

The screen flickers to life as the AI hiring tool begins its evaluation. A hopeful candidate, Mira, sits in front of her laptop, answering the pre-set questions with confidence and clarity. Yet behind the algorithm’s seemingly impartial facade, something hidden is at work: a history embedded in lines of machine logic. As the AI scans her resume, it highlights specific terms: “Women’s Leadership Programme,” “Diversity Initiative,” and “Maternity Leave Coordinator.” Unbeknownst to Mira, her application is already facing an uphill battle. She is not alone in this struggle.

In 2018, Amazon quietly scrapped its AI recruitment tool after discovering it was penalizing resumes that included the word “women’s” while favouring language more typical of male-dominated fields. A machine intended to be unbiased had absorbed the prejudices of its predecessors. Despite widespread criticism, AI-driven hiring tools continued to spread, as companies eager to streamline their recruitment processes embraced automation.

Now, in 2025, Hyderabad introduces its own AI-powered hiring system, claiming to provide a fair and efficient approach. But can we truly trust a technology that has already stumbled so significantly?

Mira never received a response. The AI, designed to select “optimal” candidates, has made its choice. But the lingering question remains—was it genuinely a decision, or merely a reflection of the existing hiring biases? As Hyderabad’s AI hiring tool integrates into the corporate landscape, we find ourselves at a crucial juncture.

Will AI assist HR in creating a more inclusive workforce, or will it become a quiet enforcer of systemic bias? Can we honestly claim to be making progress when technology merely accelerates human prejudice?

The United Nations’ Sustainable Development Goal 8 (SDG 8) advocates for decent work, equal pay, and inclusive economic growth. Although AI in HR aims to support these principles by reducing human bias, the situation seems to be quite different.

In 2021, the US Equal Employment Opportunity Commission (EEOC) warned that AI-driven hiring could violate anti-discrimination laws if not adequately monitored (EEOC Report, 2021). In 2023, the European Union implemented new regulations to ensure that AI hiring tools are transparent and accountable (European Commission, AI Act). A study in 2024 found that nearly 40% of AI-driven job rejections showed signs of algorithmic bias, disproportionately affecting women and marginalized groups (Harvard Business Review, 2024).

While AI speeds up recruitment processes, the challenge is to maintain ethical oversight. Key concerns include:

  • Transparency – Are candidates informed when AI is used to evaluate them?
  • Accountability – If an AI wrongly rejects a qualified candidate, who is responsible?
  • Bias Mitigation – Are hiring models regularly checked for fairness?
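The bias-mitigation question above can be made concrete with a routine statistical audit. One common heuristic is the US “four-fifths rule”: a selection process shows signs of adverse impact if any group’s selection rate falls below 80% of the most-selected group’s rate. The sketch below is a minimal illustration of that check; the group names and outcome data are entirely hypothetical, not drawn from any real hiring system.

```python
# Minimal sketch of a fairness audit using the "four-fifths rule".
# All data below is hypothetical, for illustration only.

from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, selected_bool) pairs.
    Returns the selection rate for each group."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / total[g] for g in total}

def adverse_impact(outcomes, threshold=0.8):
    """Flags groups whose selection rate is below `threshold` times
    the highest group's rate (the four-fifths rule)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical screening outcomes from an AI resume filter
outcomes = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40 +  # 60% selected
    [("group_b", True)] * 30 + [("group_b", False)] * 70    # 30% selected
)
print(adverse_impact(outcomes))  # prints {'group_b': 0.5} -> flagged
```

Audits like this are deliberately simple; they catch gross disparities in outcomes but say nothing about why the model discriminates, which is why the transparency and accountability questions above still need human answers.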

The AI hiring tool in Hyderabad represents a technological leap, but it also presents a dilemma. Can companies strike a balance between efficiency and fairness? AI itself is not inherently biased; it learns from human data, which can often be flawed. Without strong regulations and ethical AI practices, automating recruitment could reinforce, rather than resolve, hiring inequalities. As businesses implement AI hiring tools, they must ask: Are we building a future of equitable employment, or simply digitizing discrimination?

As a researcher in Sustainable Development Goals (SDG 8 and 10) and an Assistant Professor at Christ University, Lavasa, Pune, I have explored the evolving role of AI in hiring. My recent study, Spotlighting Recruitments: Is AI Dominating Human Resource Practices? Qualitative Research Using NVIVO, published in the Quality & Quantity journal by Springer, reveals a stark reality:

“AI isn’t just a tool; it’s a mirror reflecting our biases. AI-driven recruitment was envisioned to make hiring fairer, faster, and free from prejudice. Yet, without ethical safeguards, it risks becoming an enforcer of systemic inequities rather than an equalizer. The real question is no longer about AI’s efficiency but its integrity. Will we allow AI to revolutionize hiring for the better, or will we let it quietly perpetuate the very biases we sought to eliminate? Because the true danger isn’t AI itself—it’s who designs it, who trains it, and ultimately, who controls it.”

  • Published On Feb 16, 2025 at 01:48 PM IST
