AI has the potential to elevate productivity and economic growth and, if used responsibly, can help make work more meaningful for people. But we're not always aware of AI's presence in the technology we use. Hayfa Mohdzaini examines the vital role of the HR profession in understanding the presence and potential of AI, and importantly, the ethical use of it at work.
A CIPD and HiBob survey of more than 800 UK company bosses found that 87% were aware that AI can be present in search engines, but only 64% were aware of its use in HR information systems. However, excitement around AI-generated content, along with HR vendors including generative AI features, is increasing awareness of the technology's prevalence and igniting renewed debate around ethical AI adoption.
Potential risks of AI at work
We believe that work can, and should, benefit people and society as much as it benefits business and the economy. While AI has the potential to support good work, for example by optimising routine tasks and freeing up time for more creative work, employers must also consider the potential risks. This means not using AI and other technologies in ways that cause harm or conflict with company values.
Replacing workers with AI without proper support for people to upskill or transition to other employment can be deemed unethical. Although some AI-driven job displacement will be inevitable, technology should be used to complement human capability.
Often, the goal is to free up time for more rewarding work. However, the mix of tasks needs to be revisited when AI radically changes someone's job; not everyone can do deep thinking or handle difficult customer queries all day.
Would the employee be more productive if they got some time back for learning, volunteering or simply rest? It's important to involve employees in redesigning their roles, because not everyone has the same needs, and negative impacts on wellbeing and engagement harm both the individual and the organisation.
The use of AI in recruitment should also be monitored carefully. Biases in the data that AI has been trained on could potentially lead to candidates being disadvantaged based on their protected characteristics, such as sex, age or race.
AI could also be used in ways that are intrusive and even unlawful. For example, collecting and analysing candidates' social media profiles, which might include personal photos and family information, could breach the GDPR data minimisation principle.
Ethical grey areas
Some AI uses fall into an ethical grey area: practices that are legal but might get pushback from employees.
One example is introducing a new measure for monitoring employee performance, such as email use or time spent on tasks.
While the intention may be to improve performance, safety or accountability, employees might feel their privacy is being invaded, or the data could be used to unfairly evaluate their performance without proper context.
In this case, telling employees is not enough: employers need to have a two-way conversation with those affected to make sure the measures are relevant and necessary to improve business outcomes.
HRās role in adopting AI ethically
An organisation's ability to effectively embrace the benefits of AI depends on having experts in people, work and change involved in decisions about technology implementation and how these are communicated to employees. HR professionals will be able to consider the immediate impact while taking a longer-term view of how to respond to the impact on people's jobs, as well as establishing guardrails for how AI is used.
HR should help develop a culture that supports responsible AI use in a way that aligns with the organisation's values. This includes offering upskilling and reskilling opportunities and providing a safe space for employees to learn from each other.
If it becomes clear that jobs will likely be replaced, think about opportunities to retrain and redeploy workers within the organisation before considering redundancies.
Consultation with employees before, during and after any change to the way they work is vital, especially for assessing impact in a meaningful way.
Is an AI 'code of ethics' necessary?
HR professionals should provide guidelines to help people make the right decisions about how and when AI can be used. These could set out examples of acceptable and unacceptable use, the potential consequences of getting things wrong, and who individuals should contact for advice if they are unsure.
Addressing the risks AI presents may already be covered by existing company policies, but a specific AI policy or 'code of ethics' can provide clarity and help prevent wrong assumptions being made.
This could include:
- An explanation of why it is needed, covering how employees are expected to behave and a recognition that AI use should be grounded in the organisation's culture and values
- An outline of responsible use, being clear that AI is a tool and not a replacement for human decision-making
- The need for transparency about where and how AI is being used
- A recognition of the bias that might exist in AI systems and the measures in place to mitigate it
- Which systems can safely be used and the data protection and information security measures the organisation is taking
- How to raise concerns about AI use
The CIPD has published guidance on how to create an AI use policy and choosing the right technology for your organisation.
Spelling out the dos and donāts in an AI code of ethics makes clear what organisations mean by responsible use of AI that benefits both employers and their people.