Scientists publish handover script to guard against AI technology harm – News

GOVERNANCE of AI is lagging dangerously behind the technology’s spread through society, scientists have warned, so a new “responsible handover framework” has been launched to help users spot and manage the risks.

The framework has been developed by the charity Sense About Science and partners working in AI and data science. It can be used across sectors and society by those handing over or adopting AI-based tools.

Using healthcare as an example, Sense About Science warns that, unlike drugs and medical devices, there are currently no rules governing the safe development and rollout of software and apps, despite equivalent risks of harm.

Tracey Brown, director of Sense About Science, said: “We’re already seeing examples where diagnostic applications for detecting skin cancer have been tested predominantly on white skin and don’t show up the same things on black skin.

“We’re seeing actual bodily harm as a result of people not understanding the context in which they can deploy tools.”

The lack of guidance about appropriate use cuts both ways: as well as exposing people to harm, it could deter them from adopting tools that could genuinely help.

The framework builds on those already used during the engineering and commissioning of physical infrastructure. It is designed to prevent crucial information being lost as an AI-based tool makes its way stage-by-stage from code development to use in the real world, between people with various levels of expertise and experience.

For example, a statistician who develops an AI tool might hand it over to a researcher but fail to provide the data used for training, the assumptions built into the model, or how the tool was tested. The researcher then tests the tool but less rigorously than they could have. They pass it to an app developer but fail to share the reasons behind any adaptations to the original model. And so on, with information being lost at every stage until it reaches a clinician who uses the tool to make decisions that affect people’s lives.

The framework provides the prompts to script a handover conversation that helps clarify the origin, capability, and limitations of a tool. It sets out the information that should be shared and the questions that organisations and users should ask.

Sir Peter Gluckman, chair of the International Science Council, said: “Too often AI and digital tools that can impact on health and society come out in a rush, without evaluation of unintended consequences. There are lots of dangers to individuals, communities, civic life, and the environment if we don’t have responsible handover of these technologies.”

Tariq Khokar, head of data for science and health at Wellcome, said: “The increasingly computational nature of science and research means responsible governance for digital tools is only going to get more important.”

The framework is available here.