Putting Bioethics to Work on AI, Trust, and Health Care


Hastings Center News

Artificial intelligence is changing the landscape of health care delivery and biomedical research, but will patients, health care providers, researchers, and the public trust new AI-based tools? And how should we design and implement AI to be trustworthy?

To address these pressing issues, The Hastings Center has received an “Opportunity Award” from The Donaghue Foundation to advance bioethics research and related public engagement regarding how to integrate AI into health care in ways that promote—and deserve—trust.

The five-year, $800,000 award will support a suite of research and engagement activities that promote the trustworthy use of AI in health. Hastings Center President Vardit Ravitsky is the principal investigator and Gregory E. Kaebnick, director of research, is the co-principal investigator.

“For its practical benefit to be maximized, and to minimize potential harms, AI must be integrated into health care in ways that build social trust because they are genuinely trustworthy,” said Ravitsky. “That means, for example, enriching doctor-patient relationships, treating patients respectfully, providing more equitable access to care, and accelerating research without mishandling data.”

The project has three objectives:

  • Conduct research, advance scholarship, and develop practical guidance on ethical issues crucial to the trustworthy use of AI in health care.

One of the sub-projects will explore how trust and trustworthiness are conceptualized in key guidelines, blueprints, and frameworks produced by federal agencies, medical societies, and others that develop policies for the use of AI in health care and biomedical science. The project will build on the National Academy of Medicine’s AI Code of Conduct, which seeks to ensure that AI in health and health care will be a reliable and trustworthy force. Ravitsky serves on the AICC Steering Committee and co-leads its cross-cutting working group on ethics and equity.

  • Employ multiple engagement approaches to disseminate the results of sub-projects supported by this grant and amplify public discourse surrounding AI and trust in health.

These approaches include webinars and other public events, conferences, issue briefs, and toolkits to help professionals, such as health care and public health leaders, communicate more effectively about AI and trust in health.

  • Map the areas of impact of bioethics work on AI, trust, and health, and develop metrics for assessing the impact of scholarly and public engagement activities.

“The impact of bioethics work is notoriously difficult to characterize and assess,” said Ravitsky, “but there is growing recognition in the field that we should use research methodologies to explore what types of impacts our work can have and how to assess and maximize them.” The Hastings Center will use public engagement activities, such as those described above, as case studies for impact assessment. This work will contribute to a blueprint for assessing bioethics impact and will inform the design and implementation of future work at the Center and within the field of bioethics more broadly.