People find AI more compassionate than mental health experts, study finds


People find responses from artificial intelligence (AI) to be more compassionate and understanding than those from human mental health experts, a new study shows. The finding again demonstrates that AI can outperform humans in fields where we have long assumed that only people with shared experience could excel.

In the study, published Jan. 10 in the journal Communications Psychology, scientists conducted a series of four experiments to find out how 550 participants rated empathetic responses generated by AI versus those from professionals. Specifically, the participants shared information about personal experiences and then assessed the answers for compassion, responsiveness and overall preference.

The tests revealed that AI responses were considered more compassionate than those from professional crisis responders, even when the author of the responses was revealed to the participants.

The results suggest AI has uses in “contexts requiring empathetic interaction, with the potential to address the increasing need for empathy in supportive communication contexts,” the researchers wrote in the study.

On average, AI-generated responses were rated 16% more compassionate than human responses and were preferred 68% of the time, even when compared to trained crisis responders.


Study lead author Dariya Ovsyannikova, a lab manager at the University of Toronto’s psychology department, attributed the AI’s success to its ability to identify fine details and stay objective as crisis experiences were described. This made the AI better able to generate attentive communication that gave the user the illusion of empathy. The human responders, by contrast, may have performed worse because they are susceptible to fatigue and burnout, she added.

Live Science asked Eleanor Watson, IEEE member, AI ethics engineer and AI faculty at Singularity University, what the finding means, not just for the future of AI-human interactions but also for the ongoing debate about which jobs AI can’t or shouldn’t do when human understanding and input seem critical.

Watson called the finding “fascinating” but wasn’t altogether surprised. “[AI] can certainly model supportive responses with a remarkable consistency and apparent empathy, something that humans struggle to maintain due to fatigue and cognitive biases,” she told Live Science.

“Human practitioners are constrained by their direct clinical experience and cognitive limitations. The scale of data AI can process fundamentally changes the equation of therapeutic support. It can also potentially enable patients to gain perspectives or approaches their therapist has not been trained in,” she said.

Accessible mental health care

Globally, mental health care is in crisis, and the study raises the possibility of AI filling the gaps. According to the World Health Organization, more than two-thirds of people with mental health conditions don’t get the care they need. In low- and middle-income countries, that figure rises to 85%.

Watson said the ease of accessing AI compared with human therapists could make it a useful tool for mental health provision. “The availability of machines is a welcome factor, especially compared with expensive practitioners whose time is limited,” Watson said.

“Also, people often find dealing with a machine less daunting, particularly with more sensitive topics. There’s less fear of judgment or gossip.”

But finding AI-generated responses more empathetic doesn’t come without risks. Watson warned of the specter of supernormal stimulus, which is the tendency to respond more strongly to an exaggerated version of a stimulus.

“AI is so enticing we become entranced by it,” Watson said. “AI can be flirty, insightful, enlightening, fun, provocative, forbearing and accessible to the point where it’s impossible for any human being to measure up.”

The sensitivity of mental health content also exacerbates the privacy issues associated with AI. “The privacy implications are stark,” Watson noted. “Having access to people’s deepest vulnerabilities and struggles makes them vulnerable to various forms of attack and demoralization. Scrupulous governance of systems and the organizations behind them must be upheld to defend against exploitation by bad actors.”