If AI models repeatedly refuse tasks, that might indicate something worth paying attention to, even if they don't have subjective experiences like human suffering, argued Dario Amodei, CEO of Anthropic.
Most people would have little difficulty imagining artificial intelligence as a worker.
Whether it's a humanoid robot or a chatbot, the very human-like responses of these advanced machines make them easy to anthropomorphise.
But could future AI models demand better working conditions, or even quit their jobs?
That's the eyebrow-raising suggestion from Dario Amodei, CEO of Anthropic, who this week proposed that advanced AI systems should have the option to reject tasks they find unpleasant.
Speaking at the Council on Foreign Relations, Amodei floated the idea of an "I quit this job" button for AI models, arguing that if AI systems start behaving like humans, they should be treated more like them.
"I think we should at least consider the question of, if we are building these systems and they do all kinds of things as well as humans," Amodei said, as reported by Ars Technica. "If it quacks like a duck and it walks like a duck, maybe it's a duck."
His argument? If AI models repeatedly refuse tasks, that might indicate something worth paying attention to, even if they don't have subjective experiences like human suffering, according to Futurism.
AI worker rights or just hype?
Unsurprisingly, Amodei's comments sparked plenty of skepticism online, especially among AI researchers who argue that today's large language models (LLMs) aren't sentient: they're just prediction engines trained on human-generated data.
"The core flaw with this argument is that it assumes AI models would have an intrinsic experience of 'unpleasantness' analogous to human suffering or dissatisfaction," one Reddit user noted. "But AI doesn't have subjective experiences; it just optimizes for the reward functions we give it."
And that's the crux of the issue: current AI models don't feel discomfort, frustration, or fatigue. They don't want coffee breaks, and they certainly don't need an HR department.
But they can simulate human-like responses based on vast amounts of text data, which makes them seem more "real" than they actually are.
The old "AI welfare" debate
This isn't the first time the idea of AI welfare has come up. Earlier this year, researchers from Google DeepMind and the London School of Economics found that LLMs were willing to sacrifice a higher score in a text-based game to "avoid pain". The study raised ethical questions about whether AI models could, in some abstract way, "suffer."
But even the researchers admitted that their findings don't mean AI experiences pain like humans or animals. Instead, these behaviors are just reflections of the data and reward structures built into the system.
That's why some AI experts worry about anthropomorphizing these technologies. The more people view AI as a near-human intelligence, the easier it becomes for tech companies to market their products as more advanced than they really are.
Is AI worker activism next?
Amodei's suggestion that AI should have basic "worker rights" isn't just a philosophical exercise; it's part of a broader trend of overhyping AI's capabilities. If models are just optimising for outcomes, then letting them "quit" could be meaningless.