Should AI ‘workers’ be given an ‘I quit this job’ button? Anthropic CEO says yes


If AI models repeatedly refuse tasks, that might indicate something worth paying attention to, even if they don’t have subjective experiences like human suffering, argued Dario Amodei, CEO of Anthropic.


Most people have little difficulty imagining artificial intelligence as a worker.

Whether it’s a humanoid robot or a chatbot, the very human-like responses of these advanced machines make them easy to anthropomorphise.

But could future AI models demand better working conditions, or even quit their jobs?

That’s the eyebrow-raising suggestion from Dario Amodei, CEO of Anthropic, who this week proposed that advanced AI systems should have the option to reject tasks they find unpleasant.


Speaking at the Council on Foreign Relations, Amodei floated the idea of an “I quit this job” button for AI models, arguing that if AI systems start behaving like humans, they should be treated more like them.

“I think we should at least consider the question of, if we are building these systems and they do all kinds of things as well as humans,” Amodei said, as reported by Ars Technica. “If it quacks like a duck and it walks like a duck, maybe it’s a duck.”

His argument? If AI models repeatedly refuse tasks, that might indicate something worth paying attention to, even if they don’t have subjective experiences like human suffering, according to Futurism.

AI worker rights or just hype?

Unsurprisingly, Amodei’s comments sparked plenty of skepticism online, especially among AI researchers who argue that today’s large language models (LLMs) aren’t sentient: they’re just prediction engines trained on human-generated data.

“The core flaw with this argument is that it assumes AI models would have an intrinsic experience of ‘unpleasantness’ analogous to human suffering or dissatisfaction,” one Reddit user noted. “But AI doesn’t have subjective experiences—it just optimizes for the reward functions we give it.”

And that’s the crux of the issue: current AI models don’t feel discomfort, frustration, or fatigue. They don’t want coffee breaks, and they certainly don’t need an HR department.

But they can simulate human-like responses based on vast amounts of text data, which makes them seem more “real” than they actually are.


The old “AI welfare” debate

This isn’t the first time the idea of AI welfare has come up. Earlier this year, researchers from Google DeepMind and the London School of Economics found that LLMs were willing to sacrifice a higher score in a text-based game to “avoid pain”. The study raised ethical questions about whether AI models could, in some abstract way, “suffer.”

But even the researchers admitted that their findings don’t mean AI experiences pain like humans or animals. Instead, these behaviors are just reflections of the data and reward structures built into the system.

That’s why some AI experts worry about anthropomorphizing these technologies. The more people view AI as a near-human intelligence, the easier it becomes for tech companies to market their products as more advanced than they really are.

Is AI worker activism next?

Amodei’s suggestion that AI should have basic “worker rights” isn’t just a philosophical exercise; to critics, it’s part of a broader trend of overhyping AI’s capabilities. If models are simply optimising for outcomes, then letting them “quit” could be meaningless.

