We are living through the most consequential technology transition in at least a century. The kind of transition that reshapes how economies work, how power is distributed and how we all live our lives. Most people don’t see this happening, but they should.
And, more important, they should have a say in how the transition unfolds.
When my wife and I had our second child last fall, I stepped away from Anthropic, the AI company I cofounded, assuming I knew in my bones the fast pace of artificial intelligence development.
And yet when I returned to work in January, the technology within my own company had changed in ways that made my head spin. New ways of working. New technical breakthroughs. And the hard-to-shake feeling that while we stand on the threshold, the world is not ready for the change on the other side of the door.
Simply put, the AI industry has an urgent problem of disclosure. We must do a better job of telling people about what we see coming so that we can work as a society to confront the changes ahead.
With AI, the stakes are higher and the change is faster
Stanford economists have found, for example, that since ChatGPT’s November 2022 release, entry-level employment has measurably declined for young workers in the most AI-exposed jobs.
And yet AI is expected to compress decades of progress in medicine, education and economic growth into years or months. Studies already show AI systems sharpening rare disease diagnosis, helping university students learn more efficiently and driving a level of economic investment that surpasses the dot-com era.
History tells us that people and public officials actively shape moments of transformative change. In 1840s Britain, as railways transformed the country but priced out working people, Parliament stepped in, requiring every line to run at least one affordable train a day, with seats and shelter from the weather.
In 1930s America, decades after electricity had transformed city life, farms and rural communities were still largely in the dark. Those communities organized cooperatives, demanded power and got it.
With AI, the same need for public engagement holds, but the stakes are higher and the changes are coming faster. The decisions being made right now, about which jobs AI augments rather than replaces, which diseases it helps cure and which communities it reaches first, are still unsettled. They will be shaped by what people know and what people ask for.
AI companies like mine have an obligation to make that engagement possible, which means sharing what we know and then listening to what comes back. The conversation has to flow both ways, and what people outside AI companies think should inform what we build.
That’s why in December, we invited everyone with a Claude.ai account to converse with an AI interviewer and share what they hope for and fear from AI. Nearly 81,000 people across 159 countries and in 70 languages took part. The same aspirations came up again and again: better work, more personal growth, more time.
A U.S. health care worker described feeling freed from the burden of document processing, giving her more time with her patients. A Chilean butcher, someone who had barely touched a computer in his life, built a new business. A Japanese software engineer said technical bugs that once took hours to sort out now resolve quickly, leaving time to cook dinner with family.
AI is too powerful for citizens not to have a voice in its use
But the fears are just as real. The people excited about learning from AI were also among the most likely to worry about losing the ability to think for themselves. Those most grateful for AI’s emotional support were most afraid of becoming dependent on it.
One concern stood out as the strongest predictor of how people felt about AI: what it will do to their livelihoods. In wealthier countries, job anxiety runs high. In lower-income ones, AI feels more like an opportunity than a threat. Both perspectives make sense. And both should shape what we build.
Building the future of AI means taking action. When we published the industry’s first safety framework tying safeguards to model capabilities, other leading AI companies followed suit.
We’ve published research on AI’s impact on workers and called for new economic policies, including possible taxes on our own revenue, to ensure gains are broadly shared.
We’ve also committed to covering the electricity cost increases from our data centers, rather than passing them on to consumers. And together with a coalition of major global companies, including Apple, Google, Microsoft, Nvidia and others, we’ve just launched an initiative to use AI to find security flaws in the software that runs the world’s banks, hospitals and power grids before hackers can.
The vulnerabilities we’ve already found had survived decades of human review. AI will only get faster at finding such flaws.
These efforts reflect a principle: AI safeguards must grow in tandem with AI capabilities. But companies like mine cannot do this alone; governments must engage. Public officials need to hear not just from the companies building AI technology, but from the public as well.
So make your voice heard. Write to your senator or member of Parliament. Tell them what matters to you. The frameworks being debated right now will govern a technology that will reach into every part of our lives.
Like the British railway passengers who demanded access or the rural American communities that organized for electricity, the people who shape this technology will be the ones who show up.
The moment to get engaged is now.
Jack Clark is a cofounder of Anthropic and its head of Public Benefit.