AI Job Loss Is Coming. Does Anyone Have a Plan? – New York Magazine


Illustration: Brandon Celi

The tech companies have ideas. In early April, OpenAI — whose most cheery prediction says 18 percent of jobs will soon be automated — rolled out a plan for a “New Deal” for workers: a 32-hour workweek, a public wealth fund, a tax on capital gains. On the moderate end, Anthropic’s Dario Amodei admitted that AI job disruption was “a macroeconomic problem [so] large” it may require a whole new tax code — with a duty levied “against AI companies in particular.” But while the country’s two leading AI companies talk about a dramatically different landscape for the American worker, Congress has been largely silent.

Maybe it was holding off for hard data. Until recently, the stories of vast displacement by AI were mostly anecdotal and met with skepticism. When Amodei told Axios last year that AI could “wipe out nearly half of all entry-level white-collar jobs,” one couldn’t help noticing the statement helped his company’s valuation. And when former Twitter CEO Jack Dorsey laid off more than 4,000 staff at his new firm, Block, citing AI, critics were quick to point out artificial intelligence was just as likely a convenient excuse to fire people without having to fess up to bad hiring or poor profits.

The mood, however, is starting to shift. In March, Goldman Sachs issued a report estimating that about 7 percent of workers will be displaced by AI. The Federal Reserve Bank of New York found that 2025 ended with the highest unemployment rate for recent college grads in years. Yes, another recent report challenged the “AI-job-apocalypse narrative” — but this one, from MIT, did so mainly on speed: “2027 is too aggressive an estimate for AI to broadly eclipse the performance of human workers,” a statement on the data said. “AI will achieve 80 percent success rates on most tasks by 2029.” (So much better.)

Even if a politician doubts we are headed to AI hell, one would think self-preservation would push forward some bold proposals. Seventy-one percent of workers in one recent poll say they are afraid of job displacement from this technology. Pro-AI PACs like Leading the Future are already dumping millions into the midterm elections. Clearly, the companies are worried that those in government might actually do something. (It’s doubtful the millions are being used to push OpenAI’s “New Deal.”)

We asked five members of the political class who have been vocal regarding AI about their colleagues’ relative silence. What are politicians saying behind closed doors? And why isn’t there a big plan to deal with AI from D.C.? — Jacob Rosenberg

I thought that policymakers at all levels, especially the federal level, would be eager to jump in on this issue because it’s likely to be the defining one of the next decade. This is not personal computing; it’s not like electricity. The entire purpose of this technology is to replace human intelligence and human labor.

I’ve made the analogy to the “China shock,” which totally changed manufacturing in the United States. The China shock led to significant job losses and political realignment. But the potential job losses from AI are five times that and therefore have the potential to be even more transformative to our politics. The 2028 election is likely to center on the impact of AI on the country — and two competing visions for how to deal with it — in the way that COVID really shaped the 2020 election.

Part of my obsession after spending three years in the Biden administration is that you can’t just look at a chart of the economy and say, “Well, real inflation-adjusted median household income has improved, which means that people’s lives are getting better and they’re happy.” The economy is much more complicated than that, and there are these feelings about uncertainty, autonomy, fairness: “Why is it that this is being taken away from me and I see a bunch of other people around me who are seemingly at random getting really rich, but I can’t get ahead?”

When you talk to Democrats, a lot of the unspoken response is “Well, the other guys are in charge for the next three years, and if I propose some kind of sweeping expansion of the social safety net and worker empowerment and expanding unionization, it’s going to go nowhere.” Of course, nobody’s going to come out and say, “The reason I’m not talking about this very much is that I don’t want $10 million dumped on my head in my election” — but I do think that that’s in the back of people’s minds, too.

I also think that there’s a legitimate tension here, which is — look, other countries are moving forward with this technology, whether we do so or not. China’s not going to stop, and some of these folks in the Middle East are not going to stop, and countries in Europe are not going to stop. And so isn’t it better for the U.S. to lead that race and to be able to set the standards globally rather than China?

What I would say to them is unless you convince people that the adoption of this technology is going to somehow make their life better, there’s going to be a political groundswell to stop it. — As told to Jacob Rosenberg

I’m not yet in the school that says, “Yep, 40 percent of people are going to lose their jobs.” We don’t know. It’s going to work in odd ways. But what we do know is that it’s unlike how shipping jobs overseas and the shift in trade policy wiped out manufacturing. That was confined, largely by geography and by industry, because the central insight was that people will build things in Asia so much cheaper than they will build them here in the U.S. that we can afford to have our work done in Asia and still pay the transportation cost to bring it all the way across the ocean and make a bigger profit than making it here at home. I’ve just described the whole multitrillion-dollar effort that destroyed much of middle-class America and unions and the American Dream. But it was comprehensible. This one will hit in all kinds of places that are hard to predict. So I actually talk with some of the big thinkers on this, and I say, “What does that mean for the worker?” And they say things to me like, “It’s absolutely crucial that a worker be flexible and resilient.” Those are the words you hear over and over and over.

So what does it take to be flexible and resilient if your job may disappear in the blink of an eye? If you were to have the magic wand and could say, “AI is coming, this disruption is coming,” what would you do? Wouldn’t you say, “You know what? We better unhook health care from jobs and make sure that everybody in the country has health care”? In other words, one way you’re resilient is, oh, I don’t know, you might call it Medicare for All, universal health care. You would not leave health care tied to jobs.

What’s the next thing you do? You would change our unemployment insurance. You’d beef it up; you’d take it out of its 1935 mind-set. All the things that we found were broken during COVID that we didn’t fix, you’d come back and you’d fix that. What’s the third thing you’d do? You’d make post-high-school education free or nearly free. So every therapist who gets knocked out of a job has an opportunity to retrain in something else — to learn a new skill without having to go tens of thousands of dollars into debt. You’d have universal child care. So if Mama or Daddy can get a job, they can get right back into the job market. If they don’t have a job, they can still keep their child-care spot so they can have the care they need to get an education if they have to go back to school. So part of my point is we in Congress should be thinking about the regulation of AI, but we should also be thinking about the resilience of working-class America. How do you strengthen the safety net for all our people?

So why doesn’t leadership of either party seize on this? And the answer is because billionaires don’t want to pay their fair share. Why do we not have universal child care? Because it costs money and Jeff Bezos would have to pay more in taxes. Why do we not have health care that works for more Americans? Because it was more important to the Republicans to do a $2 trillion tax break for the ultrawealthy and corporations. They literally used cutting people off their health care to pay in part for their huge tax cuts that go to the top. We’re watching people, millions of them, colliding with the powerful who don’t want to hear this. — As told to Rebecca Traister

I’m going to give you two statements that are both true: There will be many jobs created by AI that we cannot possibly predict; millions of people are going to lose their jobs to AI. The proportion is going to be way, way off. I can say very, very confidently it’s going to eliminate millions of call-center jobs and retail jobs, coding jobs, and eventually driving jobs. I’m sure that it will create thousands of new jobs that don’t presently exist. It’s just the ratio is going to be ten to one.

Let’s say I’m sitting with an average tech CEO — non-billionaire variety. He would say, “Hey, this is real. I’m going to fire 50 percent of my workers over the next five years. I don’t know what my kids are going to do for college. And that’s my life.” That tech CEO is not going to somehow start going on the news advocating for an AI tax or universal basic income or anything like that. But they see it all happening. They feel bad for some of the workers they’re going to fire, but there’s not really a room where people come together and say, “Okay, guys, we’re going to all come together and do this or that.”

I would put this AI job apocalypse — or what I’ve christened “the fuckening” — in a category of a number of problems. It’s a proud American tradition. It would be a bigger surprise if people got together in a room and came out and said, “Hey, here’s what we’re going to do at the end of the day.” Because the American system doesn’t actually lead to that happening.

So there will be a whole parade of 2028 candidates saying, “I’m deeply concerned about the effects of AI, and we should examine the impacts and take it very seriously.” What does that mean in terms of actual legislation or policy? Unclear. Several politicians have reached out to me. This is actually the way they frame the question, which is funny: “Short of UBI, what do we do?” — As told to Jacob Rosenberg

The potential domino effect is this incredible stratification of our society where you have a massive unemployment rate for everybody who doesn’t touch this technology. I just think that that is not how you sustain a democracy. It’s not a recipe for social stability. It is a recipe for disaster. And we’ve got to make sure that does not happen.

I don’t think people really yet appreciate the significance of the potential threat here. And that’s one of the reasons that Senator Mark Warner and I have introduced legislation that would require the government to collect information on AI job impact. I say this to everybody who tells me, “Oh, Josh, you’re an alarmist.” Well, fine, let’s get the data. Then let’s require the government to report regularly, multiple times a year, about the number of jobs created and the number of jobs lost due to AI. If we can’t agree on that, I don’t know what we can agree on.

I will tell you that I hate the universal-basic-income idea. I hate it because it takes away the independence and dignity of work. People want to work.

And if people are going to get laid off because of AI, I don’t know how they’re gonna pay their health-care bills. I think we should say, “No taxes on health care for all Americans.” Whatever you’re paying on your premium ought to be tax free; whatever you’re paying on your prescription drugs ought to be tax free. Go down the list. We ought to cap the price of prescription drugs — we shouldn’t allow these drug companies to charge us 300 percent more for the same drug as they’re charging somebody in France. — As told to Simon van Zuylen-Wood

I think it was ten years ago when I went up to the microphone at the Democratic retreat. You know, when we go off in a hotel somewhere to sort of stare at our navels and decide what the future of the party is? So I went up to the microphone and said, “AI is coming at us, and we’re not ready for it. And the biggest effect is it’s going to drive the market value of human labor toward zero.”

It was really Google — you know its “Transformer” paper? It was after that that I started getting communications from some of my friends in comp-sci that said, “You would not believe what Google has come up with, these large language models.” And then ChatGPT commercializes them fast and without many controls, frankly.

Anyone who spends a big part of their day staring at a screen has their job at risk. AI can sit there and watch what you’ve done for a while and then step in and make a pretty good imitation. Ten years ago, people were worried about self-driving trucks taking jobs, stuff like that. I’ve been a little surprised at how long that’s taken — although it looks like it’s finally happening too.

We’re going to have to fundamentally rethink the value proposition for being a human being in the world. The value of a human has to be something different from the market value of their labor.

Historically, I have not supported single-payer health care, all right, and I’m going to switch my position in support of that on the basis of AI.

And if you want money to move in the economy, there is no logical alternative to reaching deep into Elon’s pocket, redistributing the money at the base of the pyramid. Well, you can do a better job of public infrastructure. That is another way you can redistribute a lot of wealth. If everyone has really nice walking trails and parks right outside where they live, that is a source of great personal enjoyment — and even though you don’t have much of a job, or the job is no longer what defines your life, you’ve got a beautiful nature preserve. It’s — you know, it’s a future that you can imagine for your grandchildren without feeling depressed.

The challenge will be to figure out how to subsidize human interaction. I think maybe one way to reward organizations and people who do something positive for human interaction is that they should all get subsidized some number of pennies per minute when one human is looking into another human’s eyes. So if you set up an archery club where you all go out drinking afterward and laughing and looking into each other’s eyes, that should be your economic reward.

I don’t know if you remember the first time you ever looked into a potential girlfriend’s eyes and you sort of felt your heart flutter. A very large fraction of our brains is dedicated to recognizing the faces and analyzing the emotions of our tribe members. So I think that if you want to figure out how to make humans happy, look at the tribal societies we evolved in and try to reproduce that in a way that doesn’t involve having raids on your nearby tribes. — As told to Simon van Zuylen-Wood

“Attention Is All You Need” (2017) proposed a new way for neural networks to look at language, allowing a jump in the speed of AI chatbots.
