How worried should Americans be as AI threatens jobs? | Season 2026 – PBS


GEOFF BENNETT: A number of polls, including our own, show Americans are increasingly worried about the economy and more pessimistic about where it’s headed than at any point in recent memory.

March brought the biggest jump in inflation in nearly two years, but it’s not just prices.

Americans are anxious about their jobs and specifically about whether A.I. is coming for them.

Some prominent voices are calling it catastrophic.

Others say it’s all hype.

The data so far is somewhere in between and deeply contested.

Let’s try to break down some of this with Josh Tyrangiel of “The Atlantic.”

He’s spoken with economists, CEOs, and a number of experts all about this for his recent piece “America Isn’t Ready For What A.I. Will Do to Jobs.”

Josh, thanks for being with us.

JOSH TYRANGIEL, “The Atlantic”: My pleasure.

GEOFF BENNETT: So there’s been a lot of talk in recent months about A.I. coming especially for entry-level white-collar jobs.

You have done all the reporting.

Paint a picture for us.

JOSH TYRANGIEL: Sure.

What I really set out to try and figure out is, exactly to your point, like, what’s actually happening and what’s about to happen?

And so I did a little bit of a tour.

I spent a ton of time with economists.

And economists are divided really in two ways.

We have seen technological disruptions before.

And so economists, who love looking backward and comparing data, say — there’s a large school of them who say, look, if this plays out over a decade or more, there’s a natural rate of adjustment in the labor force, and it may be fine.

And it may even be better than fine, because what we see is that productivity could lift all boats.

And so A.I. may deliver tremendous productivity.

There’s a cohort of much younger economists — and I point that out because the generational divide is really important here — who don’t think that their elders are wrong about the data.

They think they’re wrong about the tech.

So when you look back at things like electrification, which happened in the early 1900s, it took about 40 years to fully electrify America and to see the productivity in the data.

The difference is that A.I. rolls itself out.

And you’re dealing with software that is inherently smart, that makes machines very, very smart.

And so this younger cohort of economists says, you’re not seeing it in the data yet, but by the time you do, it will be too late to do anything about it, because the tech moves that quickly, and the effects will already be showing up in the labor force.

And so they’re advocating for making plans right now.

What happens if unemployment gets to 10 percent, 15 percent?

What does a society look like when labor is that challenged and when you consolidate wealth all within one cohort of people?

And so what I discovered is that the economists are really at war with each other about what’s about to happen.

It’s a very calm war.

It’s a very polite war.

So, in my next phase of journey, I went to CEOs, and largely the CEOs of Fortune 100 companies.

They too are a little bit divided.

Some have made tremendous investments in A.I. Others are a little late to the party.


Where they’re not divided is that they all said to me, look, Wall Street has watched us make these investments into A.I. for the last three or four years.

This is the year they are going to expect action.

And, by action, they mean money.

And if we don’t have gains to show, that does mean we’re going to make cuts.

And we’re going to make cuts that we may say are A.I.-related or not, but we are going to replace labor with automation.

And that was very definitive across a bunch of the largest CEOs that I spoke with.

I would point out that most of them did not want to speak on the record, and that that itself is telling about the state of people’s anxiety and ambivalence about A.I. in the economy.

GEOFF BENNETT: Yes.

Well, those younger economists who you say make the point that we need to make plans now, we know that’s not really happening in Washington.

But the CEOs you spoke with, when they talk about smart regulation, what do they have in mind?

JOSH TYRANGIEL: Well, I think, in fairness to those CEOs, they are not regulators, right?

And left to their own devices, their job is to maximize shareholder value, a phrase that they will repeat constantly.

And what they want ultimately is a fair system of regulation.

Now, they also want the right to say that they don’t want any regulation in public, while clamoring for it in private.

But they — for the first time in a long time of reporting, I sensed an eagerness among those CEOs for Washington to get involved, both in regulating the technology itself, but also to be driving contingency planning.

Because a lot of these folks, they like their work forces, but the moment a competitor slashes its work force and its stock price goes up, the CEO who doesn’t follow suit is the one whose job is next.

And so this is what regulation is for.

It’s to step in when the market can’t control itself.

And what the CEOs were telling me is, essentially, they see a scenario where the market will not be able to control itself.

Efficiency rules all when it comes to shareholder value.

So they look to Washington.

And when I went to Washington, what I discovered is that this is just not really something that is on the minds of most mainstream lawmakers.

GEOFF BENNETT: Is there a way that people can in some way A.I.-proof their job or incorporate artificial intelligence into their work so that it’s useful?

JOSH TYRANGIEL: Yes, look, I think that the A.I. industry has done a pretty lousy job of educating people about what it’s good for.

They have led with how much money they need to make it work.

They have led with job replacement.

They have led with the downsides of A.I.

And the truth is that it’s pretty easy-to-use software, and it will benefit anybody who starts to play with it and figure out, how can I incorporate this into my job?

It’s not terribly scary when you use it on your own.

It’s also not flawless, as I’m sure you know.

We have seen statistics that Google Gemini has a nine out of 10 success rate at summarizing articles.

Nine out of 10 is not actually that good.

And so there’s still flaws with a lot of LLMs, but it’s here.

It’s here to stay.

And the best thing people can do to sort of A.I.-proof themselves is to actually figure out, how do I integrate this to make my work faster, better?

What are the lines around it that I want to draw for myself, so that there are things I trust it to do and things I don’t trust it to do?

Everybody’s mileage is going to vary based on their personalities, based on their jobs.

You can’t sit this out waiting for regulation; you can’t duck and cover.

This is actually a really important technological moment, and I would encourage everybody to start thinking about themselves in relation to A.I.

GEOFF BENNETT: Josh Tyrangiel of “The Atlantic,” thanks again for your time.

We appreciate it.

JOSH TYRANGIEL: Thanks, Geoff.
