You might be suffering from AI brain fry: It’s Been a Minute – NPR



BRITTANY LUSE, HOST:

John, when was the last time you felt like your brain got fried?

JOHN HERRMAN: (Laughter) I’d say most mornings when we’re trying to get our two children to school, like getting, like, a full fry, like a pan fry on both sides.

LUSE: (Laughter) I could see that. I could see that. Personally, I feel like my brain gets fried constantly. And to be honest, I am doing it to myself by looking at Twitter. Sick stuff for real. I’m on there all the time. But there is some new research showing that some people are taking more psychic damage at work thanks to AI. In an article published in the Harvard Business Review, researchers at Boston Consulting Group and the University of California, Riverside coined the term AI brain fry to describe, quote, “mental fatigue that results from excessive use of, interaction with, and/or oversight of AI tools beyond one’s cognitive capacity.” In other words, doing too much with AI.

There’s something kind of comically tragic about the idea that these tools that were meant to lighten our loads seem to be doing the opposite for some. But beyond the psychic damage, there’s a lot in this brain fried idea that points to how we work with AI. Like, with all the managing it needs, is AI turning us all into bosses? And is this really the future of work? To get into all this, I’m joined by John Herrman, tech columnist for New York Magazine.

Welcome, John.

HERRMAN: Thanks for having me.

(SOUNDBITE OF TAPE RECORDER)

LUSE: Hello, hello. I’m Brittany Luse, and you’re listening to IT’S BEEN A MINUTE from NPR – a show about what’s going on in culture and why it doesn’t happen by accident.

(SOUNDBITE OF MUSIC)

LUSE: How would you describe AI brain fry?

HERRMAN: Yeah. I mean, the researchers, they describe this as basically hopping around between different tools and feeling overwhelmed. Not just by having to multitask – which is already a problem in a lot of jobs – but by dealing with a whole bunch of output. So if you have a programming tool that can kind of run in the background and starts adding features to software really quickly, you have another tool that’s constructing a report for you, it’s searching the web and pulling together, you know, a market research document. You have another tool in the background that you’re in a, like, constant chat with trying to refine some idea for a talk you have to give – you’re just kind of getting first pulled in all these different directions, and then you’re kind of spamming yourself. Like, you’re just producing…

(LAUGHTER)

HERRMAN: …All of this product. And it’s harder, you know, as you use more and more tools to keep track of, like, whether this output is actually relevant to your job, whether you’re doing anything that you need to be doing or whether you’re kind of creating new work for yourself. And so the researchers described, in this survey of nearly 1,500 people in different professions, this sensation of feeling kind of, as they say it, fried or having, like, a brain fog, feeling kind of mentally paralyzed by the amount of stuff that you have to keep track of and kind of check and monitor.

LUSE: I mean, it’s interesting. Like, what you’re describing, it reminds me of, like, when I used to work at a corporate job, and I worked mostly in corporate environments for a lot of my 20s. Like, you’re generating all this work content, and you’re sending it to your colleagues, and you’re not really sure what the point of it is, and you’re, like, constantly toggling back and forth between different things and checking the status. It reminds me of, like, all the things that I really hated about that kind of job.

HERRMAN: Yeah.

LUSE: I can’t imagine adding a bunch of, like, different kind of bots to it that I had to kind of check in with. Yeah, because I already found that to be so unpleasant (laughter) as it was because I was just like, what am I doing? Is anybody seeing this? What’s the point? Why is this happening?

HERRMAN: Yeah, that’s one thing that the researchers kind of get at, where there are people who don’t really experience this. Like, this isn’t something that everyone experiences when they get, like, a new, you know, AI tool at work. But people who do are, first, people who are using, like, a lot of tools at once. So this does really map onto, like, the experiences of a lot of programmers right now. Also, strangely enough, marketing professionals reported this a lot.

But they also found that it was much more prevalent in workplaces where, like, the roles were kind of poorly defined, where the, like, AI policies were vague and where managers were like, you know, you guys figure it out. And also, we sort of vaguely expect you to be more productive. And that leads people to just, like, overdo it and say, like, all right, well, I have this little thing I can ask to do stuff, and I’m just going to ask it to do a whole bunch of stuff. And it’s going to do a whole bunch of stuff and produce a whole bunch of, like, matter that looks like my work, and then I have to sort of check it.

I have to figure out what to do with it. I have to pull it together, not just into, like, the actual work that I need to get done but into my performance of being an employee, which is already, like, a source of stress in a lot of offices where most of your work is done on a computer. It’s like, how do I make it known that I am doing the job in the way that my managers want me to? Like, this isn’t just pure output in most cases. There’s, like, a social element. There’s a performance of hard work. There’s all this funny stuff that gets amplified and sort of made more severe when suddenly everyone in the leadership of your company is like, oh, well, we should expect our employees to do 10% more work.

LUSE: Yeah, yeah (laughter). I mean, I also want to note here, though, that, like, using some AI – and mostly for repetitive tasks – can reduce feelings of burnout, according to the same research that we’re discussing. But the problem is mostly when you use it, like you mentioned, to handle a lot of different, more complicated tasks. In your piece, you said a friend of yours who works at a startup and manages AI in this way describes it like this. Quote, “ultimately, all work boils down to a single question, did I do this well, or did I F it up?

And what AI assistants do is massively inflate the size of the this in the question, with a massive increase in the surface area of things one is responsible for having possibly F-ed up.” Like, to your point, if you’re at a company or in an environment where they’re like, oh, great, AI tools mean that our employees can do 4,000% more stuff – that perhaps has led a lot of managers or companies to expect a wider range of this, like, a broader – a bigger pot of this, whatever this is. There’s just more output that’s expected.

HERRMAN: Yeah. And with apologies to this friend, like, this is someone who is, like, pretty intense about work, dives all the way into new stuff, is working at an AI startup where, like, he’s writing software with AI to build AI tools. Like, this is the maximum possible exposure to this, and it does expose all these funny incentives that are, like, not so obvious until you make your whole work situation absurd.

And it’s, like, oh, well, I’m generating five times more code than I was before. Is your software better? Are you contributing to the project of your company in any way related to that number? Those become, like, much more urgent questions when you can just sort of become so prolific. Say you worked in a place where your job involved making presentations, sending emails, compiling reports – a sort of very generic, you know, office job at a generic white-collar workplace, like a life insurance company or something.

What does it mean that you can do all of that faster and more? Does it mean that you’re freeing yourself up for, quote-unquote, like, “more valuable” tasks or “more fulfilling” tasks, or does it just mean that you’re going to be expected to do the work of two people? Does it sort of mean that the scope of your job is bigger? Does it mean that the scope of…

LUSE: Yeah.

HERMAN: …Your job is smaller? Like, these are all unresolved questions. And what’s happening, and I think something that’s sort of, like, getting to people a little bit as they try to, like, keep up or not get left behind or whatever, is that their managers aren’t sure, they’re not sure, their companies aren’t sure. They’re just diving into new tools that don’t really have, like, useful norms around them, that don’t have, like, an abundance of, like, case studies and best practices and stuff like that.

It’s really kind of chaotic, and I think that’s just causing, like, beyond the scope of this paper, a sense of real, like, fatigue and anxiety in the workplace.

(SOUNDBITE OF MUSIC)

LUSE: We are going to take a quick break. But first, if any of you are finding IT’S BEEN A MINUTE for the first time, welcome. I hope you are enjoying the show and that you come back every Monday, Wednesday and Friday morning for brand-new episodes, and every Tuesday, a video episode. Tomorrow’s video episode is a pop culture syllabus – all the things you need to know to truly understand our current moment. You can find that video on Spotify or YouTube or just listen to the audio wherever you get your podcasts.

Coming up after the break…

HERRMAN: Everyone’s going a little nuts, and no one’s going more nuts than managers (laughter). And that, I think, is trickling down into, like, kind of a frantic, intense and kind of scattershot AI strategy.

LUSE: Stick around.

(SOUNDBITE OF MUSIC)

LUSE: I want to talk about something that I was really interested in, like, specifically interested in with your analysis. It’s the realization that delegating to AI is kind of simulating management. Like, AI is a poorly trained intern that you have to check the work of all the time. I wonder, like, is AI turning workers into bosses or at least simulated bosses? And if that’s the case, what makes it different from managing humans?

HERRMAN: One of the weird things about AI, as we’ve been talking about it for the last few years, is that it’s constantly sort of made into a character. It’s anthropomorphized or personified or however you want to describe it. But what that does is it also screws up conversations like this a little bit, or at least makes them, like, harder to untangle. So the shortcut to understanding this is like, yeah, managing a bunch of, quote-unquote, “agents,” you know, kind of a humanized term, is kind of like managing people in that you’re delegating a bunch of tasks in these different silos and then you’re checking the output.

Like, that does sound a little bit like management, but it is more of a simulation of management, and the longer you do it, the more you realize it’s definitely something else. You have the stress of delegation, of assigning tasks, of dividing these things up – you have that cognitive load. You may have more output, but it’s yours. You also have potentially a bunch more liability; possible mistakes or misjudgments – that all accrues back to you.

And I think if, you know, you, yourself, have a manager, the way they see what you’re doing is not that you’ve become a manager. They see that you should be able to produce more. You’re not actually getting promoted, I guess (laughter), is what I’m saying. And it’s hard to tell if the early sensation of feeling like a manager, which kind of feels like upskilling, which is, like, that’s what you want.

You know, like, you’re – I’m getting new skills, I’m getting new responsibilities. I’m sort of moving up in the economy as these changes happen. It’s hard to tell if that’s what’s happening or if, in fact, you’re being down-skilled or de-skilled and your job is being sort of made simpler. While it feels like you have all these amazing tools at your disposal, a lot of that is just automation. You’re kind of just monitoring stuff.

Now you have a smaller job. It’s a monitoring job. It’s stressful. You need to be vigilant all the time. You don’t get credit for the work, you get credit for the mistakes. Like, it’s a weird new thing. The joke about AI is that, you know, everything is AI until it works, and then it just becomes software. It’s just something you take for granted. Like, there are all these different things that we use in our computers, down to the most basic stuff, like the way they can do math or something – there was a time within living memory when the way that they did very basic stuff was like magic, and now it’s not.

And we’re in this, like, process now where a lot of stuff is getting there. There are a lot of things that are kind of amazing to watch, like, one of the newer models work on. But I think that in the context of a workplace, people in charge are quicker to take for granted that the stuff is done and possible and always works, and the people who are stuck kind of like managing it and doing it and trying to make it happen are the ones who are going to have to eat the gap.

LUSE: Oh. (Laughter) So much of this conversation feels like going into, like, the basement of sort of how psychologically work actually affects us and, like, pointing a flashlight down there and seeing what scatters. But I don’t know. There’s also, like, the chat of all of this, like, chat meaning Slack and Teams…

HERRMAN: (Laughter) Yeah.

LUSE: …And other apps that are instant messaging for work, like, as opposed to slower forms of communication, like email. And now, workers are speaking to AI and managing AI through similar chat mechanisms. What does the chat of all of this mean for workers?

HERRMAN: Yeah. So I’m reading this paper, and it’s interesting and intuitive and it all makes sense. Like, you have all these new windows, you’re switching between apps, all these things are moving down your screen really quickly. There’s all this output.

LUSE: Yeah.

HERRMAN: It’s, like, of course, you feel kind of fried. But it made me think of previous reporting I had done on how many offices switched from workflows built around meetings, emails, phone calls to chat – work apps like Slack or Microsoft Teams, or, you know, whatever other workplace suite you’re sort of stuck with in your computer job. And I worked at little, old media companies that were sort of run out of chatrooms early in my career. Later, I worked in big sort of news organizations as they adopted these chat tools. And the whole idea was like, OK, you’re always on. People always respond, it’s quicker, it’s more efficient. Obviously, it’s better than email because it’s just real time. It’s like sending text messages. But what happens if you aren’t deliberate about this is that it completely, like, blows up your workplace norms around when you’re working, when you’re not working.

LUSE: Yeah, that line gets so blurry.

HERRMAN: Yeah. And people lose, like, decades of hard-earned understanding and intuitions about what’s appropriate, how you talk to someone based on, like, the power dynamic at work. But you have all these norms.

LUSE: Yeah.

HERRMAN: It’s all sort of a little bit worked out. And of course, when email showed up, everyone had these same conversations. Slack shows up, chat shows up, and it’s just like, oh, everything feels urgent. Everything feels immediate. And we have to, like, renegotiate all these relationships again. And people who are not used to that – I remember when I worked at The New York Times, they, you know, were an email workplace. They transitioned to Slack. I was used to it. But I remember a lot of coworkers complaining like, the kids who are used to this are annoying. They’re chatting too much.

LUSE: (Laughter).

HERRMAN: They’re too visible. They’re, like, performing their work in these group chatrooms, and like, what is this? Now, that’s just kind of part of the deal. Ten years later, all these offices have sort of moved over to this real-time communication. It just created this ambient sense of, like, urgency. It’s just another stressful thing about your job that you might get a message at any time. And you then have to, like, think about how you respond to it. It’s just another boundary that sort of dissolved.

And so one way to think about this is, like, if you have all these new tools at work and you’re, like, in conversation with software about all the stuff you’re working on, that is now also talking to you, potentially at weird hours. And it sounds absurd. Like, you can ignore it. Whatever. It’s just another app. But that’s not really what I think people are experiencing. They’re having new challenges to their workplace boundaries.

It’s like, well, I’m done with work. But also, I could just, you know, send a quick prompt that might get something started for when I come back in the morning. Maybe I’ll leave some stuff running overnight. And it’s just sort of like another boundary that I would say most large companies aren’t very good at helping their employees kind of enforce. It’s not really in their interest, or at least they don’t see it that way.

LUSE: Well, yeah.

HERRMAN: And something that is sort of, like, left to individuals to navigate on their own.

LUSE: I don’t know. It just makes me wonder, like, is this just white-collar automation? Like, do we need to just get on board with this because it’s the way of the future, kind of like robots were for assembly lines? Or, I don’t know, is this something different?

HERRMAN: I don’t think the answer is just, like, getting on board no matter what. But there’s certainly a lot of, like, pressure in that direction. I think the thing that stands out to me now, and that doesn’t come up in this particular research but does sort of come through a little bit, is that in a lot of jobs, the arrival of software automation corresponds with more surveillance and higher expectations and more tracking and quantification and stuff like that.

And so I do think if you find yourself dramatically increasing in some metric the amount of work you do with AI tools, one thing you can expect is that your job is probably going to change – with such a big jump, with so many new tools, that a lot of companies are going to be emboldened to track their employees in ways that white-collar workers in many workplaces…

LUSE: Are probably not used to that.

HERRMAN: Yeah, they’re not used to it. They will be sort of startled by it, and I think, in some cases, will find themselves kind of understanding what it’s like to work in a job where surveillance is taken as a given.

LUSE: A lot of the work that people seem to want AI to do, whether that is, like, repetitive tasks or whatever sort of, like, AI chatbot sort of stuff they would like to work with – I don’t know. A lot of the work that they expect these AI tools to do is the kind of work that usually is or used to be done by entry-level employees, which makes me wonder, like, if that’s the case, like, how are tomorrow’s higher-ups going to get trained up?

HERRMAN: Yeah, I mean, I feel like there are two ways that you have to think about this that kind of bring you to different places. One is this high-level, like, big economic view, where you’re sort of asking, like, OK, if we have these new tools that can automate a bunch of knowledge work tasks, does that mean we have, in the long term, more jobs? And I think you can make a more comforting case at, like, the macroeconomic level that this is just, like, another transformation – that the economy will, through turbulent times, figure this out, that the economy may grow, that more productivity usually means more jobs and all that sort of stuff.

At a personal level, obviously there’s going to be, like, major interruptions. People will lose jobs, people have lost jobs, with AI cited as the reason. And if you think about the company you work at, or maybe the company that you manage, it might be a little more zero-sum (laughter) than these, like, economists talk about, right? Like, OK, if you work at some wonderful, productive, fast-growing company where everyone’s, you know, working really hard and contributing their individual skills and being compensated well for it, and there’s tons of demand for your product, well, you can probably make more of your product.

Your company can sort of regroup around these new capabilities and just keep growing, and there’s more growth and everything’s great. But lots of companies are kind of old, kind of stagnant. Maybe they haven’t been doing well. They just want to cut costs. So there are going to be a lot of people who are eager to cut costs and to justify that with AI, to try to make people who are left fill those gaps with AI. And I think that that’s one of the things that pushes people to the point of, you know, this brain fry phenomenon that we’re talking about.

A big part of it, a bigger part than a lot of people in the corporate world are willing to admit, is kind of narrative. Like, this is a story that everyone feels like they’re a part of. They don’t want to be left behind. They’re watching companies like Block cut a bunch of people and say, oh, we’re going to do this all with AI. They’re probably even, at this point, reading, you know, friends of friends on LinkedIn talking about not getting stuck in the permanent underclass. Like, everyone’s going a little nuts. And no one’s going more nuts than managers.

(LAUGHTER)

HERRMAN: And that, I think, is trickling down into, like, kind of a frantic, intense and kind of scattershot AI strategy across a lot of organizations. And the people who have to sort of deal with that are the most anxious of all in a different way. They’re the people who are worried about their jobs. They’re worried about making sure that they seem to be on board with this stuff. But they’re also worried about this stuff screwing up their livelihood. That’s certainly a recipe for, let’s say, mental distress at the workplace.

(SOUNDBITE OF MUSIC)

LUSE: Wow. John, I learned so much here. Thank you so much.

HERRMAN: Yeah, thanks for having me.

LUSE: That was John Herrman, tech columnist for New York Magazine. This episode of IT’S BEEN A MINUTE was produced by…

LIAM MCBAIN, BYLINE: Liam McBain.

LUSE: This episode was edited by…

NEENA PATHAK, BYLINE: Neena Pathak.

LUSE: Our supervising producer is…

BARTON GIRDWOOD, BYLINE: Barton Girdwood.

LUSE: Our VP of programming is…

YOLANDA SANGWENI, BYLINE: Yolanda Sangweni.

LUSE: All right, that’s all for this episode of IT’S BEEN A MINUTE from NPR. I’m Brittany Luse. Talk soon.

(SOUNDBITE OF MUSIC)

Copyright © 2026 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

Accuracy and availability of NPR transcripts may vary. Transcript text may be revised to correct errors or match updates to audio. Audio on npr.org may be edited after its original broadcast or publication. The authoritative record of NPR’s programming is the audio record.
