Feed Drop: How AI Will Change Your Job With MIT’s David Autor


Today’s episode is a bonus drop from our friends over at the MIT CSAIL Alliances podcast. We’ll be back in two weeks for Season 11 of Me, Myself, and AI.

David Autor, the Daniel (1972) and Gail Rubinfeld Professor and Margaret MacVicar Faculty Fellow in MIT’s Department of Economics, says that AI is “not like a calculator where you just punch in the numbers and get the right answer. It’s much harder to figure out how to be effective with it.” Offering unique insights into the future of work in an AI-powered world, Autor explains his biggest worries and the greatest upside scenarios, describes how he believes we should approach AI as a tool, and addresses how AI will affect jobs like nursing and the skilled trades.


CSAIL Alliances connects business and industry to the people and research of MIT’s Computer Science and Artificial Intelligence Laboratory. Each month, the CSAIL podcast features cutting-edge MIT and CSAIL experts discussing their current research, challenges, and successes, as well as the potential impact of emerging tech. Follow the podcast here.

Subscribe to Me, Myself, and AI on Apple Podcasts or Spotify.

Transcript

Sam Ransbotham: Which two jobs is AI most likely to automate away? On today’s episode, MIT economist David Autor addresses this question and speaks to a variety of AI research projects at MIT, Stanford, and beyond. Today’s episode is a bonus drop from our friends at CSAIL. CSAIL is the MIT Computer Science and AI Laboratory. Have a listen as Autor speaks with Kara Miller about how AI research is translating to real-world applications. We’ll be back in two weeks for our first episode of Season 11.

Kara Miller: Welcome to MIT’s Computer Science and Artificial Intelligence Lab’s Alliances podcast. I’m Kara Miller.

On today’s show, MIT economist David Autor, who has helped us understand how past technological transitions affected jobs, gives us an inside look at the latest research on AI and employment. And while AI can effectively scale in some situations like customer service, in others, it’s going to have trouble.

David Autor: We can have a Taylor Swift of popular entertainment who can entertain 2 billion people, but we can’t have a Taylor Swift of oncology or of medicine in general who can take care of 2 billion people. There’s going to be a lot of people involved. Even if we made the best doctor 10,000 times better, that doctor would still have very finite capacity.

Kara Miller: But what if AI makes your expertise less important and kind of elbows you aside? Autor says it doesn’t necessarily mean you’ll be out of a job, but. …

David Autor: Generally when people are displaced from expert work, they’re going to tend to move downward. If you’re doing the most well-paid thing, most expert thing you can do, in all likelihood that’s your job. That’s why you do that job.

Kara Miller: That’s all coming up in just a minute.

If you want to know how AI is going to change the labor market, how it’s going to change your job, your neighbor’s job, and when exactly this is all going to happen, there’s probably no one better to ask than David Autor.

David Autor: I think a lot of people are sort of waiting. Everyone else believes that it already changed everything, but it hasn’t changed it for them.

Kara Miller: Autor has served as a sort of national voice on the consequences of automation and outsourcing, and he’s particularly proud of being cited by John Oliver a few years back on HBO’s Last Week Tonight.

John Oliver: When automation does lead to job loss in certain sectors, historically, it’s also actually created jobs, as this economist from MIT explains.

David Autor: Let’s do the following thought exercise. It’s the year 1900, and 40% of all employment is in agriculture, right? And so some twerpy economist from MIT teleports back in time and says, “A hundred years from now, only 2% of people will be working in agriculture. What do you think the other 38% of people are going to do?”

John Oliver: Well, I wouldn’t know.

David Autor: We say, “Oh, search engine optimization, health and wellness, software, and mobile devices.” Most of what we do barely existed.

John Oliver: Exactly. That twerpy economist is right.

Kara Miller: So here’s the deal when it comes to AI: Autor says AI does not have to revolutionize everything to radically change the economy.

David Autor: Productivity in the United States, a healthy developed country, grows 1.5% to 2% a year. If we could raise that to 3% a year, that would be incredible. Over the course of decades, that comes to an enormous amount of economic growth. So it doesn’t have to be revolutionary in every domain. If we could get one more point of economic growth annually … it would solve so many problems. It would solve problems with the national debt. It would solve problems with the aging population. It would solve lots and lots of problems.
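To put that compounding in concrete terms, here is a minimal sketch in Python; the 30-year horizon and the normalized starting level are illustrative assumptions, not figures from the conversation.

```python
# Illustrative compounding: compare cumulative growth at 2% vs. 3% a year.
# The horizon (30 years) and starting level (1.0) are assumptions for this example.

def compound(rate: float, years: int, start: float = 1.0) -> float:
    """Return the level after compounding `rate` annually for `years` years."""
    return start * (1.0 + rate) ** years

years = 30
baseline = compound(0.02, years)  # ~1.81x: 2% growth for 30 years
faster = compound(0.03, years)    # ~2.43x: 3% growth for 30 years

print(f"2% for {years} years: {baseline:.2f}x")
print(f"3% for {years} years: {faster:.2f}x")
print(f"Extra output from one more point: {faster / baseline - 1:.0%}")
```

Over 30 years, that single extra point yields roughly a third more total output, which is the scale of gain Autor has in mind.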

Kara Miller: But which jobs will it upend most? Autor notes that if you survey people and ask them what job is most impervious to AI, they’re pretty clear on the answer.

David Autor: They will always answer in two parts. The first job they will say is nurse, and then the next job will be whatever it is they already do. And many corporations are just sort of throwing it at their workers and [saying], “Here, use this for something,” and they don’t know what it’s for. The only people who are making just vast wild claims about what it’s already accomplished are the people who are selling it.

Kara Miller: Which is not to say that Autor thinks AI won’t change our jobs. We’ll get to specifics in a couple of minutes. But zeroing in on the professions, the industries that will be most affected by disruptive technology, that may take a little time, as it always has.

David Autor: Social scientists and, I would say, engineers as well are still in the same phase as everybody else in discovering, well, what is this good for and what does it do? And that process can take a very long time. It took a long time to figure out what electrification was for. It took a long time to figure out what we should do with lasers or GPS, and it took a while to figure out what were good uses of smartphones and the internet and so on. So if there was no progress in AI for the next two decades, except hopefully greater power efficiency, we would still have two decades of discovering to do, of saying, “Well, here’s where we could use it well,” or “Here’s a great application,” or “Here’s where we should not use it,” or “Here’s a new idea of something we could do with it that we’re not already doing.”

Kara Miller: So maybe it makes total sense that people feel like AI has been moving kind of slowly, at least in their own careers. Both employers and employees are trying to figure out how it can be most helpful. And then there’s this: In lots of jobs, the stakes are high and AI doesn’t get things right a hundred percent of the time. So whether you’re in medicine or banking or manufacturing, the rush to AI, it’s going to be tempered with some caution. Plus, and this is crucial to remember, every day we are running experiments on workers and seeing how they turn out.

David Autor: We’re very much in the “poking a stick at it and figuring out what happens” phase of this technology. AI is not a better, cheaper, faster version of any other technology. It’s actually not very good at many of the things that traditional computer software is good for. It’s not actually good with facts and numbers, for example, which seems ironic for cutting-edge software. So it’s hard to predict, and we should understand that we are still going to discover what it’s useful for and what it’s not useful for.

Kara Miller: The key, Autor says, is not to think of this as an exercise in predicting the future because that makes no sense.

David Autor: We shouldn’t treat the future as just a forecasting exercise, like it’s somehow knowable if we could just look clearly. It’s something that we are creating. It’s a design exercise, and so what the future will look like and how work will change partly depends on what choices we make. We can use AI, for example, to build an incredibly powerful surveillance state that censors content in real time and monitors political speech, and figures out where we are in any moment. And that would be one future of work and many other things, but that wouldn’t be AI’s choice. That would be our choice about what we want to use AI for. So there are a lot of things we can do with it, and we should recognize that we have agency. And so the future is not knowable in part because we don’t know what we’re going to do.

Kara Miller: But, future aside, as I said, there is data coming in all the time on the jobs that will be changed and are being changed by AI. One stunning recent example comes from a large American company that works on materials science. To develop new materials for all sorts of real-life uses, it employs people with expertise in chemistry, physics, and engineering. Aidan Toner-Rodgers, a student of David Autor’s, recently unveiled his work on this company. Here’s Autor describing the research.

David Autor: Materials science is the part of engineering that essentially brings you new glues and new ultra-high-strength metals and things that can tolerate different temperatures and so on. So it’s really important to the way all products are engineered, to be able to create these specific properties. And they have over a thousand Ph.D.s, and they did a rollout of a generative AI product (not a large language model [LLM]) that basically does what’s called inverse materials discovery. You tell it the property of the material that you want, and it predicts a material design that would have those properties.
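To make the inverse-design pattern concrete, here is a minimal sketch of the propose-rank-screen loop Autor describes. Everything below is hypothetical (the company’s actual system is not described here), so treat the names, properties, and values as placeholders for the general shape of such a tool.

```python
# Hypothetical sketch of inverse materials discovery: a generative model
# proposes candidate designs for a target property spec, a predictive model
# ranks them, and human experts screen the shortlist. All names and values
# are illustrative placeholders, not the system from the study.

from dataclasses import dataclass

@dataclass
class TargetSpec:
    tensile_strength_mpa: float  # desired strength
    max_service_temp_c: float    # desired temperature tolerance

def propose_candidates(spec: TargetSpec, n: int) -> list[str]:
    """Stand-in for the generative model: return n candidate designs."""
    return [f"candidate-{i}" for i in range(n)]

def predicted_fit(design: str, spec: TargetSpec) -> float:
    """Stand-in for a property-prediction model scoring a design in [0, 1]."""
    return 0.5  # placeholder; a real model would score against the spec

def expert_screen(design: str) -> bool:
    """The human-judgment step: keep promising designs, drop implausible ones."""
    return True  # placeholder for the scientist's call

spec = TargetSpec(tensile_strength_mpa=900.0, max_service_temp_c=450.0)
ranked = sorted(propose_candidates(spec, 100),
                key=lambda d: predicted_fit(d, spec), reverse=True)
shortlist = [d for d in ranked[:20] if expert_screen(d)]
# The shortlist would then go on to synthesis and lab testing.
```

The sketch locates where judgment enters: proposing and ranking candidates is cheap for the machine, but the expert-screening step is, per the study Autor describes next, what separated the most productive scientists from those choosing effectively at random.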

Kara Miller: OK, so you’ve got all these scientists, they are used to coming up with new materials without AI, and all of a sudden they’ve got this new tool. What happens? Well, a lot.

David Autor: It increased the discovery rate, the rate of new patent applications, and the rate of commercialization by 20% to 40%. So it was a big success, and the materials, according to their metrics, were closer to design specifications than previously and also more novel. So that’s a big deal.

Kara Miller: But the AI was not just successful in pumping more novel materials out of the R&D lab. The other part of the headline finding from this paper was just who the AI helped most.

David Autor: For the scientists who were very productive initially, it was very helpful, and they became much more productive. The reason, interestingly, is that the AI does a lot of proposing ideas, and it’s the job of the scientists to dispose of the [bad] ones and just keep the good ones. And scientists who had good judgment and expertise were able to do that very effectively. But many other scientists, who didn’t have that kind of instinct or judgment, or maybe that training or practice, were basically choosing at random from the ideas that the AI spit out. It didn’t make them better, and it made their jobs less pleasant as well. So there’s a kind of complementarity between human judgment and expertise in this setting and machine judgment or machine expertise.

And I think looking for that complementarity is going to be quite critical in figuring out how to use AI well. And often when it goes wrong, it’s because AI is producing legal briefs or academic documents or writing where the person allegedly producing it doesn’t have the expertise to evaluate what’s good, what’s reliable, and what’s hallucination, or what is just bad reasoning or a bad argument.

Kara Miller: You’ve done a lot of work on inequality, and what’s interesting about this finding is that this is not a case of AI allowing me to create a website even though I don’t know how to code. That’s OK; I don’t have to have as much expertise, because AI can just use English and do the coding. This is really a situation where the best, the smartest people pull even further away from everyone else. This just amplifies expertise in the way that we’ve seen in other places too. Did that surprise you?

David Autor: Yes and no. First of all, I agree with your take that it really, in this setting, seemed to benefit the superstars and not others. So why [do] I say it does and doesn’t surprise me? The part that doesn’t is — I feel like this is actually a consistent subtext of many studies on AI — where AI is useful to people, it’s where they have the right judgment to work with it, and it can support judgment, but it can’t replace it. So it can help you to do something that you don’t have a lot of expertise in, but if you don’t know how to evaluate the quality of the output, you’re in trouble. If you’re not a lawyer and you use it to write a legal brief, chances are it will have errors. If you use it to support your mathematical explorations, it may be helpful to you, but it may get the wrong answer too.

This is a setting where the right scientists were able to make the right judgments and others were not. We need to be looking for those use cases. For example, it can allow professional copywriters to write copy more quickly. That doesn’t mean a middle schooler could then do copywriting just using the same tool. Or, for example, [in] the case of customer support, we’ve seen that it helps people to become [an] expert more rapidly because it essentially models what a good reply looks like and so on. But I do think it’s the right question to ask: Is this all going to be about superstars, or is this going to be democratizing?

I think it’s going to be some of both. I mean, there are certain domains. You can imagine material scientists. They say, “Well, maybe we only need the top quarter of scientists. They’re going to be so much more productive.”

Kara Miller: Right. Yeah, exactly.

David Autor: You could also imagine that they would say, “Well, actually, we’re going to design a lot more materials.” Or you could imagine they might say, “Hey, we could figure out how to get the rest of these scientists good at this as well. We just hadn’t given any thought to how you train them.” But in some fields, this is not an option. So we can have a Taylor Swift of popular entertainment who can entertain 2 billion people, but we can’t have a Taylor Swift of oncology or of medicine in general who can take care of 2 billion people. There’s going to be a lot of people involved. Even if we made the best doctor 10,000 times better, that doctor would still have very finite capacity. And so we really want to make everyone better, even if we make some people much better. So I think there are many settings where it’s not going to be an option to just replace quantity with quality.

Kara Miller: Do you worry though? … We’ve seen a trend in the last few decades of the best computer programmers, the best scientists, the best people in finance really pulling away in a compensation type of way. And I wonder if … as you say, it’s not going to probably happen with kindergarten teachers, but if in certain professions that are substantial professions, if you worry [that] yeah, they can just get away with a lot fewer people and just really pay those people a lot of money.

David Autor: Yeah, I do have this concern for sure. Now, it’s important to recognize, actually, that the growth of inequality has actually slowed, and in fact, in the United States, earnings inequality — not overall inequality — actually has come down substantially since the pandemic, and the return to education — the college-high school gap — basically plateaued about 20 years ago. It’s still very high, but it’s not rising. I guess my concern is that there’s some of the superstar stuff, as you mentioned, that just concentrates rewards on the top person in finance and the top person in software and the top person in chip design and so on.

I worry more about people who are [in] middle management or personnel or … the person who writes the letters of reference or the letters of reprimand or the improvement action plans and so on. So I do worry about displacement of middle management. I also worry about customer support. If I were a professional translator for a living, I would be very concerned, not because I don’t think there will be any more language translators, but because most translation will be done by machines, and only on the rare, expensive occasions where the stakes are very high will we bring in expert consulting translators, who will work with the AI as well.

On the other hand, there’s another case to be made that these tools will actually enable more people to do some of these high-judgment, high-stakes tasks to deliver more medical services or basic legal services or to develop software or to do design or do a better job in skilled repair and construction.

You could imagine that it could create more competition for the top and allow more entry in the middle. That would be the good scenario. That’s something not just to hope for but to attempt to design for. For example, in medicine, I think that would be the best-case scenario, where you have people of different expertise levels all up and down the line. You have doctors, you have nurse practitioners, you have nurses, you have technicians. And to allow the folks who are not at the very top of the pyramid to have better tools and expand their scope of practice to do it more effectively, that would be good for them. It would be good for patients and would be good for the overall delivery of health care.

And then, of course, if we are effective at raising productivity with AI, that creates a lot of potential benefits. That creates a tailwind that tends to raise living standards across the board. So I don’t know how equalizing it’s going to be. I mean, every technological wave is different. Sometimes people say, “This time will be different.” They’re always different. I would be surprised if this one looks exactly like the last one, if it just continued trends. I can’t say it won’t happen, but that wouldn’t be my prediction.

Kara Miller: One final piece about this new research that just caught my eye: You have this AI program, and it’s really helping at least the people at the top to be more productive and find these new materials. But one fascinating thing is that most of the scientists appeared to like their jobs less after they adopted this AI program: 82% saw a decline in well-being, and it sounded like basically people were like, “Gee, I got into this line of work thinking that the job was X, and now you’re saying I’m supposed to sit around and judge AI’s creativity. I’m not as happy about this.” I just wonder what you make of that piece of it.

David Autor: I think it’s what people call the boring apocalypse, which is we’re just inundated with AI garbage. I write a paragraph. I hand it to ChatGPT and say, “Turn this into a 150-page report.” Then it does that and then I email it to you, Kara, and say, “Here’s my 150-page report.” Then you feed that to ChatGPT and say, “Turn this into a paragraph.” I think it’s not a good world in which the job of humans is basically to just keep an eye on the machinery and make sure it doesn’t make a mistake. It’s very hard for people to do that well. It’s hard for people to attend to that type of problem. This is why the Tesla self-driving cars are potentially dangerous.

It’s not because they don’t have pretty good software; it’s just very hard for people to pay attention if the machine is doing much of the work. If you gave lawyers the job of reading machine-generated legal briefs and looking for errors, I bet they would fail to catch many errors that they would never themselves have made. And that’s what’s happening with these materials scientists. They previously spent about a third of their time on generating new ideas, and that fell to a sixth of their time. And they spent much more time evaluating AI’s ideas and deciding whether or not they would work. And I think that is a real worry. So we need to design systems in a way that retains engagement. When that doesn’t happen, it can have really quite tragic consequences.

There’s at least one documented, very terrible major airline crash that was caused essentially by the autopilot acting correctly: It was designed to shut itself off when it couldn’t determine the velocity of the aircraft, and it did. The pilots at that point had essentially lost the instinct for high-altitude, unassisted flight. And so they couldn’t do it. And the end result was tragic. There was nothing wrong with either the pilots or the autopilot or the aircraft fundamentally, but the interaction between machines doing too much of the job and people losing the touch for it was really quite problematic. So we need a world in which people are engaged.

Kara Miller: Let me actually ask you about another interesting piece of research. So we’re kind of moving from this very high level, like people with Ph.D.s doing science, to a job that’s becoming, I think, only more common, which is working in a nursing home. And there’s a recent paper that looks at robot adoption in Japan, a place that has (A) embraced robots and (B) has an aging population, sort of this upside-down pyramid [with] not that many young people, a lot of old people. I was fascinated by this … look at how robotics goes over in nursing homes. It goes over pretty well. I mean, it really seemed to help things in the nursing homes.

David Autor: I think this is a positive example. And why is that? Well, working in a nursing home, there’s a lot of good caretaking that goes on, but there are two very difficult parts of that work. One is it’s actually physically demanding and dangerous, and robots can help with lifting people in and out of showers, in and out of beds, and so on. And that’s really important. That’s how people get hurt in a care setting — [by] actually lifting patients. Another thing that machines are very good at, that people are not good at, is paying constant attention. So a machine that can monitor and let you know when something is going wrong is a very useful machine to have. You cannot have a person sitting by a patient’s bedside 24/7, but you certainly can have monitors for heart and for oxygen and so on. Medical settings are ones where [care workers] are overloaded with demands for their attention.

So having machines that can support some of the difficult tasks and some of the attention-demanding tasks, and allow people to focus where expertise is needed or where empathy is needed, that’s a very good setting. I am extremely optimistic about the potential for using AI well in health care. I say the potential, [but] I’m not as optimistic that we will use the potential as well as we could use the potential, but I think there’s enormous potential there.

Kara Miller: It seemed like in the nursing homes where robotics was used, people stayed longer, like the actual workers stayed longer. They did not tend to get laid off. They were just there, but they were sort of moved to other things, things that apparently made them happier because they didn’t walk out the door. They didn’t quit as often.

David Autor: It seems like it also improved patient care in that same setting. I think this sometimes comes as a surprise. So you and I, we were just speaking about the boring apocalypse, where AI makes jobs sort of unbearable because they’re so dull. But there’s another study, which I know you’re familiar with, by our colleague Danielle Li [David Sarnoff Professor of Management of Technology at MIT] and Erik Brynjolfsson [Jerry Yang and Akiko Yamazaki Professor and senior fellow at Stanford University] and Lindsey Raymond, who was a Ph.D. student here recently and is now at Microsoft Research. They looked at AI being used for customer support in a high-stakes setting with an enterprise software product. The AI didn’t support the customers; it supported the workers supporting the customers by suggesting responses in chats. One of the amazing things was, in addition to accelerating the rate at which workers converge toward expertise, it reduced the quit rate among workers substantially.

You might say, “Well, why is that?” This is a very high burnout job because there’s just a ton of emotional labor. It’s like a road rage phenomenon. People can be rather hostile, they’re free, they assume you’re an idiot, and they want their problem fixed. And so the chatbot actually surprisingly does a fair amount of the emotional labor. It says, “Oh, I know exactly how you feel. I’ve been there.” When of course it hasn’t. And so their sentiment analysis of the chats found that the level of hostility from customers to agents, and from agents to customers, actually both declined once this software was put in place. So it actually filtered out some of the tedium but also some of the difficulty of dealing with other people, and made the job more tolerable.

Kara Miller: I think one of the things that surprised me about the nursing home study was how effective physical technology was. This was not word processing; this was not an LLM. That’s not what this was. And I actually remember talking to Erik Brynjolfsson, who’s now at Stanford, and [MIT Sloan’s] Andrew McAfee. It must’ve been a dozen years ago. And they were just talking about, for example, how hard a job it is to be a waitress, both as a sort of physical and cognitive job. You have to figure out when are people done, when are they not done. You have to get between tables that are too close together and that sort of thing. And I wonder the degree to which you feel like physical jobs — a lot of jobs in this country are physical jobs, whether you’re talking HVAC, construction, plumbing … Do you feel like technology is about to disrupt those jobs, or are we still a long way away from, you know …

David Autor: I think we’re closer than we used to be, but we’re still pretty far away. So these robots in nursing homes, these were not basically robotic nurses walking around saying, “Can I help you? Would you like me to lift you out of bed? Would you like a coffee?” They’re very specialized machines that lift a person out of the bed or help them with toileting or help them with mobility or even communication. And certainly with monitoring, they’re not really robots in the sense that they don’t move around.

The physical world has always been more demanding for automation and computerization than the cognitive world because the physical world is so much more complicated than just symbolic reasoning. And so it’s always been — this is what’s called Moravec’s paradox — easier to have a computer that can play a genius game of chess than to have one that can take out the garbage. The other thing about the physical world is there’s really no room for confabulation or mistakes. You can’t approximately empty a dishwasher or sort of care for a child or maybe-or-maybe-not burn dinner.

You have to have very, very high levels of accuracy and confidence in a very uncertain environment. So there’s tremendous progress in robotics, and it’s coming, and large language models have sped this up. Surprisingly, they are applicable not just to language but also to software, to translation, to sight, and to physical motion. I asked a roboticist not very long ago at one of these frontier companies. I said, “Well, how many years before I can just get on the phone and call a robotic plumber who will come to my old house and take out the water heater and put in a new one?”

And he kind of turned white. He was like, “That’s not happening. Not in my lifetime.” Now, maybe it’ll happen in his lifetime. I don’t want to say that. But I do think that requires not just dexterity but all kinds of reasoning and judgments. It’s not at all a specialized activity. And that’s part of what makes humans so effective. We are so effective across many domains and so adaptable. We’re not especially good at anything, but we are able to do many, many things. And then we develop tools to make us really good at the things that we can’t do manually.

Kara Miller: Let me talk about an AI or just general tech gap that you pointed out to me before this conversation. [It’s] one that I have not heard talked much about, and that’s the gender gap. Research pretty consistently seems to show that men use AI more than women. That’s true at both older ages and younger ages if you look at kids in school. So in a class, if the teacher says, “There’s no AI in this class,” girls basically listen. Boys don’t. What are the consequences of that gap, do you think?

David Autor: It’s too early to know, but this is a robust finding across a number of countries that men use AI more than women. It’s also the case, by the way, that AI is used more and people are more positive about it in lower-income countries than in rich, English-speaking countries.

Kara Miller: OK. Interesting.

David Autor: That is a great finding of this Norwegian study: Part of the gender gap in usage is that when students are told not to use AI, women don’t, and men continue to. Is that problematic? In some sense, it is the case that — I’m going to sound very old-fashioned here — boys are drawn to toys. And so whenever there’s a novel gadget, they will tend to use it whether it’s useful or not. So in the era before computing, men were very into stereos. Who had the biggest audiophile system? They always claimed it was because they just loved music, but then they stopped buying that stuff once they started buying computers. So it’s not so obvious what that was all about, but in the long run, it is going to be very important for people to be facile with this tool. The more you use AI, the more you realize you just have to develop a knack for knowing when it can be helpful to you and when it’s not, and then to make it helpful to you.

So I do think it is not a favorable development that AI is currently used more by men than women. I think when schools start to teach effectively using AI, they should make sure that they give assignments that require everyone to do it. I mean, I know many teachers are afraid of AI, but ultimately it’s a tool we’re all going to have to use. It’s not like a calculator where you just punch in the numbers and get the right answer. It’s much harder to figure out how to be effective with it. And so we do want people to take advantage.

Kara Miller: It also is the case that employers say they want people who use AI. So when top female candidates aren’t using it that much, that matters. It’s also interesting to me, it echoes a little bit like in the 1980s [when] about a third of people getting computer science degrees were women. That plummeted as computer science became a more lucrative profession. It plummeted to about 18%, 20%. I wonder if there’s a little bit of repeat of history going on there.

David Autor: Everything you just said is absolutely correct, and I fully agree, but in the early 1980s, when office computing became very prevalent, women actually used computers more than men because they were doing the word processing and a lot of the clerical, administrative, and information-processing tasks in organizations. We didn’t necessarily think that was a positive development, although it’s not clearly a negative one. But it is the case that women were at the frontier of computer science. They were some of the early famous computer scientists and also some of the people who did a lot of the programming at Los Alamos, as the Hidden Figures movie will tell you.

Kara Miller: Grace Hopper.

David Autor: Exactly. There even used to be a job called a computer. A computer was someone who did computations, and many of those computers were women. So it’s not a proud history, the way this has turned out, and I hope AI will not be the same. Now, there are famous pioneers in AI who are women (Fei-Fei Li of Stanford is just one of many, many examples), but the early history of computing also had that.

Kara Miller: Finally, I wonder when you think about AI — but you can also loop in robotics as we’ve talked about — what worries you the most here, and what gives you the most hope as you look out at the landscape of the research that’s coming in?

David Autor: Sure. So let me say, I’m going to answer as a labor economist rather [than] tell you I’m worried about biological weapons and AI weaponization and so on; my worries there are no more interesting than anyone else’s. I would say what’s most worrisome is the potential for rapid displacement of human expertise. So expertise is the know-how to do some valuable task — coding an app or baking a loaf of bread or diagnosing a patient or replacing a hardwood floor. And sometimes the expertise can go from being very scarce, and therefore valuable, to being too cheap to meter because all of a sudden machines can do it. This is what could happen with some language translation.

Knowledge of how to navigate roads and streets used to be valuable, and now, of course, that information is available from your smartphone. And so, I worry not about us running out of jobs — this is not a concern I have — but certainly, people being displaced from expert work into nonexpert work, work that doesn’t require training or specialization. They say, “Well, what’s wrong with nonexpert work?” And there’s nothing wrong with nonexpert work, but it doesn’t pay well because so many people can do it. Generally, when people are displaced from expert work, they’re going to tend to move downward. If you’re doing the most well-paid thing, most expert thing you can do, in all likelihood that’s your job. That’s why you do that job. So when people are displaced from factory work, they end up in lower-paid services.

When people who used to work as typesetters were displaced, they mostly didn’t become software engineers. They became something else that was probably less lucrative. So this is my biggest worry: the displacement and devaluation of expertise. I think that the greatest upside scenario is one where AI actually extends the relevance and reach of expertise, allows people with the right tools to go further with the knowledge that they have, and develop additional knowledge to do better.

I like to talk about the example of nurse practitioners, not an AI example per se. Nurse practitioners are registered nurses who have an additional master’s degree and additional practical training, and they can do things — they can diagnose, they can treat, and they can prescribe things that previously had been relegated to the realm of MDs exclusively: people with five or more years of education. And this is a social phenomenon, and a very positive one, led by women — women nurses who started fighting back in the 1960s for a broader scope of practice.

But at this point, they are strongly augmented by a bunch of technologies, both electronic medical records and diagnostic tools, and even software that looks for prescription drug interactions, and so on. And you can imagine a future where they have better tools; they could have a broader scope of practice and diagnose a larger set of diseases, recommend more treatments, and give more care. But you could imagine, similarly, more people being able to enter software development, more people being able to do some legal services, to do kitchen design, to do skilled repair more effectively. And so the very good scenario is that we would use AI to allow more people who are not at the frontier of education. Only 40% of U.S. workers have a four-year college degree. That’s a large number, but it’s not even close to a majority.

Allow those workers to do more valuable expert work. That’s where I think AI can potentially be a tool that allows people to level up or to do things that would be out of reach without these decision-making supports. That’s what I would hope to see more of. And let me be clear: We’re going to see all kinds of things. So there will not be one general case. There will be a heterogeneity of cases — some cases where it’ll just totally displace experts, some places where it will just make a few people superstars, and other places where it’ll allow more people to do good-quality work in domains that need lots of people. I hope we see a lot of that third case.

Kara Miller: David Autor is professor of economics at MIT. Thank you so much. Such a fascinating discussion.

David Autor: Thank you, Kara. It’s always a pleasure to speak with you.

Kara Miller: Before we go here, if you want to know more about the show or if you want to check out upcoming online courses from CSAIL, head to csail.mit.edu/podcast. And listeners to the show get a 10% discount on courses. Again, the website: csail.mit.edu/podcast. I’m Kara Miller. Our show is produced by Matt Purdy, with help from Andrew Zukowski and Audrey Woods. Join us again next time, and stay ahead of the curve.
