How many jobs will AI eliminate? Nobody really knows, and here’s why


March 7, 2025 at 5:30 AM
Over the last half-century, technological change didn’t eliminate work—it changed it. Will AI do the same? (Westend61—Getty Images)

The age of artificial intelligence has been full of predictions of mass technology-driven unemployment. A widely cited 2013 Oxford study by Carl Benedikt Frey and Michael Osborne posited that nearly half of U.S. employment at the time was “potentially automatable” over the next “decade or two.” A decade later, however, there were 17 million more jobs in the U.S.

The advance of generative AI has unsurprisingly breathed new life into such alarmist projections. The IMF recently declared that 40% of jobs are “exposed” globally; Goldman Sachs put 300 million jobs at risk of being “lost or degraded”; and the Pew Research Center estimated that 19% of U.S. workers have jobs in the “most exposed to AI” category.

Are we on the cusp of a global employment apocalypse? Anxieties about “technological unemployment,” as John Maynard Keynes dubbed it in 1930, go way back. In the 1960s these fears led the U.S. government to convene a Commission on Technology, Automation, and Economic Progress, chaired by the eminent economist Robert Solow. Contrary to much fearmongering at the dawn of the IT revolution, the Commission concluded that “[t]echnology eliminates jobs, but not work.” So far, the facts have corroborated that thesis: The U.S. economy had 2.7 times as many jobs in 2024 as it did in 1964—with higher labor force participation (62.6% vs. 58.7%), lower unemployment (4% vs. 5.2%), and three times more output per hour worked. Over the last half-century, technological change didn’t eliminate work—it changed it.

But will this also be the case in the new age of AI? Nobody knows for certain, and there are still too many unknowns to take forecasts of employment doom at face value. Dissecting today’s “employment exposure” studies helps reveal the true extent of those uncertainties in the age of (especially generative) AI: the pace, extent, and depth of business adoption; the effect of higher labor productivity on the demand for services; and the timing and geographic distribution of potential job losses.

The gulf between ‘exposure’ and actual displacement

Estimates of “employment exposure”—the prevailing euphemism for projections of technological unemployment—tend to follow the same logic. First, determine which tasks a given technology can automate; then identify the occupations that include those automatable tasks; finally, sum all jobs in occupations that exceed a predefined threshold of automatability. This reasoning appears plausible enough—until one notices that it entirely neglects the microeconomics of the firm as the crucial link between any technology’s potential and its actual economic impact.
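To make that accounting concrete, here is a minimal sketch of the three-step logic. The occupations, task lists, and 50% threshold below are hypothetical illustrations of our own, not figures from any actual study.

```python
# Toy sketch of the three-step "exposure" accounting described above.
# Occupations, task lists, and the 50% threshold are hypothetical
# illustrations, not figures from any actual study.

AUTOMATABLE_TASKS = {"summarize documents", "draft boilerplate", "classify images"}

occupations = {
    # occupation: (employment, tasks the job involves)
    "paralegal": (100_000, {"summarize documents", "draft boilerplate", "interview clients"}),
    "radiology tech": (50_000, {"classify images", "operate scanner", "comfort patients"}),
    "plumber": (80_000, {"diagnose leaks", "install fixtures"}),
}

THRESHOLD = 0.5  # share of automatable tasks needed to count a job as "exposed"

exposed = 0
for name, (employment, tasks) in occupations.items():
    automatable_share = len(tasks & AUTOMATABLE_TASKS) / len(tasks)
    if automatable_share >= THRESHOLD:
        exposed += employment

total = sum(employment for employment, _ in occupations.values())
print(f"'Exposed' employment: {exposed:,} of {total:,} ({exposed / total:.0%})")
# Note what never enters the calculation: adoption costs, wages, and
# competition -- the firm-level economics discussed below.
```

Notice what the procedure never asks: whether automating any of these tasks would make economic sense for an employer.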

Technology adoption is neither free nor frictionless: There must always be a business case for technological change. This fact reveals, at the outset, a gap between the levels of automation that are technologically possible and the degree of automation that is economically rational for firms to pursue.

In one of the most compelling empirical studies of this all-important gap, a group of MIT economists recently estimated that while 36% of U.S. private sector jobs were technically “exposed” to automation through computer vision (i.e., involved at least one task that could be so automated), it would only make economic sense for firms to pursue automation for 8% of all private sector jobs—just a quarter of those labeled “exposed.” Three quarters of the “exposure” estimate turn out to be illusory once firm-level decision-making is considered, revealing the flaw in overly simplistic “micro-to-macro” extrapolations.
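A heavily simplified version of that second filter can be sketched in the same toy terms: a job moves from “exposed” to “worth automating” only when the annualized cost of the automating system undercuts the wages it would replace. All figures below are invented for illustration and are not drawn from the MIT study.

```python
# Hypothetical illustration of the business-case filter: automation is
# "economically rational" only when its annualized per-worker cost is
# below the wage it would replace. All numbers are invented.

jobs = [
    # (occupation, workers, annual wage per worker, annualized automation cost per worker)
    ("warehouse inspector", 200_000, 45_000, 30_000),
    ("quality checker",     150_000, 40_000, 90_000),
    ("retail stocker",      300_000, 35_000, 120_000),
]

technically_exposed = sum(n for _, n, _, _ in jobs)  # all contain an automatable task
worth_automating = sum(n for _, n, wage, cost in jobs if cost < wage)

print(f"Technically exposed: {technically_exposed:,}")
print(f"Economically rational to automate: {worth_automating:,}")
# Only the first occupation clears the bar, so most of the "exposed"
# total evaporates once the business case is priced in.
```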

When firms evaluate the potential returns of technological adoption, they look closely at two interrelated factors: the cost of labor and the competitive environment. The more competition they face, and the more limited the access to qualified labor—due to cost or labor market tightness—the stronger the business case for tech investments. But these factors can vary dramatically from one country to the next. Failing to consider the firm often leads forecasters to apply identical extrapolations across national economies. Even in settings with comparable labor costs, however, a firm’s propensity to automate may be constrained by other rigidities specific to individual labor markets, like legal regimes that make job cuts difficult, as can be the case in Europe.

Putting the firm back at the center of analysis means that one should be cautious about interpreting evidence of individual workers’ generative AI adoption, as it doesn’t directly speak to the extent to which businesses have embarked upon the difficult task of reinventing themselves around the technology. “Automating processes with software is HARD,” in the words of Steven Sinofsky, former president of the Windows division at Microsoft. Widespread use of genAI by individual workers may have some positive productivity effects, but workers don’t create or eliminate their own jobs; their employers do. That’s why the prospects of lower employment need to be assessed against patterns of institutional adoption by employers.

The mystery of price elasticity of demand

Suppose it were true that frighteningly large numbers of jobs were at real risk of automation. In that scenario, we would expect considerable increases in labor productivity and hence lower costs, lower prices, and, as we have argued elsewhere, a boost to consumers’ real incomes. Yet contrary to the “job exposure” narratives, sectors with higher rates of automation won’t necessarily experience imminent declines in employment. In fact, they can come to employ more people for long periods even as they become less employment-intensive (i.e., employing fewer workers per unit of output).

Economist James Bessen studied this phenomenon—which he describes as the “inverted U pattern”—in U.S. manufacturing between the early 1800s and the 2010s. In areas such as textile, iron and steel, and motor vehicle production, automating technologies led to steep increases in labor productivity. But instead of shedding jobs, sectoral employment grew for decades—because higher productivity translated into price declines that boosted demand. Faced with considerably lower prices, consumers spent so much more on clothes and cars that, while requiring fewer workers on a per-unit basis, manufacturers actually employed more laborers in aggregate.
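A stylized simulation shows how such an inverted U can emerge from this mechanism. The parameters below are our own illustrative choices, not estimates from Bessen’s data: productivity compounds, competition passes the savings into prices, and demand elasticity decays as the market saturates.

```python
# Stylized "inverted U": employment in a sector where productivity
# compounds, prices track unit costs, and demand elasticity fades as
# the market saturates. All parameter values are illustrative.

PERIODS = 60
productivity = 1.0   # output per worker
price = 1.0
quantity = 1.0
employment = []

for t in range(PERIODS):
    productivity *= 1.05                 # steady 5% productivity gain
    new_price = 1.0 / productivity       # unit-cost savings passed into price
    elasticity = 0.2 + 2.8 * 0.93 ** t   # elastic early on, inelastic later
    quantity *= (new_price / price) ** -elasticity
    price = new_price
    employment.append(quantity / productivity)

peak = max(range(PERIODS), key=lambda t: employment[t])
print(f"Employment peaks in period {peak} at {employment[peak] / employment[0]:.1f}x "
      f"its starting level, then falls to {employment[-1] / employment[0]:.2f}x.")
```

While demand stays elastic, falling prices pull in more than enough new demand to offset the labor saved per unit; once demand saturates, the same productivity gains start shrinking employment.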

This pattern points to one of the most important and elusive questions concerning the labor effects of generative AI. We’ve argued before that the novelty of generative AI lies in its ability to increase white-collar workers’ productivity by automating tasks that are often non-routine and closer to the creative “core” of knowledge work. It should then be expected to reduce costs and prices across sizeable segments of the service sector. What we don’t know is how price-elastic demand for many of those services is, which makes it nearly impossible to predict the net employment effects. If the average cost of legal services, for example, were to decrease by a factor of 10 thanks to genAI-powered automation of legal research, summarization, and drafting, how much more demand for such services would there be as a result? Might this and similar sectors go through their own “inverted U pattern,” in which labor productivity increases but so does employment?
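Under the simplest possible assumptions, constant-elasticity demand and full pass-through of cost savings into prices (both our own illustrative choices), the answer reduces to a single parameter. Write a for the productivity multiple, q for initial output per worker, and ε for the price elasticity of demand:

```latex
% Stylized net-employment arithmetic (illustrative assumptions only):
% constant-elasticity demand, full pass-through of savings into prices.
\[
Q \propto P^{-\varepsilon}, \qquad
P_1 = \frac{P_0}{a}, \qquad
Q_1 = Q_0\,a^{\varepsilon}, \qquad
L_1 = \frac{Q_1}{a\,q} = L_0\,a^{\varepsilon - 1}.
\]
% Legal-services example with a = 10:
%   eps = 1.2  =>  L1 = 10^{0.2}  L0 ~ 1.6 L0   (employment grows)
%   eps = 0.8  =>  L1 = 10^{-0.2} L0 ~ 0.63 L0  (employment shrinks)
```

Employment rises if and only if demand is elastic (ε > 1), and that elasticity is precisely the quantity nobody has yet measured for genAI-affected services.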

It is understandable that most “job exposure” studies don’t even attempt to estimate how demand will respond to the assimilation of generative AI into numerous occupations. But that’s why it’s best to understand these studies for what they truly are: the theoretical upper limit of automation-driven labor productivity increases. These extrapolations are made without regard for whether, when, or how firms will realize that productivity potential, and with incomplete information on the interaction between changing prices and demand needed to assess net employment effects.

Not whether, but when and how

Of course, there is no denying that numerous occupations will shrink over time as AI adoption progresses. There are already signs of a decline in the hiring of software developers in the U.S., which many attribute to genAI’s coding proficiency. But, again, this is not a new economic phenomenon: Economies dynamically shed and create occupations all the time. The sort of “end state” picture that exposure studies paint does not tell us what matters most: where and at what pace the change in employment will occur. To borrow Robert Solow’s expression, what’s needed is a compass to navigate the “elusive macroeconomics of the medium run.”

Focusing on the dynamics of change is all the more necessary when the technology itself is a moving target. The sheer pace of AI development makes it nearly impossible to pin down how much its present state can or cannot automate—before that state of play is no longer relevant. That makes it even more urgent to understand the depth, not just the breadth, of business adoption. A 2024 U.S. Census Bureau business survey presented a sobering statistic: Only about 5% of all U.S. companies use AI for the production of goods and services. While the figures are considerably higher among larger corporations, we still don’t know how many have really moved beyond tests, pilots, and localized deployments to undertake end-to-end process redesign, let alone business model reinvention.

None of this is to say that workers and business leaders should be complacent. Businesses that reinvent themselves with AI will get ahead of competitors, and workers who understand the changing landscape of critical skills will be best placed to adapt as the technology continues to evolve. The focus on aggregate “employment exposure” is a distraction from the more pressing questions: which specific occupations face imminent disruption (and where, and how quickly), and how firms are adapting to the new technological potential within their reach.

***

Back to the lessons from recent history: We know that economies are remarkably creative over the long run, that new occupations have never ceased to emerge (because there is no fixed “lump of labor”), and that forecasters tend to get things wrong when attempting to prognosticate what work will look like on the other side of a major technological transformation. AI may be an entirely different ballgame, and perhaps this time around the forecasters of “exposure” will turn out to be right. But we remain unconvinced. There are simply too many critical “known unknowns.”

A final word for policymakers, who are often the primary audience of the “exposure” narratives. It’s been more than 30 years since Harvard economist Michael Porter argued that “[a] nation’s competitiveness depends on the capacity of its industry to innovate and upgrade.” There is no doubt that the AI “upgrade” will significantly shake up jobs. But if, like Solow, we believe technology eliminates jobs but not work, then policy should aim to protect people, not job descriptions.

Read other Fortune columns by François Candelon.

François Candelon is a partner at private equity firm Seven2 and the former global director of the BCG Henderson Institute.

David Zuluaga Martínez is a partner at Boston Consulting Group and an ambassador at the BCG Henderson Institute.

Etienne Cavin is a consultant at Boston Consulting Group and an ambassador at the BCG Henderson Institute.

This story was originally featured on Fortune.com