At Charter, we’ve focused on understanding the implications of AI for the value of expertise, junior workers, and the shape of organizations. A new analysis by the Burning Glass Institute and the Harvard Business School Project on Managing the Future of Work covers all three. It estimates that AI could affect nearly 50 million US jobs in the coming years in one of two ways. In some occupations, entry-level pathways will narrow; in others, barriers to entry will decrease, making those roles accessible to a broader range of workers.
We spoke with study co-author Joseph Fuller, a professor of management practice at Harvard Business School and co-founder of the consulting firm Monitor Group (now Monitor Deloitte), about how AI will reshape career ladders and expertise. Here’s that conversation, edited for length and clarity:
Your new research focuses on the impact of generative AI on two different sets of jobs—one where AI will open new doors for workers and one where it will close doors for workers. Can you explain those two different categories?
The jobs that will be closed off are ones in which a lot of the tasks for that entry-level worker are highly susceptible to being displaced, largely or entirely, by AI. With that work now undertaken by technology, it's just like a hitter getting fewer at-bats. There will simply be fewer of those positions available. It will be harder to get on the bottom of the escalator.
Similarly, the ability to do certain tasks, often complex cognitive tasks, is a big barrier to people getting a job. So if universally effective technological tools suddenly did those tasks, the ones requiring skills that are hard to master or that demand esoteric advanced education, then the basis for selection for those jobs becomes less exacting. More people will be, at least prima facie, qualified, and candidates might be selected more on the skills that have now moved up the hierarchy, because the ones directly above them have just been displaced by technology.
Let me use an illustration. Let's say I'm hiring a contract lawyer. Now I've got this AI system that has a definitive understanding of contract law in your jurisdiction. It knows all your contracting rules, has access to every contract the company has signed since you started digitizing those documents, and can compare the consulting contract you want to sign with every other consulting contract.
Do I care about the person who got the highest grade in the contracts course? Maybe not, because they got that grade by having, presumably, the highest level of technical knowledge, and the AI is more knowledgeable than that person is ever going to be. Maybe now I'm looking for somebody who, when I interview them, doesn't spend most of their time looking at their shoes, or is more articulate, or more engaging, or [who] also speaks another language, or has a different type of major. Let's say they're an English major; I ask them for a writing sample, and it's much better than the one from the student with the higher grade in the contracts course.
What's left to distinguish yourself in being selected for a job changes. And what's required of the entry-level worker is reconfigured.
How would you describe the difference in the type of expertise demanded in those two sets of jobs?
The jobs that get opened up are ones in which those hard-to-get skills require a level of accomplishment that is scarce in any cohort of prospective workers. Since that barrier has gone away, I can choose on other bases, as I was saying with the contract lawyer.
For the jobs that are going to be more constrained, a major hiring criterion right now, and a major share of the work done by entry-level people, is either routine cognitive work or non-routine cognitive work that fits well with what we call 'in-frontier tasks.' An in-frontier task is one that generative AI, with its current capabilities, is really well configured to do.
There's another attribute [of the occupations for which the number of entry-level jobs could decrease]: someone with multiple years of experience in a role earns a lot more than someone in the exact same role with less experience. That means that, assuming the pricing mechanism for talent is rational, the more experienced person is creating more value, which means what they've learned along the way makes them more effective. Jobs with that attribute are about 12% of the economy, and there, the open end of the funnel getting pinched is pretty dangerous for the employer.
I just did an interesting project with a professional services firm, which has different rates of accession to partnership. They have a category of 'super fast movers' that's almost one in 200. It's a very rare thing. The tasks of the entry-level analysts in a firm like that can probably be automated or augmented to the tune of 40% to 45% of the hours. So if I used to hire 200 to get that extraordinary, fast-promotion person, now I'm hiring 110. How do I know my one-out-of-200 fast mover will be among those 110?
I asked them to tell me about the attributes of the fast movers. When they looked at it, the fast movers were often highly unusual hires. They weren't the business major at a highly selective college who had a 4.0, had two internships, and has two parents with postgraduate degrees. They came out of the classics program at an Ivy League university and, upon completing their PhD, applied for this job and happened to get interviewed by an office head who was a classics major at a different university.
The people who were the one in 200 were these outliers. They weren’t what the AI would have picked if you gave it your hiring rules and threw resumes into it. You’d get yet another kid from Wharton.
If companies have fewer entry-level roles, how can they ensure they have the talent pipeline they need to develop the next set of senior leaders?
Good question. For anything that has historically relied on a tutorial model, companies are going to have to understand how they can replicate it. How will they begin to understand what type of knowledge was being learned experientially, and can they replicate those experiences through simulation? Can they find tools to capture the implicit knowledge of the more experienced person, even if only to serve as a dialogue partner for a less experienced person?
What we're going to see is that, over time, some of the most productive applications will be a personal tutor or a personal chief of staff: a bot that knows a lot about your job, company, responsibilities, and what you're being measured on. But it's not waiting for you to ask it questions. It's asking you questions.
‘Hey Jacob, when I was looking at your diary, it looked like you’ve been spending no time on that non-urgent but really important task that you identified with your boss and you’re spending an awful lot of time on email. Are you spending enough time on that? How can I help you get going on that? Would it help if I started doing some research for you? I’ve looked at your calendar and here are a bunch of time periods I recommend you block out so you can spend time on this.’
What’s your elevator pitch for what AI does to the value of expertise?
Current AI increases the premium on expertise, because the expert human being is more likely to craft more precise and more provocative prompts, and much more likely to detect what I prefer to call confabulations, not hallucinations.
I would add that over time, one of the genuinely interesting questions is, as generative AI becomes capable of discerning the implications of the expertise of human beings, will it be able to not only replicate that expertise but go beyond it? Meaning that essentially experts will be training bot replacements for their expertise.
Read our full interview with Joseph Fuller for more on AI and jobs.