When will we see mass adoption of gen AI? – McKinsey

Will generative AI live up to its hype? On this episode of At the Edge, tech visionaries Navin Chaddha, managing partner at Mayfield Fund; Kiran Prasad, CEO and cofounder of Big Basin Labs; and Naba Banerjee, McKinsey senior adviser and former director of trust and operations at Airbnb, join guest host and McKinsey Senior Partner Brian Gregg. They talk about the inevitability of an AI-supported world and ways businesses can leverage AI’s astonishing capabilities while managing its risks.

The following transcript has been edited for clarity and length. For more conversations on cutting-edge technology, follow the series on your preferred podcast platform.

AI adoption: Not if, but when

Brian Gregg: Among the three of you, we have the investor, the product manager, and the trust and safety thought leader. I’m going to get us started with a question that’s rumbling around all of Silicon Valley and beyond. A trillion dollars later, is this so-called AI revolution real?

Naba Banerjee: When I first heard about ChatGPT, I was one of the first people to get the app and pay for it. And then I used it like crazy. We were also going to replace all our customer support agents with AI. It was going to change the world in record time. And here we are—it’s not happened yet. I am a little frustrated. It’s taking too long. But I’m hopeful about the future.

Kiran Prasad: AI has a bunch of different parts. When I was at LinkedIn and then at Nextdoor, we used AI to rank a user’s feed. So AI is already having a massive business impact.

The big, new thing is gen AI. It’s only been a year and a half to two years. You’ve got to give it some time before it starts to get adopted. People must learn how to use it, and you have to build apps on top of it.

If you think about the iPhone when it first came out in 2007, it took a year before the App Store was available, and then another three years before Uber arrived. It was almost six to eight years before you really got to the point where you could say, “Oh, the iPhone is great.”

Now the iPhone has made Apple into a trillion-dollar company, where historically 60 to 70 percent of revenues have come from that one device. I feel like we’re on that same path. It’s inevitable. Over the next eight years, AI and AI agents will be the future.

Naba Banerjee: But the first version of iPhone did not have this much hype. Whereas with AI, there was so much hype. And now, it feels like it’s taking too long.

Navin Chaddha: The first thing is that AI has been around for 60, 70 years, before any of us were born. I look at it as an evolution. There’s hype with any new technology, especially for one we’re all interested in. Gen AI is only two years old from the launch of ChatGPT.

For AI-based applications, there was more hype, because ten years back the technology couldn’t do the things that were advertised. Ten years later, a lot of what was talked about is going to become a reality. With semiconductors, which Silicon Valley was built on, Moore’s law, as predicted by Intel cofounder Gordon Moore, says processor speeds double every two years. I expect the impact of gen AI to quadruple every two years.

Per Moore’s law, as predicted by Intel cofounder Gordon Moore, processor speeds double every two years. I expect the impact of gen AI to quadruple every two years.

Navin Chaddha, managing partner at Mayfield Fund

Brian Gregg: What does the investor mind and balanced view say about how fast this is coming?

Navin Chaddha: It depends on the use case. If you look at consumers and prosumers, where there’s no bureaucracy on the buying end, it’s going to happen very quickly. If you look at the adoption of ChatGPT, it reached 100 million monthly active users in just two months.

For Instagram, it took two to three years for adoption. For Facebook [Meta], about five years. And Google, several years. With each innovation, adoption took about half the time. If you’re selling to enterprise IT and there’s a human buyer, that’s where friction will slow things down. It will also depend upon whether you are taking human jobs in the enterprise, or you’re filling jobs that humans don’t want to do or can’t do, or if there’s a shortage of talent.

I’m very bullish on fast adoption among consumers, prosumers, developers, and microbusinesses. In an enterprise, there’s often friction. There’s an IT buyer. There’s a chief legal officer with concerns about data privacy, which means models can’t be trained on company data.

A big issue is that companies often want to host their data in-house. Then when you ask them what their top two problems are, they say, “I don’t have a business case” and “I don’t have talent to implement it.”

What is the adoption rate for companies?

Brian Gregg: Give us a different view from the front line, Kiran, and maybe Naba, too. You’re starting your company right now, Kiran. How fast is this AI adoption happening?

Kiran Prasad: For a start-up like mine, it’s happening now. If you look at all of the tools we’re using, everything is AI. I probably use AI 300 times a day, easily, and not just for coding. We use it to build our logo, our website, our marketing materials, and our customer support site.

Everything is AI first. But I still think, like with any new technology, it’s going to take a long time for adoption. When the Google search engine and AltaVista first came out, there were companies you could call to do searches for you. They would run the search on Google and then give you the answer. That was because people did not know how to use operators like “and,” “or,” and “site:”. It took years before people learned how to write search queries, and the same will be true of prompts.

So I don’t think the tech is far away. Users’ ability to understand how to engage with an agent and use it to accomplish things will just take time. The solutions will be there, but adoption will potentially lag behind.

Brian Gregg: Naba, take us back to 2020, when you were coming into Airbnb in the trust and safety role. How did AI influence what you did and how you did it?

Naba Banerjee: I’ll give the moral of that story before I give the story, which is that the biggest mistake we make is thinking about AI for the sake of AI. What will never go away is what humans do really well, which is articulate the problem clearly.

When I joined as the head of trust and safety at Airbnb, it was a really difficult problem. The world had gone into lockdown. Bars and hotels had shut down. And teenagers were throwing parties in Airbnb rentals. I remember just sitting down and not having a clue as to how to even start.

What saved me was that I had a group of cross-functional experts to go to for advice, including police chiefs, our communications partners, our designers, and our developers. We knew we needed intelligence that could keep up with the trends in the world.

That’s when we built the first AI model, rolled it out in America, and then across the whole world. Today, party incidents are down 55 percent from when we started. So that’s something we should never forget: it’s never AI for the sake of AI, but AI to solve problems.

Part human, part computer?

Brian Gregg: Let’s flash-forward a little bit. Navin, if you were to look ahead by two or three years, and let’s say the adoption curve is what you say it is, what do the institutions of 2028 look like? Are they half machines, half people? What’s your view of that?

Navin Chaddha: We believe every human is going to have a digital companion, and we call them AI teammates. Our strong belief is that these AI teammates and humans will work together so that humans can work at their exponential potential, what I call “human squared.”

What it means is that AI will have to do more than automate tasks and accelerate productivity. Essentially, we have to start thinking about how AI can augment human capabilities. How does AI help me amplify my creativity? Then it’s really a teammate.

It’s not assisted intelligence with a copilot that you instruct, “Go do this task.” How do we get better together, so a future organization can have digital workers alongside human workers? The organization of the future will be hybrid. The CEOs and executives who embrace it will make it to the other side, and those who don’t will end up becoming dinosaurs. This is what happened with the internet and e-business: if you don’t have a mobile app, you know where you end up. So that’s what will happen.

Kiran Prasad: This is the critical distinction: the agent or teammate approach versus the copilot approach. I’m a believer in the agent approach. Think about it like this: if you’re going to write a book, most people would start by opening Google Docs or [Microsoft] Word. You’ll probably get spell-check and grammar check to help you. There’s a little AI that’s kind of helping you write the book.

You can think about the agentic world like having a ghostwriter. If you’re going to tell your story, you go to the ghostwriter. The ghostwriter writes the book. You then provide editorial feedback on whether the book is good and which parts to fix. So it’s this idea that you’re going to have a teammate, somebody who’s going to do the work, and you’re going to give it a bit of direction. That’s the future.

You can think about the agentic world like having a ghostwriter.

Kiran Prasad, CEO and cofounder of Big Basin Labs

Reexamining the business model

Naba Banerjee: I love that collaboration, Navin and Kiran. As an operator, every year we would go to our CFO saying, “I don’t have enough money. I want more resources. I want more engineers to do the work.” They would say, “But, Naba, now you have AI assistants or copilots. Why do you need people anymore?”

But if we go back and say, “AI will help our engineers be even more productive,” they’ll say, “That means I have to pay the engineers and pay for AI? You’re going to make me spend double. How much time will it take for you to be doubly productive?” I want to imagine this beautiful world [of exponential productivity with AI], but when?

Navin Chaddha: We start from the fringes and look at roles that can’t hire enough talent: DevOps engineers, ITOps engineers, security engineers, and chip engineers. The same thing happened with IT outsourcing and manufacturing outsourcing. Don’t take the high end of the knowledge work and replace it. Go to the fringes.

Second, go after things humans are not good at. Sifting through case law, preparing for litigation: can AI boil it down by discarding the 90 percent that is irrelevant? The remaining ten percent is what you give to humans.

Kiran Prasad: The other thing is that if CFOs are not using AI, they can’t understand what it means. I recently tried to raise funding and needed legal advice, so I set up five different AI lawyers with different personalities. I uploaded the contract, and they analyzed it and argued with each other about the pros and cons.

This is what I mean. We’re in those early Google days where people are, like, “That doesn’t even sound real.” I think your CFO just doesn’t understand AI yet.
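For readers curious what the kind of multi-persona setup Kiran describes might look like, here is a minimal sketch in Python, assuming the OpenAI chat-completions client. The persona list, the model name, and the debate loop are illustrative assumptions (three lawyers rather than five, for brevity), not a description of his actual tooling.

```python
# Minimal sketch of a multi-persona contract review, assuming the
# OpenAI Python client (openai>=1.0). Model name and personas are
# illustrative assumptions, not the setup described in the interview.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONAS = {
    "Cautious counsel": "You are a risk-averse contract lawyer. Flag every downside.",
    "Deal advocate": "You are a founder-friendly lawyer. Argue the deal's upside.",
    "Litigator": "You are a litigator. Identify clauses likely to be disputed in court.",
}

def debate(contract_text: str, rounds: int = 2) -> list[str]:
    """Each persona reviews the contract, then rebuts the others' latest arguments."""
    arguments: list[str] = []
    for _ in range(rounds):
        for name, system_prompt in PERSONAS.items():
            # Show each persona the most recent argument from every lawyer.
            prior = "\n\n".join(arguments[-len(PERSONAS):])
            response = client.chat.completions.create(
                model="gpt-4o",  # assumed model name
                messages=[
                    {"role": "system", "content": system_prompt},
                    {
                        "role": "user",
                        "content": f"Contract:\n{contract_text}\n\n"
                                   f"Other lawyers argued:\n{prior or '(none yet)'}\n\n"
                                   "List the pros and cons, and rebut the other lawyers.",
                    },
                ],
            )
            arguments.append(f"{name}: {response.choices[0].message.content}")
    return arguments
```

The design point worth noting is that each persona sees the others’ most recent arguments, which is what turns independent reviews into the back-and-forth debate Kiran describes.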

Navin Chaddha: One way to solve the budget problem is for people building on agentic architectures to charge not by the hour or per seat but for the work the agents do and the outcomes they create.

It’s a complete change in the business model. The same thing happened with perpetual licenses: first it was, “Pay me up front for five years.” Then the next company wanted to be paid monthly. And then cloud computing happened, where you pay as you use, like you do for electricity.

So, get these digital workers. They’re off most of the time. They answer calls. They reconcile AR [accounts receivable]. They file tax returns. Then you pay for them [the digital workers] only for the work they do. The tech is getting there, but the workflow isn’t, because the agents need enough practice to get better.

In an enterprise setting, there’s one more thing I don’t like: the amount of training that is required on closed data. With open data from the internet, it’s much easier to create a scalable service. But enterprises have custom data, which is complicated. That’s why you have to go to the fringes, which requires business model innovation.

Kiran Prasad: The shift maps to Uber. If you wanted a driver before, you had to make enough money to pay one full time. Then Uber made drivers easily accessible. That did not mean everybody got rid of their cars.

The future CEO in the age of AI

Brian Gregg: Naba, in a world where you have half machines and half humans, what does the leadership team of tomorrow look like? How do a CEO and her or his team operate in this hybrid world?

Naba Banerjee: I think it will take away a lot of the fear associated with leadership. People who want to start their own companies, lead companies, or become senior leaders think they have to be this person of exceptional talent, with a very creative vision, who makes the best decisions all the time.

They will be able to use AI to say, “Simulate these five scenarios for me and give me all of my risk-versus-benefit numbers. Help me understand if I’m going to get sued or not.” Exactly like what you are doing, Kiran. They can ask AI to come up with creative ideas and challenge each other.

Everyone cannot be exceptional at everything, but everyone is exceptional in at least one thing. But those other areas that may have held you back, you can now push forward with AI.

But those other areas that may have held you back, you can now push forward with AI.

Naba Banerjee, McKinsey senior adviser and former director of trust and operations at Airbnb

We will probably see many more leaders emerge. On the flip side, it’ll get harder to distinguish yourself, because suddenly AI is an equalizer: everyone has the same resources available. So that’s the conundrum that, though I’m not a fortune teller, I’m very excited to watch play out.

Brian Gregg: Many of today’s CEOs followed a certain track: an MBA or a graduate degree, then a job, usually in a commercial function like marketing or sales, and then working their way up. Kiran, what does the CEO of the future look like? Is it the same pathway with a few tweaks? You’re playing the role right now.

Kiran Prasad: It’s the same pathway, but with more than a few tweaks. Part of what you do as you get into larger and larger leadership roles is you get really effective at understanding strategically where you want to go and then delegating tasks.

In an agentic world, you will be able to choose which tasks to delegate to an employee versus delegating to an agent. But you still need somebody who’s setting the strategy. What will continue to be an even more important skill is communication.

How effectively and concisely can you convey what you’re trying to accomplish to a person versus an agent? As a message permeates through an organization, it typically degrades. In an agentic world, you’re going to be able to maintain fidelity going from agent to agent as they try to accomplish things. The more effective you are, the more precise the game of telephone will be.

You have to be able to predict where the future’s going and guide strategy more effectively. So the whole “I’m going to just A/B test it” baloney is going to matter less.

Brian Gregg: Navin, do you agree with this version of the CEO?

Navin Chaddha: I look at it this way: the CEO will always have to be raising money, because without money, you can’t do anything. Second, they’re in the business of mobilizing resources. This time, it won’t just be human talent; it’ll also be AI teammates. Then you have to make decisions. But smart CEOs, like athletes, surround themselves with coaches. And this time around, I’m going to have a lot of digital coaches who can improve my “serve.” CEOs have a tough time giving feedback, so I’ll have a candor coach. They might be afraid of speaking. The best ones demonstrate vulnerability, yet they have to maintain a persona.

But with a digital teammate, it’s all confidential. My only input to AI-native CEOs is to get somebody with a fresh mind as their chief of staff. Now the question is: Will it be a digital teammate or a human teammate? Maybe it’s a combination.

Kiran Prasad: My view is that it’s a digital teammate. If you look right now at what is the biggest adoption for AI beyond ChatGPT, two other ones are Character.AI and Replika. They are effectively psychiatrists.

Naba Banerjee: AI therapists.

Kiran Prasad: Weirdly enough, people keep saying, “I don’t know if I trust AI.” But the number-one use case that seems to be working is the one where they have to trust the AI, which is insane!

AI as tool or takeover?

Brian Gregg: If we’re talking about 2028, when half the jobs are done by these digital teammates, what is the downside effect on humanity, on the employee base, and on society?

Navin Chaddha: I think humans are smart. I look at AI as yet another horse, yet another tool. Humans will figure out how to ride it the way they did PCs and mobile. We’ll just get better. And when this productivity amplification comes, more revenue, more profitability, and more jobs get created. Essentially, GDP growth happens, the human population can’t keep up with the work, and some of the workers will be AI teammates. So I’m very bullish.

Every time a tech wave happens, humans win. Tech is the great equalizer. When offshoring happened, people thought India would take away all US jobs, but the US got richer and richer with globalization.

Naba Banerjee: I feel like I have to balance that view. The trust and safety world exposed me to a part of humanity that at times I wish I hadn’t seen. I know different marketplaces and dating sites are trying to create an environment where humans can meet each other.

After COVID, so many people are meeting for the first time digitally, and that is always scary for humans. Stranger danger is still considered one of the top fears that prospective hosts on Airbnb have. About 60 percent of prospective guests say they’re scared of being scammed.

The fear is largely unfounded; very few incidents actually happen. But it is a fear that humans fundamentally have. And with AI, it’s now so easy to create synthetic humans: re-created voices, digital twins, and fake IDs.

We are seeing that the ways we have typically kept communities safe, the trust and safety and risk teams and the defenses they built, are failing. We are not ready for the world that is coming. There’s also a lot of bias in the data being used to train these [synthetically generated] humans.

So, yes, it feels like red tape when the privacy team and the antidiscrimination team say, “You cannot launch this model. We have to watch the data.” I used to push back against these teams, but I’ve realized these harms are real.

For example, if you search for “makeup,” the algorithm shows only makeup for White women. Where is the diversity? These are not necessarily gen AI problems; we’ve had them in society. We should be bullish about AI, and we should be solving for this. We need to go in with eyes wide open, knowing that the same AI is in the hands of both good and bad actors. We have to constantly think about the two sides of the coin.