This transcript was prepared by a transcription service. This version may not be in its final form and may be updated.
Charlotte Gartenberg: Welcome to Tech News Briefing. It’s Thursday, March 6th. I’m Charlotte Gartenberg for The Wall Street Journal. Artificial intelligence tools are being used by companies across industries. One sector that’s seeing big changes? Coding. WSJ reporter Isabelle Bousquette tells us how generative AI is transforming coding development jobs and why it could be paving the way for leaner teams. Then, Ilya Sutskever, former chief scientist at OpenAI, is one of the most revered AI researchers in the industry. His new startup is already worth $30 billion. But what does his secretive company, Safe Superintelligence, do? Our reporter Berber Jin shares what we know about the startup so far.
But first, AI coding tools can automate large portions of code development. But will this tech replace human workers? For that answer, we’re talking to our reporter Isabelle Bousquette, who covers enterprise tech. All right, Isabelle, I know there’s been some panic over AI taking over jobs, but that’s not quite what’s happening here. AI can’t just write the entire code for you, can it?
Isabelle Bousquette: That’s right. You probably wouldn’t want to leave that job entirely up to AI, at least not yet. But what we’re finding is that these tools are actually doing a pretty good job as assistants, helping coders and developers get a lot more code written at a much faster rate than they could before. These teams are becoming a lot more efficient. We’re seeing companies typically cite double-digit efficiency gains, anywhere from 10 to 20 to 30%. It’s less a question of, “Oh, your entire development team is gone tomorrow,” and more a question of, “Wow, your development team is doing a lot more work than it ever could in the past. And what does that mean?”
Charlotte Gartenberg: How do these AI coding assistants make things faster?
Isabelle Bousquette: Essentially, these AI tools are trained on a lot of code. They typically do a really good job with the standard boilerplate, busy-work type of code you might have to write, code that’s a little more commoditized. A lot of that can just be automated, and you do that by prompting the model: you go in and explain in English what you need it to do. Sometimes it works almost like autocomplete. You can think of the coding assistants that way, too, anticipating what might need to come next and suggesting it.
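To make that workflow a bit more concrete, here is a minimal, purely illustrative sketch of the “describe it in English, get boilerplate back” pattern Isabelle describes. The file name, function name, and prompt comment are hypothetical and not drawn from any particular tool; the function body is the kind of routine code an assistant might suggest as an autocomplete.

```python
# Illustrative sketch only: the comment below plays the role of the developer's
# natural-language prompt, and the function body is the sort of commoditized
# boilerplate a coding assistant might fill in as an autocomplete suggestion.

import csv
from typing import Dict, List

# Prompt (written by the developer): "Read a CSV file of users with columns
# 'name' and 'email' and return a list of dictionaries."
def load_users(path: str) -> List[Dict[str, str]]:
    # Boilerplate an assistant would typically suggest after the prompt comment above.
    users: List[Dict[str, str]] = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        for row in reader:
            users.append({"name": row["name"], "email": row["email"]})
    return users

if __name__ == "__main__":
    # Small end-to-end usage example: write a sample file, then read it back.
    with open("users.csv", "w", newline="", encoding="utf-8") as f:
        f.write("name,email\nAda,ada@example.com\n")
    print(load_users("users.csv"))
```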
Charlotte Gartenberg: So, how widespread is the use of AI coding tools right now?
Isabelle Bousquette: It’s pretty widespread. Most big companies are either using some iteration of these tools or exploring them. A couple of years ago, when ChatGPT propelled the idea of generative AI into the public consciousness and all these companies were scrambling to figure out what they could do with AI, they found that coding was one of the earliest use cases that could deliver clear efficiencies. One of the most popular tools here is GitHub Copilot, which is owned by Microsoft. Microsoft said in its earnings that Copilot has been adopted by more than 77,000 organizations. So, pretty widespread. But there are plenty of other tools out there as well.
Charlotte Gartenberg: So, how is this changing how companies are looking for talent?
Isabelle Bousquette: There are a lot of really interesting dynamics at play here. On the first question, “Are jobs going to disappear?”, companies are really hesitant to say yes. But what they are willing to say is, “We’re doing more with smaller teams.” It’s also important to acknowledge that these coding tools have room to grow. They tend to be better at writing essentially net-new code than at migrating or updating existing code, which is something big legacy companies end up doing a lot of, just maintaining their existing code. The jobs of developers will essentially change. As they spend less and less time sitting and writing code, they’ll be able to spend more time thinking about how to use the AI tools and how to prompt them. There are some really interesting workforce dynamics changing here.
Charlotte Gartenberg: That was WSJ reporter Isabelle Bousquette. Coming up, AI researcher Ilya Sutskever’s new startup is already worth $30 billion, thanks largely to its founder’s reputation. What we know so far about the secretive company, Safe Superintelligence, after the break. Ilya Sutskever is one of the most revered researchers in the AI industry. He co-founded OpenAI in 2015 with Sam Altman and Elon Musk, served as the company’s chief scientist, and helped develop the language model technology that underpinned ChatGPT. But Sutskever left OpenAI last year. His new startup, Safe Superintelligence, is already worth $30 billion, making it one of the most valuable companies in tech. Our reporter Berber Jin covers startups and venture capital, and he’s here now with more on Sutskever and his secretive startup. Before we get into it, we should note that News Corp, owner of The Wall Street Journal, has a content licensing partnership with OpenAI. So, Berber, why did Ilya Sutskever leave OpenAI last year?
Berber Jin: Sutskever was one of the board members who famously fired Sam Altman in November 2023. At the time, he had grown distrustful of Altman, and the two of them were also fighting over how to allocate OpenAI’s scarce computing resources. Sutskever was more of a pure research, technical mind, so he wanted OpenAI’s computing power devoted to creating safe superintelligence, putting everything toward building the most powerful AI possible in the lab. Altman was much more commercially focused. After ChatGPT, he wanted to grow OpenAI’s revenue and release products. So they were clashing over the direction of the company.
And at the same time, there were all of these interpersonal tensions that grew, where Sutskever felt Altman wasn’t being completely truthful in his dealings with the board. And he, very famously, was the one who told Altman to join the Google Meet call where Altman would be fired. That triggered the four-day crisis within the company, at the end of which Altman was reinstated. After that, Sutskever essentially disappeared from the company. It was a very difficult experience for him, because he recanted and said he regretted firing Altman. There was a lot of pressure for him to return to the company, but he was feeling very conflicted. He ultimately decided last May to leave OpenAI and co-found his own startup, Safe Superintelligence.
Charlotte Gartenberg: So, what’s Safe Superintelligence?
Berber Jin: Safe Superintelligence is what Sutskever calls the world’s first straight-shot lab devoted to creating superintelligence, the idea of an AI that surpasses humans at every possible task. He released a manifesto for the lab when he co-founded it, but it was very sparse on details. What he said in that manifesto was that Safe Superintelligence, the startup, would devote all of its resources and energy to creating superintelligence. They wouldn’t release products. They wouldn’t focus on growing revenue. Any attribute of a fast-growing startup, wanting to scale the business, get customers, he essentially said, “No, we’re not going to focus on any of that. We want to build the world’s most powerful AI.” That’s essentially all we know about the startup and what it’s planning to do.
Charlotte Gartenberg: Do we know anything about how he plans for Safe Superintelligence to make money?
Berber Jin: So, what Sutskever has said about Safe Superintelligence is that he’s discovered what he calls a different mountain to climb when it comes to developing and improving AI models. Right now, all the leading labs, including OpenAI, Google and Anthropic, are essentially saying that the way to build more powerful AI is to pour more computing power and more data into training these models. Sutskever has said that thesis is broken, and he’s alluded to having discovered something else that could hold the key to developing AI faster than anyone else. But he’s keeping it very close to the chest. He’s not even telling some of his investors what that approach is.
That’s the big question behind his startup: have they discovered something new that no one else has, for example, a much cheaper way to develop advanced AI? If that’s the case, it could upend the entire pecking order in the AI race. Let’s say they discover something that OpenAI or Google isn’t able to discover. All of a sudden, those companies might be left in the dust. OpenAI has a $300 billion valuation. All of that is at risk if Sutskever has actually caught on to something that no one else has.
Charlotte Gartenberg: Okay. It’s a secretive company. But it has some big backers. Who are they?
Berber Jin: A lot of top Silicon Valley investors have backed the company: Sequoia Capital, Andreessen Horowitz, and Greenoaks Capital, which is a very well-known venture firm in San Francisco. The question is, what are those investors seeing? Are they getting an inside peek at what he’s doing? It’s just too early to tell. They’re essentially betting on the man himself. In Silicon Valley, venture capitalists like to talk about how they bet on a founder. It doesn’t matter if the founder hasn’t developed a product or doesn’t have a path to profits; they say, “We believe in this guy, and we’re going to put money behind him.” Sutskever is the most extreme example of that I’ve seen, having covered Silicon Valley for many years.
Charlotte Gartenberg: That was WSJ tech reporter Berber Jin. And that’s it for Tech News Briefing. Today’s show was produced by Jess Jupiter, with supervising producer Katherine Milsop. I’m Charlotte Gartenberg for The Wall Street Journal. We’ll be back this afternoon with TNB Tech Minute. Thanks for listening.