Sendhil Mullainathan: Even if OpenAI behaved perfectly, that’s not going to stop anybody else from developing. I think that the distortion OpenAI has had in this conversation is it’s made everyone think this is a monopolistic or oligopolistic market. It is not at all. It’s a free-for-all.
Bethany: I’m Bethany McLean.
Phil Donahue: Did you ever have a moment of doubt about capitalism and whether greed’s a good idea?
Luigi: And I’m Luigi Zingales.
Bernie Sanders: We have socialism for the very rich, rugged individualism for the poor.
Bethany: And this is Capitalisn’t, a podcast about what is working in capitalism.
Milton Friedman: First of all, tell me, is there some society you know that doesn’t run on greed?
Luigi: And, most importantly, what isn’t.
Warren Buffett: We ought to do better by the people that get left behind. I don’t think we should kill the capitalist system in the process.
Bethany: Luigi and I recorded a podcast with his colleague Sendhil Mullainathan after the Sam Altman brouhaha. Do you guys remember the Sam Altman brouhaha?
Speaker 8: The tech world has been thrown into a particular state of chaos and shock, following the sudden job change imposed on the former ChatGPT CEO, Sam Altman.
Speaker 9: Yeah, that’s right. The OpenAI board fired Altman.
Bethany: It seems like a really, really, really long time ago, not just in the world at large, but especially in the world of AI. Now we have DeepSeek upending everything.
Speaker 10: A new, China-based artificial-intelligence startup is shaking up an industry known for its rapid innovation. It’s called DeepSeek, and its biggest advantage, analysts say, is that it can operate at a lower cost than American AI models like ChatGPT.
Bethany: Maybe. Plus, tech titans like Marc Andreessen saying the tech establishment got behind Trump because of the Biden administration’s regulatory approach to AI.
Marc Andreessen: They said, look, AI is a technology that the government is going to completely control. This is not going to be a startup thing. They actually said flat out to us: “Don’t do AI startups. Don’t fund AI startups. It’s not something that we’re going to allow to happen. They’re not going to be allowed to exist. There’s no point.”
They basically said AI is going to be a game of two or three big companies working closely with the government, and we’re going to basically wrap them in a—I’m paraphrasing— government cocoon. We’re going to protect them from competition. We’re going to control them, and we’re going to dictate what they do.
Bethany: Luigi, has your thinking on AI changed since we last recorded that podcast?
Luigi: Oh, absolutely. That’s one of the reasons why I want to reissue the interview, because I think Sendhil was very insightful, and I have to say, I was wrong. I think he was right, and I was wrong.
Now, I want to extract some of the insights that he had at the time and ask what we can do now.
Bethany: That’s funny, because I listened to it again, too, and I thought, oh, Luigi was wrong. How lovely is this? Luigi actually was wrong.
Anyway, I’d love to hear what you thought you were wrong about, but I thought you were wrong when you were arguing against Sendhil’s worldview that it was a free-for-all, and you thought the market was already far more captured and at risk of a monopoly or an oligopoly.
Particularly with DeepSeek upending everything—maybe, as I said, because I don’t quite understand that, and it’s probably worth discussing. But it does seem that Sendhil was right. Is that the part you were zeroing in on, or do you think you were wrong about something else as well?
Luigi: I might have been wrong about something else as well, but yes, I was zeroing in on that part. There was an important technical aspect of trying to understand: is this a scenario where you're going to have a lot of barriers to entry? Ex post, after the success of DeepSeek, his view proved to be absolutely right.
Bethany: We now toss around this term, artificial intelligence, and we all presume that we know what we’re talking about. But do we? Would you define AI, artificial intelligence, differently?
Sendhil Mullainathan: The way I think about it is, it’s pretty clear that we now have many algorithms—not just the ones like LLMs, like ChatGPT—that do things that are what we would think of, in some vague use of the word, as intelligent.
In 2000 B.C., scribes who did a little mathematical calculation were among the most valuable forms of intelligence. In some sense, Excel is the embodiment of a form of intelligence.
Think of calculating the accounts of GE. Without Excel, we could not have GE and keep its accounts. Do you know how many scribes would be needed just to maintain the accounts of GE? Just for one company, no one human could do in a lifetime what Excel applied to GE spreadsheets does in five minutes. That simple piece of code is doing things that would be unimaginable for humans, and that unlocks things that were unimaginable before: now that we can do accounting at scale, we can have corporations at a scale we never imagined.
The big mistake everybody is making in governance is that they are acting like this is every other form of regulation we’ve encountered. Like, oh, OK, we’re going to regulate—I know it’s not a good example—a certain kind of derivative. But this is a technology whose shape we don’t really understand and whose evolution we’re unsure of.
To me, this is an extremely unusual moment, and if you guys know some historical analog, I’d love to hear it. But we’re not regulating the known; we’re regulating both what is unknown today and the evolution of it.
Luigi: Since you challenged me, or us, let me try. Let me try—
Sendhil Mullainathan: Yeah, I’d love it.
Luigi: —since we both sit on the campus of the University of Chicago, pretty close to the place where the first controlled nuclear reaction took place 81 years ago. First of all, it's interesting, the level of risk that Enrico Fermi took to do that on campus, in the second-largest US city at the time. Neither the city of Chicago nor the US government, which, by the way, was financing the initiative, had any idea what he was doing.
Sendhil Mullainathan: Amazing.
Luigi: And any idea of the risk that he was taking.
Sendhil Mullainathan: Wow.
Luigi: My view is that there are only two major differences. Number one, this stuff was mainly financed by the government. The government had huge power, was behind it, and financed it. And two, paradoxically, because we were in a period of war, there was a greater sense of responsibility to the nation.
Sendhil Mullainathan: Oh, I love this. I just looked this up. It’s amazing. It says here that the experiment took place in a squash court beneath the university football stadium. That is amazing. The first controlled nuclear reaction, some guy above is playing squash. That doesn’t really give you a wonderful, calm feeling, does it?
Luigi: No.
Sendhil Mullainathan: Now we’re worried about free speech on campus. Those people were having nuclear explosions. What the hell? This is just, like, crazy s—t.
Luigi: But actually, how do we know that they’re not doing something worse at OpenAI?
Sendhil Mullainathan: I think the nuclear reaction thing is a perfect example. It’s an unbelievably powerful technology, and I think it’s a good starting point.
Let me articulate two things that I think are very different, which actually make it an even harder problem. One, enriching uranium or plutonium to be able to get a sustained reaction costs a lot of money. Building a large language model, on the other hand, does not, and it's getting easier and easier.
While it’s to the benefit of companies to tout that they’re light years ahead of everybody else, the open-source models are getting really good. Anyone can put them up. And it’s not obvious that in three years we’ll be like, oh, it’s true, the first generation looked like that, but we learned a lot. They paid the fixed costs. Now, anybody in a lab with a modest amount of money can start to build things that are their own, and maybe it’s not even large language models—whatever the next generation brings.
In that sense, if we use the nuclear-reaction analogy, imagine what the nuclear innovations would look like if it only cost $10,000 to acquire uranium and enrich it. That would be a crazy world, but that’s not that far from where we are.
It’s like $100,000, $200,000, a million to train? Two million? Five million? Ten million? Ten million is not a lot of money. Fifty million? Fifty million is not a lot of money. It’s not such a large barrier, and these numbers are going down.
That’s one big difference, which, like you say, really changes the nature of it. If we think we’re trying to regulate OpenAI, that’s a mistake because the problem runs much deeper than that.
The second thing, which is noticeably different, is that as consequential as a nuclear bomb or nuclear power would be, it’s contained where you can expect to see it. A nuclear bomb is an object you can drop on some location and have tragic consequences.
These technologies have incredibly wide application. Think of what censorship now can look like. Before, you used to have censors who would have to read stuff. Now, can you build algorithms that read everything anyone says and automatically censor? What about, if you’re a dictatorial regime, automatically finding people that you should send your police after?
I don’t want to understate the complexity of the problem. I think what’s happening right now on all the alignment stuff is people understating how hard the problem is and thereby settling for what appear to be Band-Aids. We’ve done something for the sake of saying we’ve done something.
Let me throw out some proposals, and I’m curious what you all think. One proposal that I’ve been fond of is, don’t regulate the algorithm at all, but regulate the user of the algorithm so that the liability sits entirely with them. I don’t care how you did it. You used an algorithm; you posted hate speech . . . It doesn’t matter to me how you did it. You are responsible for it.
That would change a lot of things. For example, a lot of people are thinking about adopting medical chatbots. We know ChatGPT has a lot of hallucinations and things like that. What are the incentives of these people adopting medical chatbots to get rid of these hallucinations? Right now, it’s in this weird gray area. Is anyone responsible if this thing gives bad medical advice?
If we say the person responsible for it is the person under whose banner this advice is being given, boy, would we see the health system become a lot more skittish, as they should be. They shouldn’t hide just because it’s an algorithm giving the advice. If one of your nurses did this, you’d have a medical-malpractice lawsuit. Nothing has changed. We don’t care that it’s an algorithm. That’s not a bad default, I think, to start.
If we start from that default, we would then ask the question, why should I give a safe-harbor clause to anybody to be able to say, “Hey, I didn’t do it, my algorithm did it”? That’s a principled question I could then start answering. What are the circumstances where I would want to give that safe-harbor clause? But the default would be no one has it. We’d have to actively give a safe-harbor clause, and we would give safe-harbor clauses to promote innovation.
For example, we’d say: “Look, we’ve decided there is some value for people who don’t have access to medical care being able to get access to algorithms that read their X-rays. We’re going to give a safe-harbor clause in those situations to expand care, but under these circumstances, so we can see whether this is actually causing more harm than good.” Fine.
Luigi: I agree with you that, to some extent, regulation arrives too late. You need to intervene from a governance point of view.
But I think I’m very humble after the experience we just went through from the turmoil at the top of OpenAI. Ironically, OpenAI initially was chartered with the best idea. “Our primary fiduciary duty is to humanity.” That’s what the OpenAI charter says. They were governed—they’re still governed, as far as I know—at the top by a board whose fiduciary responsibility is to humanity. But then they seem to behave in a very different way. So, how do we get out of it?
Sendhil Mullainathan: I’ll go back to the wideness of it. Even if OpenAI behaved perfectly, that’s not going to stop anybody else from developing. I think that the distortion OpenAI has had in this conversation is it’s made everyone think this is a monopolistic or oligopolistic market. It is not at all. It’s a free-for-all.
It’s in the interest of the people at the top to convey the idea that they are the ones that control everything. But it’s very unlikely that that level of innovativeness is not going to be much more widespread.
I would even double down on your governance point. Even if we could govern how OpenAI does this, Google does this and—whatever, take the top, Meta, and how they all do this—there are going to be places in China that can do it. There are going to be places in Iran that can download and start running their stuff. They have great technical people. There is a Pandora’s-box problem here.
Luigi: I buy completely the fact that OpenAI is not a monopolist, but I don’t believe this is a perfectly competitive market. If it was a perfectly competitive market, first of all, OpenAI would not need the $13 billion from Microsoft to develop. It could have happily remained a not-for-profit without raising that amount of money.
Again, if this was a perfectly competitive market, when Sam Altman walked out or was forced out, and most of the employees were walking out, OpenAI would say, “No problem, we’ll hire some other people.”
I think we live in a world that certainly is not a competitive world or a monopoly world, but an oligopoly of a few people, and these people end up having a disproportionate amount of influence on the future of humankind.
I think that the option of regulation works very well for this liability issue, but it doesn’t work very well to direct the future of this. This is where I think we need governance, but I don’t know exactly what governance we need because as you said, it’s not just the governance of one individual. It’s a broader governance.
Paradoxically, I was talking with my colleague Oliver Hart, and he told me how different it would have been if this situation with OpenAI had happened not in California but in the state of Washington, because we know that the state of California does not enforce noncompete agreements.
I don’t think that Sam Altman could have walked away and worked for somebody else if he was working at Microsoft in the state of Washington. But in California, he could do that, and so could the 600 people who threatened to leave with him. Paradoxically, this freedom creates your problem on steroids.
Sendhil Mullainathan: What’s great about your idea is, OK, let’s create a fork. We’ll do A and then B. A is, let’s assume this is an oligopoly. I like the way you’ve put it: if it’s an oligopoly, at least we have a good corporate-governance framework to think about. Do we have some board members who are tasked and appointed for the public interest?
We’ve had proposals like this, and I love the way you’re putting it because now this feels like a manageable problem. What type of in-the-weeds regulations could we have? Can an employee leave from here and actually have a noncompete? What kinds of monopsony are we allowed to have?
I think the other fork in the road is, let me try my best just to say, you guys should walk away from this at least keeping the possible hypothesis that we don’t have an oligopoly.
Here’s one way to think about it. The best argument for the oligopoly is that all the training data that OpenAI has, other people won’t have. That’s the best argument, in my mind.
Outside of that, taking the billions that are being spent on compute, the irony is that our ability to do what OpenAI was doing just a year ago, the compute cost of that, has gone way down because that’s the innovation that’s happening. We’re learning how to do this stuff at lower and lower cost.
It’s why the open-source models are actually extremely good. I’d encourage you to try them. Are they ChatGPT? No, but for a funny reason. It’s not obvious to me that they’re not much closer than they seem.
OpenAI has done a bunch of stuff where, instead of just using their language model, they do a lot of stuff inside to make it look good to the average user. The open-source community is just building the language model, the workhorse, and not doing this other stuff. I don’t know what the real gap looks like if you get rid of the fringe stuff.
The other thing is, I’m a little cynical. It is in the interest of these few companies to portray the image that they are the only ones that matter. It’s in their strong financial interest, and so, I am skeptical of that.
Bethany: Speaking of money and financial incentives, I have a really basic question. I think it’s always tempting to think for-profit, bad; not-for-profit, good. If we make these companies not-for-profit or get not-for-profit representatives on the boards, we fix things, and they’re a countervailing force. But is it really so clear when it comes to AI, given the myriad of motivations that people can have? Is it so clear that the profit motive is always bad, and is it so clear that a not-for-profit agenda fixes anything?
Sendhil Mullainathan: I think when we align it as profit, not-for-profit, we hide the true differences of opinion that exist independent of the profit motive as to what socially good means.
In the alignment literature, a lot of the alignment that people are doing, it’s not obvious there’s broad agreement. We’re actually giving power in the nonprofit structure to the people who are deciding, for example . . . Here’s one I agree with. It feels like having these things use racist epithets is not a good thing, but there are many people out there who would say, in a joke, “Why is that a problem?”
But as you go down the types of things we align on, it gets more and more divergent. An example could be, if you look at the latest versions of ChatGPT, it’ll just refuse to do certain things in the interest of alignment. You’re like, really? We’re not supposed to do that? That’s just weird.
This speaks to your point that calling it profit versus not-for-profit hides the fact that we don't have collective consensus on what alignment would look like, and that is a deep problem. At least we have collective consensus on what profit looks like, for better or worse.
Luigi: I think you are exaggerating a bit. If you go into social issues, of course, we quickly disagree, but when you think about hurting humankind in a major way, I hope that the disagreement is much less. We should learn from the mistakes we made in the past.
For example, Facebook experimented with a lot of behavioral-economic stuff to maximize engagement. At least from what I read, they paid zero attention to the consequences, the harm that it would create, for example, to young women or to minorities that are persecuted in various countries.
I don’t think that protecting the mental health of young girls or protecting the minorities in Myanmar is something where we vastly disagree. If you’re only focused on profit, you don’t give a damn because at the end of the day, they made a lot of money, and they don’t care.
We should not go into the details of being super politically correct, because that’s where we get bogged down. But on some fundamental principles, we should intervene.
Sendhil Mullainathan: I think it gets more complicated far more quickly than you’re making out, Luigi. Take the mental health of adolescents. Should Facebook prevent the posting of photos where people are in bikinis and look very thin? What if I showed you there’s lots of good evidence that that starts to create eating disorders?
This is why content moderation has proven impossible. There is no platform that’s doing content moderation that many people are happy with. Everybody’s bothered.
Luigi: Let me interrupt you there. You are absolutely right that the implementation is very complicated, but I don't want this to be an excuse to do nothing. In particular, in the case of Facebook, it's not that they made a mistake on the right trade-offs. They didn't even consider certain things. From the whistleblower testimony, we know evidence was brought to their attention that this was problematic, and they paid zero attention. Now, you're saying we won't get the first-order condition exactly right, but if you put zero weight on it, it's zero weight.
Sendhil Mullainathan: I’m definitely not saying we shouldn’t do anything. I think I’m asking the question, what should the nature of what we do be that reflects, for example, the heterogeneity of opinions?
Here’s an example. I’m not even saying this is a good idea, but if we just took the idea that we’re going to have board members that represent the public interest, the fact that public interest is varied actually raises a deep and interesting question of, how should those people be chosen?
It’s not unreasonable to say we’re going to have some form of public participation in choosing those board members. Maybe the right thing is to actually have an equilibrium where some companies have some opinions reflected and other companies have different opinions reflected. This is not an argument for doing nothing. It’s more an argument for saying, given that we live in a pluralistic society, what does it mean to regulate in the collective interest?
I don’t think we want to end up in a situation where we do nothing. I also don’t think we want to end up in a situation where we only do that thing we all can agree on, because that seems too little. All of us will be unhappy. We clearly need some innovation in governance. How are we going to get experimentation in governance going? That’s what we need at this point. We need some experiments in governance.
Luigi: We had on our podcast earlier this year Hélène Landemore, a political scientist at Yale, and she’s a big supporter of the idea of citizen assemblies. Basically, it’s a randomly drawn group of the population that deliberates on these issues. What do you think about this idea?
Sendhil Mullainathan: I love this. These technologies also enable certain kinds of governance we never had before. Why do we need to have a board member that we picked from the elite?
Citizen governance is awesome. We could have people holding votes, a certain subset of people, on specific design choices, on specific alignment questions. There’s just so much that you could imagine doing, and I think that it’s going to require some amount of just trying and seeing what happens.
If there was a way we could encourage this, either in the AI space or not even in the AI space, let’s just pick a traditional sector . . . What would it mean to have citizen governance in, I don’t know, utilities? That’s been a persistent question. Utilities are supposed to be in the public interest. What would it look like to have some governance innovation in the utility sector?
My view is, these technologies are going to make us all better off, for sure. The question is, how do we make sure that happens, because there is risk associated with them? For me, governance, regulation, it’s all just a way to get us to what I think is a really good state that we couldn’t imagine before.
Luigi: Thank you very much, this was lovely.
Sendhil Mullainathan: This was so fun, thank you.
Bethany: Yeah, this was so much fun. Thank you for the time.
Sendhil Mullainathan: I really enjoyed this.
Bethany: I did, too.
Luigi: The question of regulation that we asked Sendhil is even more interesting now, after realizing that it’s very difficult to have regulation because competition will be so intense, and entry will be so diffuse.
When I think about regulation, I think about three potential objectives. One is the issue of how to manage liability as a result of AI. What Sendhil said in his interview is perfect. We should shift the liability to the people using AI, not to the people generating AI. Maybe we should have some exemptions, but when I was thinking about the exemptions, I was thinking that the safe-harbor clause we introduced at the beginning of the internet phase in 1996 is still haunting us. It’s very dangerous to have those exemptions.
Bethany: Can I interrupt you really quickly? When you talk about the exemption that was granted, you mean Section 230, right?
Luigi: Absolutely. It was done apparently with the best intentions. We should learn from past mistakes, and we should try to avoid them.
Bethany: Agreed. OK, your second objective.
Luigi: The second objective is the issue of the timing of displacement. Every technology tends to substitute for labor in some dimension and to create some disruption. One interesting factoid that I learned is that people now characterize AI as a double exponential. An exponential of an exponential is much, much faster than anything we have seen before.
Is it OK to let it rip? I understand that you’re always afraid that if you start putting on limits, you might delay some beneficial innovation. But this idea of purely letting it rip is pretty dangerous, in my view. I feel that the conversion of Silicon Valley to Trump is precisely for that reason.
The speed of innovation is breathtaking, which is great from one point of view, but there will be a lot of losers along the way. What you do is a really, really important question.
The third one, which is even deeper and probably the most problematic, is this issue of alignment. The risk that, like in 2001: A Space Odyssey, HAL takes over the mission is pretty serious. Honestly, even after re-listening to the conversation with Sendhil, I'm a little bit at a loss on how to do it.
Bethany: Let’s talk about those in order, starting with your second point about the timing of displacement. We’re now watching the political upheaval that resulted in part from not managing globalization. Maybe we should slow AI down to give people a chance to adapt. Maybe you can’t stop it, but you can slow it.
You can make the adjustment to displacement easier, given the risks not only to people and their jobs but, as we see now, to entire communities and children. It’s not something people tend to recover from. It’s a really important point.
Sendhil was very optimistic. You, too, are very optimistic, but I’ve read some interesting things recently about how the numbers aren’t showing up yet. Despite all the hundreds of billions that are being spent, the numbers aren’t showing up in any statistics yet, or at least in any productivity statistics.
I know you can argue that with innovations from electricity to computers, it took a while for the numbers to show up in any official statistics. Maybe it’s just filtering through the economy very slowly, but maybe its uses are a lot more limited than people think, and the idea that so many tasks are going to be replaced by AI just isn’t true. I’m not sure anybody has the answer to that yet.
Luigi: Since I’ve been wrong in the past, I’m hesitant to make bold predictions here, but my limited understanding is that the innovation is for real.
As you said, there’s a very famous quote by Robert Solow from the late ’80s: you can see the effects of computers everywhere except in the productivity statistics. The timing was characteristic of economics: as soon as someone points out an irregularity, that irregularity changes.
The moment he said it, the ’90s brought a boost in productivity that was, by and large, the result of computer innovation. It takes time for organizations to adapt and change.
I’m very optimistic long term, but in the same way in which I retrospectively think the Industrial Revolution was a great thing. Now, if you ask the weavers in England in 1810, it wasn’t that great for them.
How do you manage this trade-off? One way to manage it is to move ahead as fast as possible and use guns to deal with any problems. That’s exactly what England did with the Luddites. They were killed.
Now, yes, that sped up the introduction of mechanization in the textile industry by a few years. Was it worth it? If you asked me whether I want to block industrialization, I would say no. But if you slow it down, maybe that would not be such a terrible thing.
By the way, if you look at other countries, France was not that far behind England. But I think that they managed the process a little better. We didn’t have massacres of Luddites along the way in France. It’s not inevitable that in every technological revolution, you have to massacre the losers.
Bethany: I certainly hope that’s not the case, at least in the modern world, although you can argue that some of what we did to communities in globalization amounted to a massacre.
I do think that, broadly speaking, this is part of the problem with figuring out what to do about displacement. We don’t even know what the timing of the displacement is going to be or what it’s going to look like.
In other words, if you look at these remarkable innovations that ended up changing the world, from electricity to computerization, it wasn’t clear immediately what they were going to do and who they were going to hurt and who they were going to benefit. Part of the problem with managing this transition is that we just don’t know yet.
Moving on to your third point about safety, I’ve also thought that’s really challenging, because as you can see in the—fight’s maybe too strong a word—disagreements between Anthropic and OpenAI, the resignations of people from OpenAI, even people in the field don’t agree on what makes this technology safe and what makes it unsafe. How do you decide what safety is when even those who are most enmeshed in the details don’t know what safety is?
Luigi: If you are becoming an expert in a field, you’re bound to be optimistic about this field. You don’t want to massively invest in understanding that field just to stop what is going on. Even without adding any financial incentives, but out of pure selection, all the specialists in AI will be overly optimistic about the impact of AI by design.
It’s very difficult to think about how to follow what’s going on in an objective way, let alone regulate it. Say you would like to have some government experts overseeing it. To what extent?
I’m sure you have seen the movie Oppenheimer. I don’t remember who was the head of the military who was in charge of the entire operation. He had limited understanding of what the atomic bomb was and the consequences, so much so that they were all watching at a distance, but not that distant. Most of the people in that experiment died of cancer shortly afterward. I don’t know today if we have the expertise in the government to oversee this without being completely captured by the techno-optimists.
Bethany: It does come back to Trump and the oligarchy. I worry a lot that we’re no longer capable of coming to any kind of smart regulatory strategy, because so much of our apparatus has been captured by lobbyists who speak in the name of industry.
I worry that if that was true under the Biden Administration, it’s going to be true times 100 under the Trump Administration. Certainly, obviously, people in technology like Marc Andreessen are betting on that.
I worried when I read his interview because he was so harsh about the Biden administration. He basically said that they wanted to kill AI, that we were up against what looked like the absolutely terrifying prospect of a second term in which the government would control it all and make sure that AI would be a function of these two or three large companies.
Was it really that bad? Is that actually what the Biden administration’s people said, or is that Andreessen’s take on it or spin on it now, to scare us all about regulation?
Luigi: That’s a very good question. I think that both parties are sufficiently heterogeneous that you can easily pick and choose your voice if you want to make a case. Immediately after the interview with Marc Andreessen, Ross Douthat interviewed Steve Bannon. Steve Bannon thinks the worst of Marc Andreessen and thinks that Silicon Valley is all evil. He sounded like a Democrat in his view of Marc Andreessen. There are voices on both sides.
What I’m getting more and more convinced is that the techno-brothers or bro-tech, whatever it is—the bro-oligarchy? Broligarchy? The broligarchy. The broligarchs are going to push for some form of safe-harbor clause for AI.
If you apply what Sendhil originally suggested, which is very strict liability for every user of AI, people would be very reluctant because they don’t understand it. Between the devil you know and the devil you don’t know, do you really want to try the devil you don’t know?
At this point, self-driving cars are probably more reliable than most drivers. I’m not an expert, so I’m just positing this. But if you are the head of a trucking company, do you really want to decide tomorrow to substitute all your drivers with self-driving cars? Probably not, especially if you would pay a liability. Now, if they exempt you from liability, then you would go full force.
Bethany: Part of the challenge in this will be determining the difference between what a very powerful tech titan may say the technology does and what it actually does.
To take your self-driving car analogy, if you had listened to what Elon Musk said about his self-driving cars, you might be dead, whereas if you had gone with others in the industry like Waymo, maybe the self-driving cars are safer than actual drivers.
It makes me all the more concerned about the broligarchy. There is this gap between what people say is true and what is actually true.
In theory, I like your idea of liability, but how would you measure it? In other words, if the damage that AI is going to do is allowing a foreign government to infiltrate the US because AI has been unleashed, how do you go after that foreign government or these foreign stateless actors? Or if the damage that AI does is it wipes out huge numbers of jobs throughout communities, who pays for the damages? To me, the most likely and most scary suite of damages from AI aren’t ones where there’s a target for the liability.
Luigi: Actually, I disagree. Let’s say Facebook applies AI, and that allows a foreign government to interfere. You go after Facebook; you don’t go after AI. In the same way, if I’m a doctor and I decide that, all of a sudden, my X-rays are read by a machine rather than a person, and I miss some significant cases, I’m liable.
I would organize an agency that is good at testing this, to make it easier to prove a case. The complexity of this stuff is such that the only ones who control this technology are the broligarchs, so you have to trust their words.
On the other hand, if you have objective testing . . . One of the great innovations that was in part the result of our friend, Ralph Nader, is that there are crash tests for cars. The crash test for cars is not just the car company doing the crash test. It is an agency doing the crash test and exposing that, sometimes, for example, bigger cars are not necessarily safer. Enormous improvements have been made in the safety of cars as a result of those crash tests. If I were in charge, I would try to build more agencies able to do the tests.
Bethany: I hear you on the Facebook part of it, but I think that’s just going to be a sliver of the potential damages done by AI.
As much as I like your idea about building agencies that can test for everything AI might do, the way Ralph Nader did for automobiles with crash tests, Sendhil’s point is that AI isn’t going to work that way, because the number of applications is just so immense.
What do you do? You’d have to build 10,000 different bureaus to have the expertise to figure out whether somebody got damaged or whether they didn’t. It would be full employment for lawyers, for sure, but I’m not sure that your idea is feasible, if AI is as manifold, manifest, all-encompassing as we’re being told it is.
Luigi: I’m not so sure that it is full employment for lawyers. Full employment for computer people and testing people. But we have a lot of these agencies already. It’s just bringing up their capabilities.
Bethany: In a world where Elon Musk is gutting all of our agencies, does this become enough?
Luigi: No, that’s exactly the point. Now you see why he’s gutting all the agencies, because he wants AI to rip without any consequences.
I’m with you. This system is not going to prevent the elimination of jobs, but it might slow it down a bit, and that’s the best we can hope for.
You put your finger on exactly the right place. We have some capabilities. We need to increase those capabilities, because whenever you introduce a new device in medicine, there is an approval procedure. Whenever you introduce a new car, whenever you introduce a new product, there are a bunch of product-safety bureaus. The point is that they’re not equipped, and so we need to speed up their technology and their competency, very fast.
Bethany: Even if you set up these bureaus, these thousands of bureaus that could evaluate these claims, then they would be duked out by lawyers on each side. To make that whole thing work smoothly, you might need a full-scale revision of our court system as well, because otherwise, the court system would get run over by these cases.
To go back to something Sendhil said, what does it mean to regulate in the collective interest? We don’t want to do nothing, but we also don’t want to do the only thing we can all agree on. We need pluralism in this, and we clearly need innovation in governance, some experimentation in governance.
I think that’s exactly right. But can anything in the US adapt in real time or at speed? And can anything in the US adapt in real time and at speed when all the forces are pushing against its existing at all? I am worried about the answers to those questions.
Luigi: The French came up with their large language model, Mistral, which seems to be very good. What is interesting is that even the Europeans caught up. I think that’s an indication that the cost of copying goes down over time.
My understanding is that the way Mistral succeeded, in spite of the rigid regulations in Europe about not using data for privacy reasons, is that they used OpenAI to train their large language model. That dramatically cut the cost of training new models and makes AI a much more competitive sector.
Bethany: I don’t know that that’s true, because what I don’t understand is, then, are the advances of this new French model and DeepSeek conditional on OpenAI’s advances? In other words, is it truly a competitive market, or is it simply fast following, where any real innovation has to be done by OpenAI because other people are just building off what they’ve done?
Can they build off it and leapfrog it? Or can they only build off it and be as good, but not better?
Luigi: Even if you take the second interpretation, this will have a lot of implications, for example, for regulation. You are making it much easier for other people to enter. You regulate American AI, and all of a sudden, Europe does something, or the next thing you know, not only China, but maybe some people in India will do something. That makes the point that Sendhil made a long time ago very salient.
Bethany: Yeah, it’s not only huge for regulation, but it’s also huge for consumer choice or for business choice. If another model is much cheaper than OpenAI and doesn’t quite have the functionality, isn’t quite as up to date as OpenAI is, but it’s just one step behind, then that changes the entire pricing dynamic of the whole industry, right?
Luigi: Absolutely. Not only the pricing dynamic, but as you said, consumer choice. Think about Google. It’s not like Bing was able to copy Google in search engines and have something close.
Bethany: The only thing that I think is a contrary point to this is that when all the big firms announced their capex budgets for AI, it was double what the market was expecting. Instead of spending less on capex, because they’re cheaper models, now they’re spending more.
I don’t think I understand why that is. In other words, if now you don’t have to spend as much because DeepSeek showed you can do this so much less expensively, you would have expected the big tech firms’ AI budgets to decline.
Luigi: Don’t you think that because of competition, they will invest more? This is a tension that goes back to Schumpeter. Is competition only driven by the expectation of future rents? Or is it driven by the fear of losing existing rents and the need to stay ahead of your competitors? As the competitors come closer and closer, everybody’s trying to outdo the others, and, as a result, you have more investment.
Bethany: I’m not sure it applies. Here’s why. What I don’t know is whether the investment is in building their own AI capabilities or in using existing AI technology to drive productivity. If it’s the former, then yes, it would apply. But if it’s investing in order to be able to use AI capabilities, then you would think a cheaper large language model would mean the amount of investment required would go down. Is it competing to develop large language models, or is it competing to deploy those models throughout somebody’s business? Those are two different things to me.
Luigi: Of course, they are two very different things. I’m not an expert, but I think that when you look at the big companies, from OpenAI to Microsoft to Mistral to Anthropic and DeepSeek, they are competing to build better and better large language models. They are competing in that space. They see an advantage to being ahead. As the others are approaching, they spend even more to be ahead.
Bethany: Yeah, but I think the capex number encompasses or is mostly the companies that are employing AI. Again, if the cost had just gone down dramatically, you would expect their spending to have gone down dramatically. There are these data points that just don’t quite line up enough for me to be clear about anything.
Should we talk about Elon Musk? He got more interesting, too.
Luigi: Absolutely. What do you think about his bid?
Speaker 3: All right, right now, a rivalry between big tech billionaires is playing out as we speak. We’re talking about Elon Musk and OpenAI CEO Sam Altman. Altman is firing back after a Musk-led group submitted an unsolicited $97.4 billion bid to take over OpenAI.
Bethany: I don’t know. I guess my first reaction is a slightly strange one in that I’m relieved that the tech moguls hate each other, and that Sam Altman was so dismissive of Musk’s bid and clearly so angry about it. I like that a lot more than a world where they’re all buddy-buddy. You can’t really have a broligarchy if they’re all at each other’s throats, right?
Luigi: You’re absolutely right. I think it’s a pretty sad state of affairs where we have to hope for the oligarchs to fight each other to have a little bit of freedom. I was expecting a little bit more from the United States when I moved here.
Bethany: That’s a fair point. Maybe my expectations have been so diminished by the last bunch of years that I’m just relieved to see squabbling.
Luigi: There is a very clever point that I don’t emphasize often enough. Remember, OpenAI is in the middle of a transformation from a not-for-profit to a for-profit. Of course, I’m not a legal expert on the topic, and when I ask my legal colleagues, very few are experts on this particular topic.
What do you need to do to move from a not-for-profit to a for-profit entity? One of the rules is that you have to show that, in the transformation, you are not expropriating the not-for-profit and the ultimate goal it was set up to finance. In the moment in which Musk is making, what, a $94 billion . . . what is the number?
Bethany: Who knows? A billion here? A billion there?
Luigi: Who knows? The offer to OpenAI is basically putting a floor on the value of the not-for-profit. Now, in the transformation, they need to show that they are leaving $94 billion to the charity. He has made the transformation much more expensive for Sam Altman.
Bethany: See, I’m delighted by that. Maybe that’s not billionaires squabbling, but I’m delighted by anybody throwing a wrench in anything. Is that terrible? That must betray something really awful.
Luigi: It’s not terrible because in this case, you are delighted by the fact that $94 billion will be available for public charity.
Bethany: But I wonder if he’s doing it just to be a jerk, just to throw sand in the gears of a competitor. Or if he’s doing it because he’s absolutely serious.
It may not even matter, because Musk seems to have an incredible amount of luck. You remember with Twitter, when he set out to buy it, it was a joke. He didn’t actually want to buy it. He got forced to buy it, and with it, he actually bought control of all three branches of the US government. It turned out to be incredibly lucky for him that his feet were held to the fire and he was forced to buy it.
Even if he’s setting out to throw sand in the gears of Sam Altman, it may end up working out to his advantage somehow because he seems to be lucky that way.
Luigi: I disagree. I think he did it on purpose. He knew the power that was coming with Twitter and he paid for that power. When he claimed that he wanted to get out, it was just a way to get a lower price and a negotiation tactic. But I think it was there all along.
Anyway, at least one time, there is something for the benefit of humankind.
Bethany: Maybe so.