Progress in AI shows little sign of slowing down. Last week, Tyler expressed amazement at the latest products from OpenAI. It’s impossible to know where all of this is going, and I don’t find it difficult to imagine either utopian or dystopian outcomes.
Yet one thing I’m confident of is that there’s no reason to worry about AI taking our jobs. This is despite what serious people expect the technology to do. Larry Summers has said that AI has the potential to replace “almost all forms of human labor,” particularly in white-collar professions, and Noah Carl has more recently reviewed scientific literature showing that we’re getting to the point where LLMs can do most or all cognitive tasks at least as well as humans, noting that the technology will only get better.
Optimists can reply that we’ve heard such arguments before, but humans have always turned out fine. Check out this graph of the US unemployment rate from 1901 to 2021.
Our era looks completely normal. There are few absolute lessons in history. But here is one that has as good a record as any: Luddites, or those who want to restrict technology for the sake of preserving jobs, are always wrong. They might seem to be correct if you narrow your focus to a very short time horizon or one small group or industry, but in no historical era would humanity have been better off in the long run by listening to them.
Take that lesson, and combine it with current data, and you can see there’s nothing yet in the unemployment numbers to suggest that the availability of jobs is decreasing. That creates a strong presumption against such arguments.
Pessimists, however, sometimes make an analogy to horses that has been traced to the twentieth-century economist Wassily Leontief. The number of horses and mules in the United States grew sixfold between 1840 and 1900, but by 1960 their population had declined by 88%. Humans made it out of the Industrial Revolution just fine, but horses saw a population collapse soon after machines were invented that could do most of what they could do, only better. If one species can be replaced and eventually see its numbers fall in response to technological change, why not us? The idea that AI is fundamentally different from what has come before seems reasonable.
Yet, for reasons I will explain below, I don’t think AI taking our jobs is something to worry about. Moreover, such fears are potentially dangerous, because they can lead to misguided policies that slow economic growth. Doom remains a real concern, but “think of the jobs lost” remains an awful guide to policy, just as it always has been. I present three reasons for this below, and then argue that even if I’m wrong, redistribution through the welfare state is a trivially easy solution to job loss, as well as a likely one, in a world where AI ends up having as large an impact as many believe.
The first reason not to worry about losing jobs is that many jobs that rationally should have been eliminated long ago continue to exist. To take one example, I’ve been on Adderall for a decade and a half. Every three months, I have to go to a medical professional and renew my prescription. This used to have to be done in person, but since covid I’ve thankfully been able to have my appointments over the phone. So I’ve been “in therapy” for around 15 years, and every conversation with the doctor goes like this.
Doctor (more precisely, nurse practitioner): How have things been?
Me: Good.
Doctor: So the medication is still working out?
Me: Yeah, nothing has changed.
Doctor: Ok, do you still go to that pharmacy I have here on file?
Me: Yes, thank you.
I then go to the pharmacist, they ask “what is your birthday?”, I give it to them, and they finally hand over my precious drugs.
You don’t have to think much of ChatGPT to believe that it can handle all of this already.
Why don’t I just buy the drugs I want directly from the companies that make them? The answer is government regulations. I can only have Adderall if a human with certain degrees fills out a form, and another human with different degrees reads it and then gives me the drug. The process of paying for all this goes through an insurance company, which creates even more jobs. My psychiatrist has a receptionist, and is part of a medical group that pays rent for a building, providing money to a landlord, and so on. The psychiatrist and the pharmacist both had to go through years of medical training to get to their positions, creating jobs for professors and administrators all along the way.
The entire process of getting me Adderall should not require this many people. But government is paternalistic. It has rules about which products you can buy, and makes you jump through hoops to get them. The healthcare system is set up to pay for treatments and drugs people “need” rather than those they simply want, and drawing this arbitrary line consumes a lot of manpower and effort.
I don’t see how any of this gets replaced by AI, even though AI can already make small talk and ask for my birthday. It doesn’t do to say that the state will find a better way to engage in paternalism and achieve just outcomes by simply asking AI to devise a more rational system. Pointless regulations stay on the books for decades or centuries, and fundamental reforms to the medical system seem unlikely at any point in the near or medium term.
Similarly, there’s a recurring debate about whether technology is going to “disrupt education.” People like Bryan Caplan note that kids don’t retain much of what they learn in school. If you believe that the education system exists to teach people things, the world looks quite confusing. Bryan reminds us that you can go right now and get a world-class education at Harvard or MIT by just sitting in on lectures, or even watching them on YouTube. Nobody does this, because the function of the education system is some combination of signalling, arbitrary credentialism, and socialization. This is why kids still go to college and graduate school. If they want to become the kind of people who ask me how my day was so I can get Adderall, they’re going to have to spend years jumping through hoops.
Noah Carl thinks the fact that AI can now answer questions from psychology tests has important economic implications.
Peter Scarfe and colleagues submitted AI-written answers to an online exam for a psychology course at a major British university. They found that 94% of the AI-written answers went undetected, and that the AI-written answers were awarded grades half a grade-boundary higher than those written by human students.
Ok, but this raises the question of why we have psychology courses in the first place. The implicit assumption here is that there is some economic reason for us to ask 19-year-olds to memorize facts about Maslow’s hierarchy of needs and then regurgitate them, such that once a machine can pose the same questions and produce the same answers, the professor and the student both become superfluous. In reality, the main reason we have psychology courses is that we have a society largely built on signalling and pointless credentialing. We can choose to continue doing this or stop, but whether AI can answer the same questions that undergrads can, or design and grade their tests, has little to do with it. Most psychology students don’t become psychiatrists, and, as we have seen, even those who do are largely engaging in make-work.
Even when the law doesn’t have hard requirements that things be done by humans, norms and the tort system will also keep people employed. I’m amazed at how good ChatGPT is at legal analysis; in my experience, it seems better suited to dissecting legal arguments and court cases than to just about anything else. Is the next Supreme Court Justice therefore going to be an AI? Probably not. While the Constitution does not specify any requirements to be a federal judge, it’s been taken for granted that appointees need to be human. Human judges seem to enjoy the company of human lawyers and human clerks, and will probably be a lot less favorably inclined towards a firm that employs only LLMs.
How much of the economy is fake like this? I’d say healthcare, education, and government are to a large extent made up of nonsense jobs that could easily be replaced or eliminated completely, but are not, for reasons having to do with norms and the state of the law. These fields make up about 30% of the workforce. Some of the jobs even in these industries will certainly be replaced by AI in the coming years. One can imagine a university asking ChatGPT to conduct a spreadsheet analysis rather than hiring someone to crunch the numbers. But occasionally they’ll decide to splurge on getting a friend or relative a job. Even if universities can’t justify hiring humans to do things an LLM can do in seconds, the expansion of administrative staff at colleges, as they have been installing rock climbing walls and creating new diversity offices, shows just how creative people can be in finding new work for themselves and others.
There’s a wide-ranging debate right now over self-driving cars, with many making the credible claim that they’re already safer than human drivers under most conditions. But everyone in the industry understands that they need to be almost perfect to be allowed to operate. The public doesn’t care if humans cause accidents that kill people, but if a machine makes a major mistake it can shut the industry down. To be allowed to drive around San Francisco, Waymo has to provide a detailed account of each accident. Even collisions where a human is at fault, which appears to be the vast majority of cases, end up being bad PR for self-driving cars. We start with the presumption that humans should be allowed to drive except in very rare circumstances, and no one has to justify letting an individual keep their driver’s license once they’ve passed a very simple test, unless strong evidence piles up that they are a hazard on the road.
What’s interesting here is how much data self-driving cars have to produce in order to be allowed to go to market. At some point, they’ll have enough, and will reach a performance level that makes replacing humans a no-brainer. But technologists in most other fields will not be able to make as strong a case. Will politicians ever let machines replace doctors, surgeons, and radiologists? Even when firms are allowed to rely mostly on machines, they’ll probably keep a human supervisor around as lawsuit insurance to be able to win over juries, the way DEI programs have worked as a defense against anti-discrimination lawsuits. Once in a while, you’ll hear about how AI can do some job better than doctors. Yet the LLM is not allowed to give you your medication, no matter how smart it is. Maybe the surgeon can be replaced by a machine, and some parts of his job already have been, but if you worry about legal liability, you’re probably going to want to spend some money on a human as insurance in case anything goes wrong.
These ideas might sound similar to those of David Graeber, but I have an almost opposite understanding of the economy. He seems to think that markets are a scam and that everyone who isn’t a coal miner or something isn’t adding value. This takes things way too far, and is an insult to the human mind and what it has been able to accomplish. See for example the literature on how much value consultants can add to a business. Graeber is a leftist, so he wants more government control, and has therefore created a kind of backwards analysis in which it is the free market that overwhelmingly produces nonsense jobs. His methodology of determining which jobs are useful by asking people whether their jobs are useful is simply not very good. Nonetheless, there’s a much simpler variant of his theory that is true: laws encourage jobs to exist whose only contribution to society is that the law requires them to be there.
The Wall Street Journal recently reported on a painting sold out of an art collection for less than $50. An art-research firm bought it and has spent $30,000 in a quest to prove it was painted by Vincent van Gogh, in which case it might be worth over $15 million.
Note the obvious fact that the value of this painting has nothing to do with its inherent qualities. Anyone with access to the right AI software can instantaneously make a picture that is indistinguishable from what a lost Van Gogh would look like, if not paint it themselves. The entire value of this painting depends on whether it was made by a specific human being who died in the nineteenth century.
This is an extreme case, but the point is that a lot of the service economy, including entertainment, is like this. People want goods and services to be provided by specific humans, or by humans more generally. If you look at art criticism, for example the discourse around David Lynch films, you see that people discuss not just the work itself but the person behind it and what his intentions were. In the future, no one will be able to prove that an artist didn’t outsource much of the work of creating a movie or song to AI, but the product will not have much economic value if there isn’t a human being attached to it. A real person’s name becomes something of a luxury brand.
This gets at the problem with the “sexbots” discourse. Every so often a headline will go viral telling us that by some year, either men or women will no longer need the opposite sex because machines will be just as good at making them feel physical pleasure. This never happens, and it never will.
It’s true that young people are having less sex, but that’s not due to AI. Part of the experience of romantic relationships is knowing that there is another human being whom you make feel a certain way. Even if the relationship is unhealthy and has aspects of humiliation or sadomasochism, it’s just not the same if the other party is an automaton. I think this is even true for pornography, or more wholesome forms of gooning. I recently posted on X a picture of a Chinese girl dancing and asked whether it was AI, and the replies were full of men hoping that it wasn’t. What happens to pornography once AI and real life are indistinguishable is an interesting question. Young women probably won’t be able to make money off porn without doing meetups or finding other ways to establish their real-life identities. Perhaps this largely kills the human-based porn industry? Or will it increase demand for old-fashioned prostitution, where you can know you are dealing with a flesh-and-blood human being? Whatever form things take, the market will adapt, and what is real will be sold at a premium.
What applies to porn applies to every kind of influencer. You read me not because of my ideas alone. You also care that there is a person behind them, who is grappling both intellectually and emotionally with societal trends and the news of the day, just as you are. If I tell you about an experience from my life, you care that it actually happened, and that I’m telling the truth about how it influenced me.
Every form of entertainment is therefore safe. I also think that the preference for humans has implications beyond the future prospects of prostitutes and Substack writers. Consumers will, for example, tend to prefer retail stores and restaurants staffed by humans over those that aren’t. Of course, sometimes the savings from innovation in costs and convenience will be so great that jobs will be lost or establishments will close anyway. Yet while employment in retail has been decreasing on a per capita basis over the last decade and a half, the number of individuals working in restaurants has gone up, despite automation in forms like being able to pay your check on an iPad.
If people just want to eat, they can do that at home. Going to a restaurant is a social experience that includes waiters and hostesses in addition to other patrons. This is especially true of bars and clubs. Just as with porn, even if you could make robots that in their looks and behavior are indistinguishable from humans – and we’re obviously a long way from that – customers will want the real thing. If some businesses try to fool patrons by passing off humanoid robot staff as actual people, they can be dealt with through normal laws against fraud.
A couple of months ago I was at a street market and purchased a miniature notebook with a nice cover that a girl told me she had made by hand. I knew I was buying it because I appreciated the design in the context of the story behind the notebook, and I’m completely certain that if I had seen the exact same product sitting on a shelf at Target I wouldn’t have done the same. This obviously doesn’t apply to cans of Diet Pepsi, which nobody crafts by hand. Standardized mass-market commodities are increasingly made by machines, but the effect of that is a wealthier society with more time and money to spend on the human and artisanal.
The preference for humans interacts with legal requirements. People will want human rather than robot waiters, and the same instincts cause citizens as voters and jurors to prefer laws that require human supervision over machines. In the private sector at least, this is different from make-work, since the desire for humans as the providers of goods and services is no more or less real than any other preferences people have.
So jobs in government, highly regulated industries, pornography, art, hospitality, restaurants, and political writing are all safe. What else? I would guess that most jobs involving substantial movement in the real world won’t be replaced for a while. On this point, I refer to an excellent article by Katja Grace on why humans don’t trade with ants.
When discussing advanced AI, sometimes the following exchange happens:
“Perhaps advanced AI won’t kill us. Perhaps it will trade with us”
“We don’t trade with ants”
I think it’s interesting to get clear on exactly why we don’t trade with ants, and whether it is relevant to the AI situation.
When a person says “we don’t trade with ants”, I think the implicit explanation is that humans are so big, powerful and smart compared to ants that we don’t need to trade with them because they have nothing of value and if they did we could just take it; anything they can do we can do better, and we can just walk all over them. Why negotiate when you can steal?
Yet, as she points out, ants are actually good at chasing away other insects, cleaning areas that are hard for humans to reach, surveillance, digging tunnels, and perhaps dozens of other things we could potentially pay them for. The reason we don’t trade with ants isn’t that they’re inferior to us; it’s that we lack any method of cross-species communication and of agreeing to mutually beneficial arrangements. We can’t release a colony of ants into a cafeteria and let them have the food if they agree to then go home and stay away when they’re not wanted.
In terms of technological development, it doesn’t appear that we’re anywhere near replacing the majority of manual labor jobs. It may not be worth the trouble to automate such work when we can instead rely on creatures shaped by billions of years of evolution to move around in the real world, just as it would be more cost-effective to employ ants, if only we could talk to them, than to develop machines with similar capabilities.
Grace’s article imagines a situation where we are at the mercy of AI agents, and argues that even in that case they would still have use for us. I’m skeptical of this ultimate scenario, but the argument is even stronger if we assume that the takeoff in AI will simply cause major concentrations of wealth. People with money will look for ways to spend it, and whatever it is that humans do best will still be worth paying for, even if the plutocrats are misanthropes and we are as inferior to AI as ants are to us.
To be fair, using ants as our example species is cheating a bit. These are truly remarkable creatures. Does the thought experiment work for pigs, gerbils, or iguanas? Maybe a little, but certainly not as well. Yet I’d argue we’re a highly adaptive species, closer to ants than to wild turkeys. Plus, AIs will be made of completely different material rather than being carbon-based life forms, which implies that the differences between us and them will be so great that there will have to be some areas in which we retain an advantage.
The first thing to realize about a world where all white-collar work can be done cheaply is that it would be a much wealthier one. Even if I’m wrong about jobs and there turn out to be relatively few cases of individuals willing to pay anyone to do anything anymore, providing for people’s material needs should be trivially easy.
A back-of-the-envelope calculation can demonstrate this. Let’s imagine a world where AI truly can do all brain work. I think that 5% growth a year in such a world would be a ridiculously conservative estimate, as it would simply match economic performance in the 1960s. At that rate, the economy would be almost 3.5 times as large in 25 years. Assume, furthermore, that the size of government grows in tandem with GDP.
As of 2022, the federal government spent $1.19 trillion on 80 different welfare programs, not including entitlements. According to ChatGPT, the US, at the federal, state, and local levels, spends about $4–4.5 trillion a year on direct welfare programs. Let’s take the low end of that and subtract the approximately $1 trillion it collects in Social Security and Medicare, since we want to assume mass unemployment. That leaves about $3 trillion in welfare spending.
If the economy grows 3.5-fold in 25 years and the portion of the economy spent on welfare remains constant, that would be about $10.5 trillion going toward direct welfare payments by 2050. The US population is projected to be about 380 million by then, so we would be spending roughly $27,600 per citizen on direct welfare. If all of that were translated into direct payments, everyone would surpass today’s poverty line from that alone. The $27,600 figure of course ignores administrative costs and other inefficiencies, but I’m also ignoring what are in effect non-direct transfers, like jobs through direct employment and contracts, and the subsidization of education.
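For anyone who wants to check the arithmetic, here is a minimal sketch in Python using the round numbers above. The only liberty taken is compounding the 5% growth exactly rather than rounding the multiple up to 3.5x, which is why it lands slightly below the figures in the text.

```python
# Back-of-the-envelope check of the welfare numbers above.
# Inputs are the post's own round figures, not independent estimates.

growth_rate = 0.05            # assumed annual growth with AI doing all brain work
years = 25                    # horizon to roughly 2050
welfare_now = 3.0e12          # ~$3 trillion in current direct welfare spending
population_2050 = 380e6       # projected US population in 2050

multiple = (1 + growth_rate) ** years          # exact compounding: ~3.39x
welfare_2050 = welfare_now * multiple          # ~$10.2 trillion
per_citizen = welfare_2050 / population_2050   # ~$26,700

print(f"growth multiple: {multiple:.2f}x")
print(f"2050 direct welfare: ${welfare_2050 / 1e12:.1f} trillion")
print(f"per citizen: ${per_citizen:,.0f}")

# Rounding the multiple up to 3.5x, as the text does, gives
# $10.5 trillion and about $27,600 per citizen.
```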
All of this indicates that greater wealth, even if it destroys the vast majority of jobs, makes the problem of caring for everyone’s basic needs trivially easy. Some ask whether states will actually engage in enough redistribution to make a difference. Yet the historical trend has been for government spending to make up a higher percentage of GDP as wealth increases. Here is data for the US, UK, China, and Japan from 1800 to 2022.
Countries in the last 200 years have on this measure gone from less than 10% to somewhere between a third and a half. Government has for at least two centuries grown at a faster pace than the economy as a whole.
Why would this trend reverse in an era where AI is causing massive income inequality and threatening wide scale unemployment? Remember, all we need to do is assume constant government spending as a percentage of GDP to arrive at a world where there is enough direct welfare and other forms of government activity provided for everyone to cross the poverty line. More likely, the government spending-to-GDP ratio goes up.
An important point to note here is that we’ve already in effect all but eliminated poverty through welfare spending in the United States. If you look at official Census Bureau data you see there hasn’t been much change in the poverty rate since the 1970s. However, as the economist John Early notes, “official estimates of income inequality and poverty omit significant government transfer payments to low-income households; they also ignore taxes paid by households.” Here’s what the data looks like when you rely on the Census Bureau, compared to other analyses in which government transfers are included along with other adjustments.
It’s quite the trick that the left has pulled off: creating a welfare state to address poverty and inequality, then not counting its transfers when calculating current levels of poverty and inequality, thereby justifying even more transfers! It’s like a doctor who gives a patient medicine and then discounts the effects of his own treatment in order to claim the patient is still sick and keep prescribing higher doses. The point here isn’t to argue for or against the welfare state, or even to address the question of whether it has historically been responsible for the reduction in poverty we have seen. Rather, it is to dispel any fears that mass unemployment will leave people worse off. Even if a huge portion of humanity ends up useless, government will be willing and able to take care of them.
Historical trends on government spending and poverty alleviation show the problem with the horses analogy mentioned above. Humans never prioritized keeping horses alive. If that was a major goal of public policy, if governments and other institutions cared about the horse population as much as they do poor humans, we could afford a lot more horses now than we had in the twentieth century, all living lives of luxury. In fact, the average horse most certainly lives a better life than it did a century or two ago. Here the analogy of humans to horses might be a reason to be optimistic, as many would think that a world where population numbers declined but the typical person was healthier and better fed wouldn’t be a terrible scenario. Of course, unlike horses, we’re in control of our own breeding, but we seem to be headed in the same direction as our equine friends, with smaller numbers but a higher average quality of life. See also Maxwell Tabarrok on this analogy.
Of course, economic growth doesn’t simply increase government spending. It by definition makes people wealthier, which means that humans become more and more able to indulge their arbitrary preferences. A half century ago, if you wanted a pair of shoes, you’d maybe go down to a local store and see what they had, or at best browse the Sears catalog. Today, there are literally millions of options available to the consumer. To get a shoe of a different shape or color is practically costless. People indulge their aesthetic preferences more and more, and if there is even a weak preference for humans as the providers of goods and services, that will be something that the relatively well-off will be able to indulge. The rich might benefit more, but, barring some kind of doom scenario, everyone will be better off.
Another possibility people worry about is that putting everyone on welfare would make them listless and potentially violent. Again, there is no historical evidence for this. As the welfare state has grown and many young men have ended up supported by their parents or the government, developed countries remain at historically low levels of instability and political violence. Yes, populists make a lot of noise, and people express anger at the ballot box, but there is no wide-scale mass unrest practically anywhere across the richest nations. We sometimes get events like January 6, the BLM protests, and stories about Elon and his young friends trying to break into government databases, but such occurrences are not harbingers of civil war, or anything close. In 2020, Peter Turchin got a lot of attention for saying that the US was heading towards civil war based on a bunch of silly graphs he made up, and I wrote an op-ed in the Washington Post saying that this would not happen. I was proved correct, and we don’t hear much from Peter Turchin anymore. Welfare appears to anesthetize more than it causes unrest, and societies rich enough to provide wide-scale welfare can usually afford to spend enough on law enforcement and intelligence agencies to keep violence in check. Remember, we’re imagining a much wealthier society in the case that AI threatens to take all the jobs, and the historical trend has been for wealth to be associated with a lower rather than higher chance of mass unrest. It is extremely difficult to imagine widespread violence, particularly in a country with universal surveillance and facial recognition technology.
The discussion above assumes that the doomers – i.e., the “we’ll all turn into paperclips” crowd – are wrong. I’m not as sure of that as I am that we don’t have to worry about the employment issue.
Take this exchange.
If we get to a world where a statement like “AI will probably have most/all the wealth” is meaningful, then the analysis I’ve provided here isn’t worth much.
I think AI is going to follow the general trend in which technology improves people’s lives, and that it will be great for humanity overall. At the same time, I can’t completely dismiss the possibility that we will all be killed or enslaved by this technology. But what I think is close to impossible is a world where all of the following are true:
1. AI automates most or all intellectual work;
2. Humans remain in charge as a general matter; and
3. The majority or even a significant minority of human beings end up worse off or dead.
I think you can get any two of these outcomes, but not all three.
If AI automates most work (1) and humans are in charge (2), then the fruits of growth will be so extreme that they will trickle down, whether through new jobs or government intervention in the form of welfare and expanded make-work, so (3) cannot be true.
By the same logic, if AI automates most work (1) and humans are worse off (3), then we’ve somehow lost control, so there’s no way (2) can be true.
If humans are still in charge (2) and we are worse off (3), then AI wasn’t as big of a deal and some other catastrophe hit, so (1) cannot be true.
I guess there’s another possibility, which is that humans are in control, but it’s the Chinese Communist Party or some other group we don’t want to be in charge. Yet even in that case, I think their preference wouldn’t be to harm or eliminate foreigners, but rather to make sure they don’t challenge Chinese supremacy. If ISIS or North Korea first developed superintelligence and could dominate the globe it might be a different story, but thankfully the worst people in the world are unlikely to ever be on the cutting edge of technology.
As a practical matter, this means that Luddites are still the enemy. Society does not need to protect jobs, and as a general matter, if redistribution is your concern, then direct welfare is better than make-work. There might be cases where we keep humans employed through laws and regulations because we want them in certain professions. You might like having a human cop instead of a robocop, and I wouldn’t recommend society force everyone to accept the robocop simply because we have data showing he’s just as competent at the job. In that case, we are not witnessing what I would consider make-work, since the human given a badge by the government actually makes people feel better and serves a public good. This is different from, say, opposing the automation of ports because it makes people feel good to let union members keep their jobs. The preference for human cops can be felt in individuals’ day-to-day lives and is based on human nature, reflecting an outcome we might actually get if law enforcement were privatized, while make-work for unions is simply a group relying on mass ignorance to manipulate the public for its own ends.
One could argue that I’m trying to have it both ways: I denounce government-sponsored and mandated make-work while citing it as a reason not to worry about AI taking everyone’s jobs. Yet while I would prefer that all make-work be replaced by free markets or direct welfare payments, its entrenchment in policy has to be taken as a well-established feature of our society that is not going away anytime soon. It would be a mistake to use the jobs issue to justify more of it, since I believe that our preferences for humans, our superiority to robots in moving through the world, and government spending are more than enough to ensure that we don’t have to worry about the consequences of mass unemployment. The fact that make-work exists is simply another reason not to worry about a jobspocalypse; even without consciously advocating for more of it, we’ll be fine.
I hesitate to put percentages here, but what the hell. I would estimate likely outcomes as follows.
- AI makes the world much better, with a growth in living standards that at the very least matches the pace of the postwar boom, up to utopia (70%).
- AI has the potential to make the world better, but we screw it up so badly that any increase in living standards is mostly or completely wiped out by things like more expansive government, the decay of institutions, and a turn to bad ideas like socialism and populism (15%). In this scenario, bad events look more like the War in Ukraine than any AI apocalypse: they result from the kinds of mistakes leaders have made all throughout history, not from anything clearly and directly driven by AI.
- AI kills us all or leads to some other doom or doom-adjacent scenario, like terrorists releasing a bioweapon, or some combination of AI-related events like that (14%).
- AI leads to bad outcomes for humanity in ways that can be directly blamed on conventional concerns about AI taking jobs, and the resulting poverty and political upheaval (1%).
As for the policy implications of all this, I’m of the position that alignment is a concern worth worrying about, while the impact on jobs is something we should completely ignore. If your estimate of doom is high enough, you might hope for concerns over jobs to lead to a shutdown of AI even if you think the jobs concern isn’t worth worrying about, since it’s easier to get people riled up about machines doing all the work than it is to explain to them the ideas of Eliezer Yudkowsky.
Yet I think that the arguments about doom are mostly of historical interest at this point. If the reports on the cost of training DeepSeek are real, the cat is already out of the bag. There is practically no way to shut down AI, or at least there isn’t the political will to do so. We can only hope that the alignment problem is solvable and that the people at the cutting edge of this technology are proceeding wisely.