Sam Altman on how life will change once AGI steals your job – and what ChatGPT thinks about it


Sam Altman on ChatGPT AGI, AI agents, and the future of jobs


Published Feb 10th, 2025 6:24PM EST

Image: OpenAI

Reading a blog post from OpenAI CEO Sam Altman on this particular Monday in February makes perfect sense, considering what’s happening in the world right now. The AI Action Summit in Paris has world leaders and tech execs in attendance, discussing AI’s future and the potential regulation needed to safeguard the space.

Sam Altman penned a blog post titled Three Observations, sharing a mission statement for the future of ChatGPT and other OpenAI technology, with a clear focus on AGI (Artificial General Intelligence). The CEO gives us his incredibly optimistic view of what AGI and AI agents will mean for the world in the near and more distant future and what life might be like once AGI and AI agents steal your jobs.

The mission statement came at the end of an incredibly busy period for OpenAI, and it carries extra weight considering the headwinds the company has had to face.

In the past few weeks, the company released its first AI agents (Operator and Deep Research) and made two ChatGPT o3 models available, all while closing a massive funding round. That’s despite the continued departures of ChatGPT safety researchers from the company and the unexpected competition from Chinese rival DeepSeek.

I wondered what OpenAI’s main creation would think about the blog, so I went to ChatGPT (GPT-4o) to ask the AI how it felt about the post. As expected, the AI recognized that the mission statement is about technologies similar to itself, without having any feelings about it. ChatGPT also highlighted concerns with Altman’s carefully edited line of thinking.

Altman’s view of the post-AGI world

Altman started the blog by explaining AGI after making it clear that OpenAI’s mission is to ensure that AGI benefits humanity. As you’re about to see, the exec didn’t offer a perfectly objective definition of AGI or address what AGI means for the Microsoft-OpenAI business relationship:

Systems that start to point to AGI* are coming into view, and so we think it’s important to understand the moment we are in. AGI is a weakly defined term, but generally speaking we mean it to be a system that can tackle increasingly complex problems, at human level, in many fields.

Altman then explained the rapid progress in AI development, indicating that the cost to use a given level of AI tends to fall about tenfold every 12 months, leading to increased usage. GPT-4 prices from early 2023 had dropped by about 150 times by the time ChatGPT reached the GPT-4o model in mid-2024.

The CEO also made it clear that OpenAI won’t stop investing in AI hardware in the near future, which is likely a needed remark in a post-DeepSeek world. A few weeks ago, the Chinese AI model stunned the world with ChatGPT-like abilities achieved at much lower cost.

ChatGPT Deep Research is a new AI agent that can research the web for information. Image source: OpenAI

All these AI developments will lead to the next phase of AI evolution, including AI agents, on the way to the age of AGI. That’s where Altman gave the example of an AI agent working as a software engineer:

Let’s imagine the case of a software engineering agent, which is an agent that we expect to be particularly important. Imagine that this agent will eventually be capable of doing most things a software engineer at a top company with a few years of experience could do, for tasks up to a couple of days long. It will not have the biggest new ideas, it will require lots of human supervision and direction, and it will be great at some things but surprisingly bad at others.

Altman didn’t say this agent would take the job of a human engineer, but he might just as well have. Imagine millions of AI agents taking over jobs in countless fields:

Still, imagine it as a real-but-relatively-junior virtual coworker. Now imagine 1,000 of them. Or 1 million of them. Now imagine such agents in every field of knowledge work.

Yes, that’s a nightmare scenario for some people, and it’s easy to understand why, even though Altman paints an overall rosy picture of what’s ahead and downplays the negative side effects. Altman said the world won’t change immediately this year, but AI and AGI will change it in the more distant future. We’ll inevitably have to find new ways of making ourselves useful (read: work) once AI takes over:

The world will not change all at once; it never does. Life will go on mostly the same in the short run, and people in 2025 will mostly spend their time in the same way they did in 2024. We will still fall in love, create families, get in fights online, hike in nature, etc.

But the future will be coming at us in a way that is impossible to ignore, and the long-term changes to our society and economy will be huge. We will find new things to do, new ways to be useful to each other, and new ways to compete, but they may not look very much like the jobs of today. 

Agency, willfulness, and determination will likely be extremely valuable. Correctly deciding what to do and figuring out how to navigate an ever-changing world will have huge value; resilience and adaptability will be helpful skills to cultivate. AGI will be the biggest lever ever on human willfulness and enable individual people to have more impact than ever before, not less.

Altman also mentioned that the impact of AGI will be uneven, which is probably a massive understatement. He also explained how day-to-day life might change for people:

The price of many goods will eventually fall dramatically (right now, the cost of intelligence and the cost of energy constrain a lot of things), and the price of luxury goods and a few inherently limited resources like land may rise even more dramatically.

He said the road ahead for OpenAI “looks fairly clear,” but it depends on public policy and collective opinion.

Altman also mentioned there “will likely be some major decisions and limitations related to AGI safety that will be unpopular, directionally, as we get closer to achieving AGI,” without disclosing what they would be.

It is reassuring to see Altman talk about AGI safety, but as a ChatGPT user myself, I’d want more specifics. Altman did mention the need to empower individuals with AI rather than let authoritarian regimes use it for mass surveillance and to strip people of autonomy.

OpenAI’s ChatGPT Operator AI agent. Image source: OpenAI

Altman also noted that “ensuring that the benefits of AGI are broadly distributed is critical,” but that seems an unlikely goal for any product, including the AGI and AI agent versions of ChatGPT. The cost of access might be dropping, but that might not be enough for everyone to use AI.

In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention. We are open to strange-sounding ideas like giving some “compute budget” to enable everyone on Earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect.

In closing, Altman said that anyone in 2035 “should be able to marshall the intellectual capacity equivalent to everyone in 2025; everyone should have access to unlimited genius to direct however they can imagine.”

“There is a great deal of talent right now without the resources to fully express itself, and if we change that, the resulting creative output of the world will lead to tremendous benefits for us all,” Altman concluded.

What ChatGPT thinks about it

Hopefully, the road to AGI and the massive AI-related economic transformations that will impact society in the coming years will be easy to handle. But there’s no denying that the future Altman describes will move a lot of jobs to AI.

It’s something I’ve been considering all along. It’s why I told you a few days ago to start using ChatGPT even if you fear or despise AI. Your future job might depend on knowing how to handle and talk to AI. Altman isn’t wrong about human creativity and supervision being important in a world of AGI and AI agents.

ChatGPT recognizes itself in Sam Altman’s blog. Image source: Chris Smith, BGR

I fed Altman’s blog to ChatGPT and asked it whether it thought the mission statement was about itself. As you can see in the screenshot above, ChatGPT recognized itself in Altman’s writing.

“While it doesn’t directly call me out, it’s essentially describing the trajectory I’m on—becoming more useful, widespread, and integrated into society,” the chatbot told me in what’s practically a summary of Altman’s musings.

I then asked the AI if it liked what the blog said about its future, knowing full well that ChatGPT can’t “like” or have feelings about anything. ChatGPT didn’t surprise me, but the AI did give me the pros and cons of Altman’s AGI sales pitch.

ChatGPT analyzing the good about its future. Image source: Chris Smith, BGR

ChatGPT gave me the good and the bad of AGI and AI agents taking over the world. It identified the potential strengths of better AI versions of itself, as outlined by Altman, and the impact smarter, cheaper, and widely accessible AI could have on humanity.

ChatGPT also flagged the concerning parts of Altman’s carefully worded AI vision. The AI mentioned issues like over-reliance on AI, saying that it “exists to augment human intelligence, not replace it.” ChatGPT also identified Altman’s downplaying of the risks of AI stealing jobs and handing certain individuals and organizations more power. Interestingly, ChatGPT also asked rhetorically, “Who decides how AGI is used? Will it empower individuals or serve corporate/government interests?”

Here are the concerning parts about the future of AI. Image source: Chris Smith, BGR

The chatbot’s final take is also interesting and fits well with what’s happening in Paris this week. AI has the potential to be revolutionary if done right, but Altman’s blog is overly optimistic:

The blog paints a very optimistic picture of AI’s future, but it sidesteps the risks and challenges that come with it. If AGI is developed with the right policies, ethics, and accessibility, it could be revolutionary. But if mishandled, it could create serious societal imbalances.

In short: The potential is amazing, but the path needs careful guidance.

ChatGPT doesn’t have genuine opinions; no generative AI does. We’ll have to wait for AGI and beyond for that. However, there’s good news in the way OpenAI built ChatGPT. It’s an AI that can handle all sorts of questions and provide answers critical of its own creators, as seen above. That’s already something you’d want from AI, and something products like DeepSeek, with their built-in real-time censorship, can’t offer.

Chris Smith has been covering consumer electronics ever since the iPhone revolutionized the industry in 2007. When he’s not writing about the most recent tech news for BGR, he closely follows the events in Marvel’s Cinematic Universe and other blockbuster franchises.

Outside of work, you’ll catch him streaming new movies and TV shows, or training to run his next marathon.
