The Sunday edition #511: GPT-5 roadmap; AI politics; new GLP-1s – Exponential View


Hi, it’s Azeem.

I’m back from a busy week in the Bay Area meeting with founders, investors and builders. The Bay is way ahead of the rest of the world yet again. It feels like a different world. I will be spending more time there in the coming months. OK, now onto the latest weekend edition: GPT-5 is coming, GLP-1 drugs are evolving and AI agents have a job board. Let’s go!

Today’s edition has an audio component to it. We’re working with the team behind PocketPod to bring you this newsletter in the form of a conversation with me. The conversation is AI-generated, but the meaning behind it is real. Let us know what you think in the comments!

Sam Altman confirmed OpenAI’s near-term release of GPT-4.5 (codenamed “Orion”), calling it the “last non-reasoning model”. GPT-5 will arrive as a “meta” model within months – dynamically deciding how much reasoning power or specialisation to use on the fly. Pricing will tier by “intelligence level”.

Rather than just scaling capabilities, OpenAI seems focused on integrating all its different models and features together. Some think this is proof that the company has run out of ways to boost sheer model performance, but personally, I disagree; there’s still room to scale more sophisticated reasoning.

OpenAI’s latest reasoning model – o3 – has shown impressive results, ranking among the top 200 competitors on Codeforces, a popular programming-contest platform. The catch is that you don’t always need that level of heavyweight computation for every problem. This is where OpenAI’s new focus comes in: letting each model dynamically assign just the right amount of reasoning to a given task, rather than running at “top 200 in the world” intensity all the time.

This adjustable intelligence level could become a design pattern. Anthropic’s forthcoming AI model will similarly be a hybrid that can switch between deeper reasoning and instantaneous responses. Developers will be able to use a slider to select the level of intelligence they want, expressed as the number of tokens the model can reason with. It is expected to outperform OpenAI’s models at practical coding tasks, a consistent strength for Anthropic.
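The pattern behind that slider, routing each request to a reasoning-token budget instead of running every query at full intensity, can be sketched in a few lines. Everything below (the tier names, the budgets, the keyword heuristic) is an illustrative assumption, not any vendor’s actual API:

```python
# Sketch of "adjustable intelligence": pick a reasoning-token budget per
# task, rather than spending "top 200 in the world" compute on everything.
# The tiers, budget numbers and difficulty heuristic are hypothetical.

def estimate_difficulty(prompt: str) -> str:
    """Crude stand-in for a learned difficulty classifier."""
    hard_markers = ("prove", "optimise", "debug", "step by step")
    if any(marker in prompt.lower() for marker in hard_markers):
        return "hard"
    return "easy"

# Token budgets per tier -- the "slider" a developer might expose.
REASONING_BUDGETS = {"easy": 0, "hard": 16_000}

def pick_reasoning_budget(prompt: str) -> int:
    """Assign just enough reasoning tokens for the given task."""
    return REASONING_BUDGETS[estimate_difficulty(prompt)]
```

A real system would replace the keyword check with a learned router, but the economics are the same: most queries get the cheap path, and only the hard ones pay for deep reasoning.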

See also: 

  • OpenAI is getting close to launching its own chip.

  • Elon is hyping Grok-3 as “scary smart”, with launch just weeks away. Leaked comments from a (now former) employee suggest it is still behind OpenAI’s reasoning models on coding.

Large language models may be developing coherent “value systems” as they scale, rather than merely reflecting human biases – and they seem to lean towards what we used to consider the American left. A couple of observations here: I don’t think there is a robust political-theoretic basis for this analysis; “The Political Compass” it resembles is little more than a parlour game.

It’s weirdly parochial, and a tip of the hat to the culture wars, to map AI systems onto today’s political positions rather than a more robust, persistent framing of values (perhaps drawn from the World Values Survey or something similar). What is interesting is the notion that AI systems may converge on a similar set of values.