Will I be out of a job?
That’s a question I’ve been asked more frequently these days.
It’s a reasonable question. After all, with all the latest advancements in artificial intelligence (AI) – and the very clear progress towards artificial general intelligence (AGI) that we’ve been tracking in The Bleeding Edge – it’s natural to feel that our work will just go away.
I’m often even asked this question as it relates to my own job – as an analyst.
Deep Research
Over the weekend, OpenAI announced its latest AI product – Deep Research.
It was odd to drop such a major product release on a Sunday – no doubt as a response to the pressure OpenAI has been receiving… following the release of DeepSeek-R1 from China’s High-Flyer Capital Management. I covered that topic in depth last week in Did The Leaders in AI Get it All Wrong? and The Perfect Short.
Deep Research, which is entirely unrelated to DeepSeek, is a new agentic AI developed by the team at OpenAI. It’s based on a version of OpenAI’s yet-to-be-released o3 model.
At the moment, Deep Research is only available to ChatGPT Pro subscribers, a subscription that runs $200 a month.
While that’s out of the price range for most consumers, it’s worth every penny for a business or any researcher, given the incredible productivity boosts made possible with a technology like this. It’s not perfect, for reasons I’ll explain, but it’s an incredible hack to save time and improve efficiency.
What makes Deep Research powerful is that it has been optimized for web browsing and data analysis. It employs chain-of-thought reasoning, which enables it to take a multi-step approach to research and analysis, similar to how we humans would approach a task.
And when Deep Research finds new data that points in a different direction than the one it is taking, it can retrace its steps, go back, and start again with the new information.
The implications are significant. Imagine being able to read and digest every research paper, every article, every image, and every directly related source tied to the subject of the research… all in a matter of minutes.
Here’s an example of how Deep Research works. The AI was prompted with: “Compile a research report on how the retail industry has changed in the last three years. Use bullets and tables where necessary for clarity.”
In the example above, after Deep Research is prompted, it actually asks a clarifying question before beginning its research. We can then see the multi-step “thinking” process happening on the right-hand side.
Those who’ve been tracking AI developments alongside me in The Bleeding Edge every week already see the acute distinction…
First-wave LLM technology – like ChatGPT – utilizes a zero-shot response approach. We provide a prompt. ChatGPT’s response is based on the information from our prompt, along with its limited pre-trained knowledge, and the answer is returned in a matter of seconds.
If we don’t like the response, we simply edit and revise our prompt, and we try again.
This is very different from the new agentic workflow. The agentic AI has full agency to do all the work it believes will produce the optimal result.
Deep Research can complete a request like the one above in a matter of minutes… a task that would take a human several hours, at a minimum. And it can cover far more ground.
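To make the contrast concrete, here is a minimal sketch of the two approaches. This is purely illustrative – OpenAI has not published Deep Research’s internals, and every function here is a hypothetical stub standing in for a model call or a web search:

```python
# Illustrative sketch only: a zero-shot call vs. an agentic research loop.
# All functions are hypothetical stubs, not real OpenAI APIs.

def zero_shot(prompt: str) -> str:
    """First-wave LLM style: one prompt in, one answer out, no follow-up."""
    return f"answer({prompt})"  # stands in for a single model call


def agentic_research(prompt: str, max_steps: int = 5) -> list[str]:
    """Agentic style: plan, gather, and revise over multiple steps."""
    plan = [f"search: {prompt}"]  # initial plan derived from the prompt
    findings: list[str] = []
    steps = 0
    while plan and steps < max_steps:
        task = plan.pop(0)
        result = f"result({task})"  # stands in for a web search or page read
        findings.append(result)
        # When a result points in a new direction, the agent can "retrace
        # its steps": queue a revised task and keep working.
        if steps == 0:
            plan.append(f"follow-up: refine {prompt}")
        steps += 1
    return findings
```

The key difference is the loop: the zero-shot path returns after one call, while the agentic path keeps adding and completing tasks until its plan is exhausted or it hits a step budget.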
But there are some major limitations.
Human Augmentation
At Brownstone Research, we spend hundreds of thousands of dollars every year accessing proprietary data and information behind paywalls. This is critical in order to access valuable and accurate information that isn’t widely available, which feeds into our own research and analysis.
I also speak with my vast network every day for insights and information. I closely track what’s happening in private companies working with bleeding-edge technology… in order to understand the most important future trends and technological developments. Much of this information isn’t openly available on the web.
And this is a limitation of a tool like Deep Research:
- Deep Research uses only information that is openly accessible and free on the internet
- Deep Research isn’t accessing information that is “hidden” behind paywalls
- Deep Research can’t reliably discern whether information is made up or politicized
- Deep Research can’t access factually correct information that has been banned or censored by the censorship industrial complex
- Deep Research is still capable of hallucinating (i.e. making stuff up)
- Deep Research has no access to human intelligence, specifically the knowledge and information that is well-known by industry experts but isn’t necessarily written down
While my team and I use AI on a daily basis to support our work, we are also careful about checking where the information comes from. We take nothing at face value.
I can’t tell you how many times we discover that data or information provided by an AI tool came from untrustworthy “free” sources on the web.
This is why my job won’t go away, and why many jobs won’t disappear. Asking whether AI will replace us is the wrong framework, and I won’t be retiring any time soon.
The right framework is to recognize that an AI like Deep Research is a tool for human augmentation. It helps us accomplish time-consuming tasks in a fraction of the time – freeing up vast amounts of it – and improves our own productivity.
It can even be used for simpler tasks that are oriented toward consumers. Let’s say that you’re looking to purchase a new car, and you’d like a specific comparison between models that meet your criteria.
Deep Research can achieve in minutes what would certainly take hours of searching on the internet. And it would do a more complete job than a typical consumer could manage on their own.
For more complex searches, one of the biggest problems is just knowing where to look for information. Deep Research doesn’t have that problem. It can easily go to all available (free) sources of information and pick out what it determines to be the most appropriate, based on the prompt it received.
As an indication of the progress that Deep Research has made over prior models and competing models, OpenAI published its results from Humanity’s Last Exam, one of the more complex AI benchmarks to measure general intelligence.
Humanity’s Last Exam is composed of more than 3,000 questions assembled by experts in more than 100 subjects. Here is just an example of what two questions look like:
Deep Research scored an accuracy of just 26.6%, about 2X better than OpenAI’s very new o3-mini model, and a radical improvement over last year’s GPT-4o model, which scored just 3.3%.
Still fairly poor, but what’s important for us to internalize is the rate of improvement over prior models. No, Deep Research isn’t perfect yet, but it is radically better than something that existed just a few months ago.
And when AI models can achieve 90%+ on a benchmark like Humanity’s Last Exam, they will have achieved general intelligence… and thus become insanely useful.
For those who might like to have a deeper look at what Deep Research is capable of doing today, you can go right here to see an example output of Deep Research when prompted with the following:
Research consensus on how microplastics affect the human body – how it affects children, young adults, middle-aged adults, and so on. Also, detail the mechanism with which it affects the human body. How to reduce microplastics in the environment and how to eliminate already stored microplastics.
It’s a very useful and comprehensive answer to an interesting topic. We can easily see how useful that would be to any researcher or curious user.
The Trigger to Widespread Adoption
OpenAI, while still heavily biased, remains on an extraordinary path of technological development.
The release of DeepSeek, and the false information about its development costs, may have raised the question of whether OpenAI (and Microsoft) are spending too much money developing these frontier models…
But what’s unique about this moment in time is that it isn’t just a wild research and development project. The speed at which OpenAI has been able to monetize its AI has been remarkable.
Who would have thought that the subscription revenue for the use of artificial intelligence would grow at speeds that make social media look slow?
Just imagine how quick adoption will be when an agentic AI is released to consumers for free, or at a nominal cost, to help complete daily tasks and save us hours a day…
There is no question in my mind that adoption will be even faster than what is shown above. From zero to a billion users in a matter of months.
Jeff