All too many employers are placing their businesses at strategic risk by cutting headcount before being sure AI can perform the tasks they need it to, an AI expert has warned.
Shomron Jacob is head of Applied Machine Learning and Platform at enterprise AI applications platform provider iterate.ai. He believes the big question organizations should ask themselves today is whether they are handling job cuts “strategically or reactively”. He explains:
The pattern is real: companies are restructuring workflows around AI capabilities, which inevitably changes headcount requirements. The critical question isn’t whether this is happening, but whether companies are doing it strategically or reactively. From what I’ve seen evaluating enterprise AI strategies, most organizations are making these decisions without proper readiness assessments. They’re cutting roles before they’ve validated that AI can actually perform those functions reliably.
But this approach will inevitably create “strategic risk” for the business, Jacob warns:
We’re going to see a wave of regret [due to premature redundancies] similar to what the Orgvue research suggests. Companies that eliminate expertise before building proven AI capability end up with skills gaps, institutional knowledge loss, and failed automation initiatives that cost more than the headcount savings.
Matthew Baden, Managing Director for Technical at tech recruitment consultancy The Search Experience, agrees:
A lot of companies rushed to replace people with AI to capture quick wins, only to find that current models still produce fairly generic output that needs significant human oversight. When you cut experienced people too quickly, you lose tenured knowledge and the ability to handle edge cases – the exact areas where AI struggles to keep up. We’re already seeing some quiet regret and selective rehiring. AI works best when it amplifies strong people, not when it replaces them outright.
As a result, he believes that most job roles are more likely to be re-defined rather than disappear entirely, especially if they combine technical work with judgment, context, or customer insight.
Tackling under-estimated risks
The upside of this situation, Baden says, is that companies will be able to operate leaner teams with better output per person. The downside is that if cuts are made too quickly, institutional knowledge disappears and employees end up dealing with a lot of AI-generated output that still needs fixing.
Loss of institutional knowledge and skills gaps are, in fact, “the most underestimated risks” when employers undertake AI-based re-structuring, Jacob indicates:
When you eliminate experienced staff, you don’t just lose their task execution. You lose their pattern recognition, their understanding of edge cases, their ability to detect when something is wrong. AI systems don’t develop intuition about ‘this answer seems off’ the way experienced humans do. The companies experiencing regret are predominantly those that treated AI deployment as a headcount reduction exercise rather than a capability transformation project. They optimized for short-term cost savings rather than long-term system reliability and performance.
This pattern of regret, meanwhile, follows a clear sequence:
- Initial excitement about cost savings
- Deployment without adequate piloting
- A gradual realization that the quality of AI output is inconsistent
- Discovery of critical errors that humans would have caught
- Skills gaps that become apparent when trying to fix problems
- Recognition that institutional knowledge has now gone.
Speculation dressed up as transformation
As for the kinds of jobs most likely to be affected by this situation, Jacob indicates it is not as simple as saying ‘automatable tasks will be eliminated’. Instead, he points to three key categories of roles employers need to think about:
- Most vulnerable to replacement: This refers to jobs that involve repetitive information processing without nuanced judgment, such as data entry and basic content moderation. These roles are “being automated rapidly, but often poorly”, he says. The systems replacing them frequently produce so-called ‘AI slop’.
- Subject to being re-defined: These jobs involve knowledge work that combines pattern recognition with judgment, such as software development and financial analysis. Here, AI augments rather than replaces humans, but the role fundamentally changes in nature. A financial analyst, for instance, becomes an ‘AI-assisted analyst’, who validates and refines machine output rather than builds models from scratch.
- Least vulnerable to replacement: This category covers roles requiring complex human judgment, creative strategy and relationship-building skills, or the ability to handle novel situations. Ironically, Jacob says, positions like customer success are harder to automate than certain ‘high skill’ analytical roles as they require contextual human judgment.
But he indicates that a big danger today lies in employers replacing roles that should actually be re-defined. He explains:
You can’t just eliminate analysts and have AI do their job. You need different analysts who can evaluate AI output, catch hallucinations, and maintain institutional knowledge. Companies that miss this distinction end up regretting [having made staff redundant].
As a result, while Jacob believes a “permanent shift toward AI-augmented work” is taking place, he also forecasts:
Significant near-term volatility as companies learn the hard way which roles AI can actually handle versus which require human expertise…From enterprise evaluations I’ve conducted, I’d estimate fewer than 20% of companies making AI-driven headcount decisions have actually validated that their AI systems can perform at the required reliability and safety levels. That’s not an AI-first strategy. That’s speculation dressed up as transformation.
As a result, in his view, the winners will be those organizations that focus less on “cost reduction through replacement” and more on workforce transformation supported by a suitable investment in re-skilling.
Taking a strategic approach to AI implementation
Taking a strategic rather than reactive approach to implementing AI, meanwhile, requires employers to be thoughtful and measured in how they approach change, including headcount cuts. As Baden says:
The key is to treat it as a proper team redesign, not just cost-cutting dressed up as AI strategy.
That means:
- Separating repetitive work from tasks that require human judgment
- Re-skilling and redeploying people early, rather than cutting jobs first
- Hiring for strong fundamentals and adaptability, not just ‘AI experience’
- Testing tools in real-life workflows before making major decisions.
It also means not:
- Cutting headcount too deeply before an AI tool is ready to replace tasks
- Focusing too much on job candidates with specific AI experience as opposed to strong performers who can learn quickly and work in an ambiguous context
- Treating AI purely as a means of cost-cutting.
The most successful approach Jacob has seen, on the other hand, is to ‘pilot before cutting, validate before scaling, and reskill during the transition’. In other words:
- Identify which roles AI can genuinely help to augment or replace. But validate your findings through pilot projects with measurable performance metrics instead of vendor demos. Most such demos showcase best-case scenarios rather than testing the far more revealing worst-case scenarios and edge cases.
- Create evaluation frameworks before re-structuring, and ask yourself these questions before you get rid of the people who currently ensure quality control: What is an acceptable error rate? How will you detect it when your AI systems fail? What human oversight is required?
- Treat your initiative as a workforce transformation rather than simply a headcount-cutting exercise. Redeploy people into AI evaluation, governance, and oversight roles. The irony, Jacob says, is that the more capable AI systems become, the more – rather than fewer – skilled workers are needed to validate their output. But the required skillset shifts from doing the task to evaluating whether AI did it correctly.
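For teams that want to make the ‘acceptable error rate’ question concrete before any restructuring decision, the pilot-then-validate logic above can be sketched in a few lines of code. This is a minimal illustration under assumed conditions – the function name, the 2% error budget, and the exact-match comparison are all hypothetical choices for the example, not a method prescribed by Jacob or iterate.ai:

```python
# Hypothetical sketch: check a pilot batch of AI outputs against
# human-reviewed answers and an error budget agreed in advance.
# All names and thresholds are illustrative assumptions.

def evaluate_pilot(ai_outputs, human_reviewed, max_error_rate=0.02):
    """Return (error_rate, passed) for a pilot batch.

    ai_outputs / human_reviewed: parallel lists of answers to the same tasks.
    max_error_rate: the error budget the business set before restructuring.
    """
    if len(ai_outputs) != len(human_reviewed):
        raise ValueError("pilot batches must be the same length")
    # Count tasks where the AI answer diverges from the human-reviewed one.
    errors = sum(1 for ai, ref in zip(ai_outputs, human_reviewed) if ai != ref)
    error_rate = errors / len(ai_outputs)
    return error_rate, error_rate <= max_error_rate

# Example: 1 mismatch in 10 tasks is a 10% error rate, well over a 2%
# budget, so this pilot fails and the roles should not yet be cut.
rate, passed = evaluate_pilot(["a"] * 9 + ["b"], ["a"] * 10, max_error_rate=0.02)
```

The point of even a toy harness like this is that it forces the two decisions Jacob highlights – what error rate is acceptable, and how failure will be detected – to be made explicitly, before headcount is touched rather than after an incident.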
How to survive the transition
Other key considerations that are all too often forgotten here, Jacob believes, are change management, governance, and the use of sound evaluation systems. For instance, he says, most organizations introduce their AI systems without suitable guardrails defining which decisions they can make autonomously and which require human approval. Such guardrails tend to emerge only after an incident has occurred.
Another common challenge is skills mismatches. Companies may let go of the employees who used to do the work, only to find that new people are needed to evaluate whether the AI is performing its assigned tasks correctly.
A third widespread problem is that many organizations are unable to measure the performance of their AI systems reliably. This is because they have no frameworks for measuring AI hallucination rates or the quality of the tools’ decision-making.
But as Jacob points out:
The key to success is found in the ordinary and unexceptional: run proper pilots, build evaluation frameworks, establish governance before deployment, and invest in re-skilling. Companies that skip these steps to move fast invariably pay for it later through failed deployments, quality issues, and the costs of rebuilding institutional knowledge they eliminated prematurely.
As to what the next 12 to 18 months are likely to hold, though, Jacob expects to see a “reckoning between AI hype and AI reality in production environments”. This is because the gap between what AI systems can do in controlled demos and in messy production environments is “substantial”, with many companies about to “discover this the hard way”.
He also anticipates some companies starting to “quietly rehire” for roles they previously cut too aggressively, particularly in those cases where AI systems have underperformed against expectations or had quality issues. This, he says, will not be framed as ‘we were wrong about AI’. Instead, it will be pitched as ‘evolving our AI strategy’ or moving to ‘hybrid human-AI models’.
As Jacob concludes though:
The longer-term trend is toward AI-augmented work rather than wholesale replacement but with significant near-term volatility as the market separates hype from capability. Companies that survive this transition successfully will be those that treated AI deployment as a strategic capability transformation requiring readiness assessment, governance, and change management, and not just as a cost-reduction exercise.
It has been said before (repeatedly) but I’ll say it again: organizations that focus on cost-cutting and see AI as an easy means of shedding headcount may live to regret their hasty decisions. As Jacob points out, the secret to real success going forward lies in investing in re-skilling to support a broader workforce transformation.