The warning from Rishi Sunak, the former prime minister and now an adviser to the AI company Anthropic, that AI is negatively affecting graduate recruitment should not come as a surprise. Employers increasingly acknowledge what the data suggests: businesses expect to grow output without growing headcount. “Flat is the new up,” as Sunak puts it. For graduate roles, that means fewer people doing more, with some entry-level professional work compressed, automated, or removed.
For graduates entering the labour market, that is a real threat. But the problem is not only that there may be fewer graduate jobs; it is that AI may remove or dilute the tasks through which graduates traditionally learn to become professionals.
Many of the tasks exposed to automation today are the low-risk activities through which graduates learn how a profession works. Drafting a memo, checking a calculation, summarising evidence, reviewing a contract, debugging code, preparing market research, or producing a first analysis may all look routine. But these tasks teach graduates the grammar of professional judgement. They are how we learn to tell robust work from inadequate work, where risk sits, and when confidence is not justified.
These developments are already changing the behaviour of students and early-career candidates. A recent Prospects Luminate survey found that 13 per cent had altered their career plans because of AI, while a further 34 per cent had considered doing so.
Graduate employment is a training architecture
If those initial career steps disappear, the graduate labour market problem becomes a talent-formation problem. Graduate employment is a training architecture, not just a destination. Graduates undertake bounded tasks, receive feedback, internalise standards, develop judgement, and gradually take on responsibility. If AI automates the bottom rungs of that ladder, universities and employers need to think differently about how professional capability is formed.
This is the paradox we as universities now face. The same system that appears educationally as augmentation may appear organisationally as automation. That is, in the classroom, AI can provoke critique, comparison, and deeper understanding. However, in the workplace, the same capability may compress workflows, reduce junior labour demand, and redistribute responsibility to fewer people.
The shift is already visible in hiring patterns. Job adverts listing AI literacy rose 61 per cent in 2024, while PwC’s 2025 Global AI Jobs Barometer puts the wage premium for AI-skilled workers at 56 per cent. But the early-career picture is more complex than simple job destruction. In a recent Institute of Student Employers survey, 53 per cent of employers expected entry-level hiring to remain broadly similar over the next three years, 27 per cent expected increases, and 17 per cent expected reductions. The jobs are changing. More importantly, the route into those jobs is changing.
This distinction matters. The same employer research found that no respondents expected more than a quarter of entry-level roles to be replaced by AI over the next three years. Half expected only a few roles, between one and ten per cent, to be replaced, while 32 per cent expected none to be replaced at all. The more immediate issue is not wholesale replacement, but the reshaping of tasks and responsibilities.
Automation rarely removes work neatly. It changes the shape of work. Some tasks disappear, some are compressed, and others become acts of review, interpretation, and accountability, a role often termed “human-in-the-loop”. That may look like upskilling, but for graduates it creates a problem. Reviewing AI-generated work is not a basic checking exercise. It requires the judgement that entry-level tasks used to build. If graduates are asked to supervise AI before having learnt the standards of the field, the result may be fluency without competence.
Beyond the generic
This is why the way universities respond to the disruptive force of AI matters so much.
Assessment redesign is necessary, but it is only the baseline. It may protect academic integrity, but it does not, on its own, prepare students for AI-mediated work. The deeper question is what it means to develop disciplinary expertise when AI can produce fluent answers, draft arguments, generate code, and simulate professional outputs.
The danger is not just students using AI to replace their own work. The more significant danger is that they may mistake output for understanding. Students can now produce essays, reports, design proposals, business analyses, legal summaries, or pieces of code without having developed the depth needed to judge whether the work is any good.
Universities therefore need to move beyond generic AI literacy. Prompting, tool access and responsible-use guidance have their place, but they are not enough. AI must be taught through disciplinary standards: what counts as evidence, proof, risk, responsibility, and failure in a particular field.
Surrey’s discipline-specific approach
At the University of Surrey, this is the problem we are committed to solving. From September 2026, we are embedding discipline-specific AI teaching into all of our degree programmes, from foundation to undergraduate to postgraduate. Our design principle is simple: we do not treat AI as a generic digital skill, a misconduct risk, or a bolt-on employability module. The subject remains the centre of the degree, and our aim is to strengthen disciplinary expertise while ensuring students understand how AI is changing knowledge, judgement, and professional practice within their field.
That means asking each subject a different question. What does it mean to use AI well in civil engineering, where safety, regulation and physical constraints matter? What does it mean in politics, where evidence, interpretation, and democratic legitimacy matter? What does it mean in business, where AI-generated confidence can quickly become commercial risk? The answer cannot be the same on every course, because each profession has different standards of good judgement.
In civil engineering, students might use AI to generate competing designs for a low-carbon building – but then have to prove which, if any, is structurally safe, financially viable and environmentally credible. In politics, they might use AI to build rival explanations of an election result, but then test those claims against theory, polling data, and evidence from the campaign. In business, they might use AI to draft a market-entry strategy but then act as the critical decision-maker: identifying false assumptions, missing evidence, commercial risk, and overconfident conclusions.
In all cases, the task is not just to check whether an AI-generated output is right or wrong. It is to understand how AI changes the conditions of judgement within that discipline, including what it makes newly possible.
Used well, AI can help students explore more options, test ideas faster, compare evidence at scale, model alternative scenarios, and challenge their assumptions. These benefits, however, only become educationally valuable if students also learn to recognise the limits: what evidence the system privileges, what assumptions it hides, which forms of expertise it strengthens or weakens, and where accountability sits when something plausible is still professionally inadequate. The aim is not simply to teach students to use AI, but to help them understand how AI changes the work, the risks, the possibilities, and the responsibilities of their future profession.
What is at stake is tacit knowledge: the situated understanding that allows professionals to recognise when an answer is technically correct but practically unusable, when evidence is thin, when assumptions are hidden, or when a model has optimised for the wrong objective. If AI changes the tasks through which that knowledge is acquired, education must redesign the route through which expertise is formed.
Employers also have a part to play in handling this disruption carefully. In the short term, AI may look like a way to reduce junior hiring. But organisations that remove too many entry-level opportunities risk damaging their own future talent pipeline. They may gain efficiency now, only to find later that they lack people with the depth, judgement and organisational knowledge needed to take responsibility for complex work.
Surrey’s origins in 1891 as the Battersea Polytechnic Institute were rooted in equipping people with the vocational skills needed for an age of industrial change. The challenge today is different, but the responsibility is familiar. That is why Surrey is redesigning learning for an AI-disrupted world: to ensure our graduates are not simply able to use new technologies, but equipped to create with them, lead through them, and make informed judgements about when and how they should be used. Our ambition is for Surrey graduates to shape the future of their professions, not merely respond to it.