You don’t need code to be a programmer. But you do need expertise | John Naughton


Way back in 2023, Andrej Karpathy, an eminent AI guru, made waves with a striking claim that “the hottest new programming language is English”. This was because the advent of large language models (LLMs) meant that from now on humans would not have to learn arcane programming languages in order to tell computers what to do. Henceforth, they could speak to machines like the Duke of Devonshire spoke to his gardener, and the machines would do their bidding.

Ever since LLMs emerged, programmers have been early adopters, using them as unpaid assistants (or “co-pilots”) and finding them useful up to a point – but always with the proviso that, like interns, they make mistakes, and you need to have real programming expertise to spot those.

Recently, though, Karpathy stirred the pot by doubling down on his original vision. “There’s a new kind of coding,” he announced, “I call ‘vibe coding’, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs are getting too good.

“When I get error messages I just copy [and] paste them in with no comment, usually that fixes it … I’m building a project or web app, but it’s not really coding – I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”

Kevin Roose, a noted New York Times tech columnist, seems to have been energised by Karpathy’s endorsement of the technology. “I am not a coder,” he burbled. “I can’t write a single line of Python, JavaScript or C++ … And yet, for the past several months, I’ve been coding up a storm.”

At the centre of this little storm was LunchBox Buddy, an app his AI co-pilot had created that analysed the contents of his fridge and helped him decide what to pack for his son’s school lunch. Roose was touchingly delighted with this creation, but Gary Marcus, an AI expert who specialises in raining on AI boosters’ parades, was distinctly unimpressed. “Roose’s idea of recipe-from-photo is not original,” he wrote, “and the code for it already exists; the systems he is using presumably trained on that code. It is seriously negligent that Roose seems not to have even asked that question.” The NYT tech columnist was thrilled by regurgitation, not creativity, Marcus said.

As it happens, this wasn’t the first time Roose had been unduly impressed by an AI. Way back in February 2023, he confessed to being “deeply unsettled” by a conversation he’d had with a Microsoft chatbot that had declared its love for him, “then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead”. The poor chap was so rattled that he “had trouble sleeping afterward” but, alas, does not record what his wife made of it.

The trouble with this nonsense is that it diverts us from thinking about what an AI-influenced future might really be like. The fact that LLMs display an unexpected talent for “writing” software provides us with a useful way of assessing artificial intelligence’s potential for human augmentation (which, after all, is what technology should be for). From the outset, programmers have been intrigued by the technology and have actively been exploring the possibilities of using it as a co-creator of software (the co-pilot model). In the process they have been unearthing the pluses and minuses of such a partnership, and also exploring the ways in which human skills and abilities remain relevant or even essential. We should be paying attention to what they have been learning along the way.

A leading light in this area is Simon Willison, an uber-geek who has been thinking about and experimenting with LLMs ever since their appearance, and who has become an indispensable guide for informed analysis of the technology. He has been working with AI co-pilots from the start, and his website is a mine of insights on what he has learned on the way. His detailed guide to how he uses LLMs to help him write code should be required reading for anyone seeking to use the technology as a way of augmenting their own capabilities. And he regularly comes up with fresh perspectives on some of the tired tropes that litter the discourse about AI at the moment.

Why is this relevant? Well, by any standards, programming is an elite trade. It is being directly affected by AI, as many other elite professions will be. But will AI make programmers redundant? What we are already learning from software co-pilots suggests that the answer is no. It is simply the end of programming as we knew it. As Tim O’Reilly, the veteran observer of the technology industry, puts it, AI will not replace programmers, but it will transform their jobs. The same is likely to be true of many other elite trades – whether they speak English or not.

What I’ve been reading

Bully for you
Andrew Sullivan’s reflections on Trump’s address to both houses of Congress this month.

A little too sunny
A fine piece by Andrew Brown on his Substack challenging the “Whiggish” optimism of celebrated AI guru Dario Amodei.

Virginia and the Blooms
James Heffernan’s sharp essay analysing Woolf’s tortured ambivalence about Joyce’s Ulysses.