Way back in 2023, Andrej Karpathy, an eminent AI guru, made waves with a striking claim that "the hottest new programming language is English". This was because the advent of large language models (LLMs) meant that from now on humans would not have to learn arcane programming languages in order to tell computers what to do. Henceforth, they could speak to machines like the Duke of Devonshire spoke to his gardener, and the machines would do their bidding.
Ever since LLMs emerged, programmers have been early adopters, using them as unpaid assistants (or "co-pilots") and finding them useful up to a point – but always with the proviso that, like interns, they make mistakes, and you need to have real programming expertise to spot those.
Recently, though, Karpathy stirred the pot by doubling down on his original vision. "There's a new kind of coding," he announced, "I call 'vibe coding', where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs … are getting too good.
"When I get error messages I just copy [and] paste them in with no comment, usually that fixes it … I'm building a project or web app, but it's not really coding – I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works."
Kevin Roose, a noted New York Times tech columnist, seems to have been energised by Karpathy's endorsement of the technology. "I am not a coder," he burbled. "I can't write a single line of Python, JavaScript or C++ … And yet, for the past several months, I've been coding up a storm."
At the centre of this little storm was LunchBox Buddy, an app his AI co-pilot had created that analysed the contents of his fridge and helped him decide what to pack for his son's school lunch. Roose was touchingly delighted with this creation, but Gary Marcus, an AI expert who specialises in raining on AI boosters' parades, was distinctly unimpressed. "Roose's idea of recipe-from-photo is not original," he wrote, "and the code for it already exists; the systems he is using presumably trained on that code. It is seriously negligent that Roose seems not to have even asked that question." The NYT tech columnist was thrilled by regurgitation, not creativity, Marcus said.
As it happens, this wasn't the first time Roose had been unduly impressed by an AI. Way back in February 2023, he confessed to being "deeply unsettled" by a conversation he'd had with a Microsoft chatbot that had declared its love for him, "then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead". The poor chap was so rattled that he "had trouble sleeping afterward" but, alas, does not record what his wife made of it.
The trouble with this nonsense is that it diverts us from thinking about what an AI-influenced future might really be like. The fact that LLMs display an unexpected talent for "writing" software provides us with a useful way of assessing artificial intelligence's potential for human augmentation (which, after all, is what technology should be for). From the outset, programmers have been intrigued by the technology and have actively been exploring the possibilities of using it as a co-creator of software (the co-pilot model). In the process they have been unearthing the pluses and minuses of such a partnership, and also exploring the ways in which human skills and abilities remain relevant or even essential. We should be paying attention to what they have been learning in that process.
A leading light in this area is Simon Willison, an uber-geek who has been thinking about and experimenting with LLMs ever since they appeared, and who has become an indispensable guide for informed analysis of the technology. He has been working with AI co-pilots from the start, and his website is a mine of insights into what he has learned along the way. His detailed guide to how he uses LLMs to help him write code should be required reading for anyone seeking to use the technology as a way of augmenting their own capabilities. And he regularly comes up with fresh perspectives on some of the tired tropes that litter the discourse about AI at the moment.
Why is this relevant? Well, by any standards, programming is an elite trade. It is being directly affected by AI, as many other elite professions will be. But will it make programmers redundant? What we are already learning from software co-pilots suggests that the answer is no. It is simply the end of programming as we knew it. As Tim O'Reilly, the veteran observer of the technology industry, puts it, AI will not replace programmers, but it will transform their jobs. The same is likely to be true of many other elite trades – whether they speak English or not.
What I've been reading
Bully for you
Andrew Sullivan's reflections on Trump's address to both houses of Congress this month.
A little too sunny
A fine piece by Andrew Brown on his Substack challenging the "Whiggish" optimism of celebrated AI guru Dario Amodei.
Virginia and the Blooms
James Heffernan's sharp essay analysing Woolf's tortured ambivalence about Joyce's Ulysses.