Hey programmers – is AI making us dumber?


Opinion: I don’t want to sound like an aging boomer, yet when I see junior programmers relying on AI tools like Copilot, Claude, or GPT for simple coding tasks, I wonder if they’re doing themselves more harm than good.

AI has made simple jobs much easier, but it’s only by learning the skills that minor tasks demand that you master the abilities major jobs require.

I’m not the only one who worries about that. Namanyay Goel, an independent developer, recently wrote a blog post – with more than a million hits – that clearly struck a nerve. Goel wrote:

Every junior dev I talk to has Copilot or Claude or GPT running 24/7. They’re shipping code faster than ever. But when I dig deeper into their understanding of what they’re shipping? That’s where things get concerning.

Sure, the code works, but ask why it works that way instead of another way? Crickets. Ask about edge cases? Blank stares.

The foundational knowledge that used to come from struggling through problems is just… missing.

We’re trading deep understanding for quick fixes, and while it feels great in the moment, we’re going to pay for this later.

I agree.

I’m not saying you need to learn the skills I picked up in the ’70s and ’80s with IBM 360 Assembler and Job Control Language (JCL). That would be foolish. But, by working with such tools, I grokked how computers worked at a very low level, which, in turn, helped me pick up C and Bash. From there, I wrote some moderately complex programs. I can’t say I was ever a great developer. I wasn’t. But I knew enough to turn in good work. Will today’s neophyte programmers be able to say the same?

I wonder. I really do.

As Goel said: “AI gives you answers, but the knowledge you gain is shallow. With StackOverflow, you had to read multiple expert discussions to get the full picture. It was slower, but you came out understanding not just what worked but why it worked.”

Exactly so. In my day, it was Usenet and the comp newsgroups – yes, I’m old – but at its best, the experience was the same. The newsgroups were made up of people eager not just to address how to solve a particular problem but to understand the nature of the problem.

This isn’t just two people spouting off. A recent Microsoft Research study, The Impact of Generative AI on Critical Thinking, found that among knowledge workers, “higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking.” Further, “used improperly, technologies can and do result in the deterioration of cognitive faculties.”

Another study by Michael Gerlich at SBS Swiss Business School in Zurich, Switzerland, also found “a negative correlation between frequent AI use and critical thinking abilities.” Grant Blashki, a professor at the University of Melbourne, agrees.

Blashki wrote: “It’s a simple case of ‘use it or lose it.’ When we outsource a cognitive task to technology, our brains adapt by shifting resources elsewhere – or just going idle. Convenience comes with a cost. If AI takes over too much of our cognitive workload, we may find ourselves less capable of deep thinking when it really matters.”

That’s bad. It’s especially bad when people are still learning how to think in their field. Sure, we get faster answers, but as Blashki noted: “It’s the difference between climbing a mountain and taking a helicopter to the top. Sure, you get the view either way, but one experience builds strength, resilience, and pride – the other is just a free ride.”

Besides, as much as you may want to turn over all your work to an AI so you can get back to watching Severance or The Night Agent, you still can’t trust AI. AI chatbots have been getting better at not hallucinating, but even the best of them still do it. Even with programming, my ZDNet colleague David Gewirtz, who’s been testing chatbots for their development skills for two years, observed: “AIs can’t write entire apps or programs. But they excel at writing a few lines and are not bad at fixing code.”

That’s nice, but it won’t help you when you need to write a complex application.

So, what should you do? Here’s my list:

Don’t treat AI as a magic answer box. Trust, but verify its answers. Use AI results as a starting point. For programming, work out how it’s solving your problem, and consider whether there’s a better way.

Look for sites where the smart people are talking about your field of expertise. Ask questions there, answer questions there, and study how others are dealing with their problems. Get involved with your colleagues’ professional conversations.

When you do code reviews, don’t stop when the code works. Dig into why it works the way it does and how it handles the edge cases – see the sketch after this list.

Last but not least, try coding, writing, or whatever from scratch. Stretch your mental muscles.
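To make that code-review point concrete, here’s a minimal, hypothetical sketch in Python. The function names and the scenario are invented for illustration, not taken from the article; the point is the difference between accepting code because it runs and reviewing it for the questions it never answers.

```python
# A hypothetical, AI-suggested helper that "works" in the demo but deserves
# a deeper look in review. All names here are invented for illustration.

def average_response_time(samples):
    """Return the mean of a list of response times, in milliseconds."""
    return sum(samples) / len(samples)   # Happy path only.

# A reviewer who stops at "the tests pass" never asks the questions that matter:
#   - What happens on an empty list? ZeroDivisionError.
#   - What if a sample is None, or a numeric string pulled out of a log file?
#   - Is the mean even the right statistic, or should outliers be trimmed?

def average_response_time_reviewed(samples):
    """Same job, after asking why and what-if during review."""
    clean = [float(s) for s in samples if s is not None]
    if not clean:
        return 0.0   # Or raise - the point is that someone chose deliberately.
    return sum(clean) / len(clean)
```

None of this is hard, and an AI will happily produce either version if you push it. The difference is whether the person shipping it understands why the second one exists.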

Blashki said it best: “The goal isn’t to reject AI – it’s to create a balanced relationship where AI enhances human intelligence rather than replacing it.” ®