Anderycks.Net by Deryck Hodge

Pragmatism and AI Coding Tools

Over the past few months, the way software engineers talk about AI has started to change. I hear it in Slack threads at work and in conversations with other developers. I see it more and more in blog posts. For a while, the conversation was purely polarized. Some were convinced, and some still are, that large language models will replace programmers entirely. Others dismissed the hype outright. Recently the tone feels more practical, even among skeptics.

Glyph, who has written critically about LLMs, recently published a post asking "What Is Code Review For?" He's spot-on about code review, by the way – go read that post! – but then he drops a surprise: the post is actually about LLMs!

He writes:

Sigh. I’m as disappointed as you are, but there are no two ways about it: LLM code generators are everywhere now, and we need to talk about how to deal with them.

I find my own point of view similar to Glyph's:

My own personal preference would be to eschew their use entirely, but in the spirit of harm reduction, if you’re going to use LLMs to generate code, you need to remember the ways in which LLMs are not like human beings.

That's kind of where I am. I have my own qualms about this. Anyone paying attention probably should. These systems raise real questions about what it means to write code, how engineers develop skill, and what our work might look like a decade from now. Never mind that the people pushing this stuff hardest are just creeps.

But still, I’m pragmatic. If these tools are going to be part of our future, then the responsible thing to do is to understand them, and to understand them well enough that we can decide how best to ethically and humanely incorporate this technology into our lives.