Every now and then a post comes along that says everything I would want to say about a subject. Michael Taggart’s “I used AI. It worked. I hated it.” is almost exactly that kind of post. I say almost, because clearly, I’m sitting here writing another post to add my own perspective. His post is good, though. So good! It’s well organized and covers so many points I would want to make. It will probably have people on both sides of the AI-for-coding debate complaining, which I take as a sign that the post is doing something right.
One of the things I like most is that the post has a very AI-works-but-it’s-not-perfect-yet view. So many people today are either boomers or doomers. They either write about AI coding as if they’ve discovered computers for the first time, or they act like it can’t do anything right. My experience is more like the one Taggart describes in his well-documented post:
Well, the thing works. The code is in production today, serving certificates for TTI. The only direct changes I made to the codebase were for elegance. The core logic was solid from the jump, owing I believe as much to Rust's safeties in development as to the model's capabilities.
But then he notes:
Did the model hallucinate? Yes, albeit rarely and with self-correction. A handful of times it made up methods for a struct in one of the libraries or another. However, Rust's error messages from the LSP server and compilation checks coerced the model to recheck its work, leading to correct implementation. I did not intervene in this process. It took about five minutes per issue.
Hallucinations still happen for me, too. A lot, actually, even though developers I work with say it rarely or never happens to them. I’m working in Python and JavaScript, so I do have to manually intervene, unlike Taggart. He makes a compelling case for using Rust if you’re going to be using coding agents. I just appreciate that he’s being fair here, calling out how cool and unique this is, while also being clear about its issues.
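To make the contrast concrete: in Rust, a hallucinated method is a compile error the model can see and fix on its own, but in Python the same mistake sails through to runtime. Here’s a minimal sketch of what that looks like. The `Cert` class and its method names are entirely made up for illustration; any class in any codebase behaves the same way.

```python
class Cert:
    """A minimal stand-in class. Imagine the model wrote this part correctly."""
    def renew(self):
        return "renewed"

cert = Cert()

# Now imagine the model hallucinates a plausible-sounding method.
# Nothing in plain Python flags this before the line actually executes.
try:
    cert.renew_all()  # no such method exists
except AttributeError as e:
    # This is the moment I have to step in manually; rustc would have
    # surfaced the equivalent mistake before the program ever ran.
    print(f"caught only at runtime: {e}")
```

This is the whole case for Taggart’s point: the feedback loop that let his agent self-correct lives in the compiler, and Python only offers it if you bolt on a type checker.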
Regardless of how good (or not!) LLM-assisted coding is, I end up in a place pretty similar to Taggart.
If I could disinvent this technology, I would. My experiences, while enlightening as to models' capabilities, have not altered my belief that they cause more harm than good. And yet, I have no plan on how to destroy generative AI. I don't think this is a technology we can put back in the box. It may not take the same form a year from now; it may not be as ubiquitous or as celebrated, but it will remain.
It’s this “cause more harm than good” that captures where I sit these days. For me, the harm is in forcing devs down a productivity-at-all-costs path. It’s hustle culture at the expense of building deep expertise. Practically, there’s a time and a place both for moving fast and for being contemplative. AI hype has gotten so strong that you would think moving fast is all that matters.
I get it, though. I really do! The reason that contemplative, thoughtful coding is not seen as something to value is that most developers don’t work on things they care very much about. Devs want to get their code done and move on to the next thing. That’s an indictment of our industry more than it is a point in favor of LLM-assisted programming.
Like Taggart, I think coding assistants are here to stay. I don’t expect things to look the same a year or two from now. There will certainly be a correction. When there is, deep expertise in both writing and understanding software will be a valuable skill to have. If anyone asks me, I would say hold on to that, no matter how you feel about LLMs and agentic coding.