Sign me up for more of what Gary Marcus is selling in "Dario Amodei, hype, AI safety, and the explosion of vibe-coded AI disasters."
In the hands of very skilled practitioners who pay a lot of attention, and treat the outputs with considerable scrutiny, coding agents can be astonishing. But that kind of expert knowledge, which coding agents can’t be counted on for, is exactly why we need to keep software engineers in the loop. And it is why most of us who are in the know have such a hard time taking Amodei’s hype about getting rid of software engineers (who look at the whole problem, and not just isolated bits of code) seriously.
There’s so much about this post that aligns with what I’ve been encountering and thinking about recently. I appreciate that Gary put into words a lot of what I’m feeling. It’s this idea of “expert knowledge” that I keep returning to.
I’m not sure some people, including some I know who are heavily using generative AI and coding agents, fully grasp how unreliable LLMs are. I’ve heard people at work using them in domains they don’t fully understand. I’ve seen students using them for classes they don’t care to understand. There are countless YouTube videos hyping what people “with no knowledge” can do now. Relying on them that way is clearly a mistake. LLMs are at their best when paired with expert knowledge, with an expert verifying their output.
My younger daughter is in college now. She’s studying math, engineering, and computer science. I’ve given her one piece of advice, even as AI is advancing all around her. That advice: be the expert. Don’t cede control. Understand what you’re doing as you leverage LLMs, and you—and the machine—will be just fine.