I’m at one of those weird places in my life where I’m not entirely sure what I’m working for. In some ways, that’s a good thing. I’ve achieved a level of success and find satisfaction in my day-to-day work. That success, however, means I have fewer options when thinking about the future. That makes it harder to know where I’m headed. You need a goal to have meaningful work. What will that be for me? I’m spending some time this year thinking about exactly that question.
The first and most obvious option is the least appealing to me: stay the course, save as much as possible, and retire as soon as I can. I hate that idea and hate the idea of retirement generally. I don’t want to rest, not in any sort of permanent or long-term sense. I don’t want to do nothing. I want to work hard and work for something, quite literally until the day I die. That’s how I’m wired.
Guess that’s off the table, so now what?
I’m an engineering manager, so I could try for a director-level position, or maybe eventually a VP role. That’s what managers do. We advance in role and scope, manage more, build little kingdoms within the work hierarchy. That’s not really inspiring for me either. I’ve had some good directors and the occasional good VP, but by and large, they’re all mostly concerned with stuff I don’t think matters much for good software engineering teams. I’m very much in the Apple/Steve Jobs camp that believes the best managers are great individual contributors who don’t want to be managers.
I also probably just destroyed my chances of any promotion with that last paragraph. But hey, what’s the point of having your own site if you can’t be really honest and truly reflective?
I sometimes think about going back to being an individual contributor. I’ve been both a developer and a manager at various times in my career. I love the act and craft of software engineering. I like to think that I’m good at being a manager because I give people on my team space to focus on building. It’s the joy of building things, the software engineering itself that is the goal for me. I still spend a fair amount of my free time building stuff and thinking about how best to do software engineering. So I could just go do good engineering work myself. There’s an inherent growth there, always learning, always doing something new.
That idea has some appeal, maybe a lot of appeal, but then I would miss out on the stuff I do enjoy about being a manager and a leader. Ugh, you see the problem, right? I’m so conflicted all the time.
I think that kind of conflict is a good thing, which is why I’m posting this internal monologue for others to see. Being conflicted is good, as I said when writing about two truths in understanding AI. I really do want to figure out what I’m working for this year, really set a goal for myself, but maybe it’s not that easy of an answer to find. Maybe it’s not clean and neat for me at this stage in my career. That’s ok, too. The goal itself really is not the goal. It’s the process of finding one that is the way more interesting part of all this for me.
I’m starting to really dislike hustle culture. It starts innocently enough, at least it did for me. GTD was the gateway drug. Then TODO apps. Next thing you know you can’t enjoy anything without optimizing it. One day, you hit the limits of what can be optimized, and suddenly, you have an existential crisis on the order of losing a loved one. I say it’s time we threw the baby out with the bath water, as it were.
What does that mean? For me, it’s about worrying less about results and more about process. If productivity is about ticking off accomplishments, marking things as done, then I want to slow down, be more focused on the thing I’m doing. It’s about being present, about the doing not being done.
As for why, it’s amazing to me how busy we all are and how little actually gets done. For all the tick marks in to-do apps, there’s relatively little to show for it. We all feel this viscerally these days, in almost every way. There’s more information than ever, and so little of it is worthwhile and meaningful. There’s more work than ever, and yet companies want to squeeze every drop from their people and then throw them away. Just pure usage.
I feel like business is run this way too. It doesn’t matter any more what you make or how you make it. Only usage. Up and to the right or go away. I guess this is fine if all you care about is money, but then what? You either leave your money to someone else or you leave something of value. I’ve reached the point where I’m tired of the drain on me personally only to have a few dollars to leave behind. I’d rather leave something of value.
Hustle culture is an infection, and doing less is the cure. Do less to do more. That’s my new motto.
There is a growing cultural backlash to AI. I, for one, welcome this change. It’s a sign AI might not be as inevitable as many in my industry would like you to believe.
Many young people recognize AI's role in their future education and careers. Yet their overall sentiment is shifting; excitement is dropping, while the share reporting anger toward these tools is on the rise.
Anger is a strong emotion. It’s not a good sign for AI and the frontier AI labs creating this technology. Both OpenAI and Anthropic are clearly aware and adjusting their messaging. The San Francisco Standard posted an April interview with OpenAI’s global policy chief Chris Lehane.
That people are worried about AI is understandable, Lehane said — they believe it might take their jobs, harm their kids, and raise their electricity bills. He compared the tension to conflicts that followed earlier technological leaps forward, like the invention of the printing press. And it doesn’t help, he said, that the AI industry has made a habit of foreboding pronouncements.
This is an amazing statement. It acknowledges the backlash while also remaining remarkably tone deaf. Who told everyone for months that AI was going to take their jobs? Oh wait, it was OpenAI itself. Also, the arrogance of these folks to compare their own invention to the printing press. Apparently the founder reality-distortion field also applies to historical understanding. It’s not accurate to say there was widespread backlash to the printing press. The printing press backlash came from the aristocracy and religious class, who wanted to control access to understanding about God and the flow of information. Seems like the frontier labs have more in common with that class than with anyone who fears AI.
Either way, the peasants have had enough. See this recent commencement address shared by Cabel Sasser on Mastodon.
I welcome this change. Fear and anger have their place in emotional reactions, and just maybe, they’ll play a role in helping reset the AI hype. I remain convinced AI is just normal technology, and once the hype wears off, we can begin to more thoughtfully apply it where it’s best used, rather than cramming it into every area of society as if it were some kind of digital god.
There’s this idea in philosophy and the liberal arts that two opposing ideas can both be true. There are many versions of this, but my personal favorite is F. Scott Fitzgerald’s version. In an essay titled “The Crack-Up,” Fitzgerald wrote:
The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.
I’ve been thinking about this a lot lately with generative AI and LLMs, especially as it relates to my profession of software engineering. I think both of these things are true:
LLMs are a transformative technology for software engineers
LLMs are overhyped and overused in software engineering
It’s easy to get someone on your side for either of these statements. Such is life in our too-online and too-political culture these days. It’s rare—and in fact, I’m not sure I’ve found anyone yet who fully agrees with me—to hold that both of these statements are completely true.
The reason for this goes back to that Fitzgerald quote. It’s really difficult to continue to function when confronted with opposing ideas. We simplify to make things easier on ourselves. I won’t be so bold as to claim a first-rate intelligence for myself, but I do like to live in that contradiction. I find it’s where the really interesting work is. This is why I continue to lean in with LLMs for coding when many of my open source colleagues push it away. It’s also why I continue to challenge my enterprise engineering colleagues who are happy to cede complete control to the LLM.
I’m trying to find the right balance between these two ideas for myself. I hope my industry can figure it out, too. We would end up in a better place if we could.
It’s been quiet around here lately. Even though I backed off my 1000 words per day goal, I didn’t intend for it to go so dark here. My apologies to you, dear reader. I hope to get back into a steady rhythm again in the coming days.
I’ve been writing a lot at work recently, both technical documents and business stuff. My appetite for writing after work has been low. That stuff seems to be slowing down now. At least I’m feeling like putting words together again in my free time.
I’ve also been working on a longer essay about shipping code in today’s AI-assisted coding landscape. It’s about the tension between wanting to treat the LLM as a compiler and still shipping code as the artifact of the machine’s output. I like where it’s headed, but it’s taking some time to finish.
All that to say, more is on the way here very soon.
In the hands of very skilled practitioners who pay a lot of attention, and treat the outputs with considerable scrutiny, coding agents can be astonishing. But that kind of expert knowledge, which coding agents can’t be counted on for, is exactly why we need to keep software engineers in the loop. And it is why most of us who are in the know have such a hard time taking Amodei’s hype about getting rid of software engineers (who look at the whole problem, and not just isolated bits of code) seriously.
There’s so much about this post that aligns with the things I’m encountering and thinking about recently. I appreciate that Gary put into words a lot of what I’m feeling. It’s this idea of “expert knowledge” that I keep returning to.
I’m not sure some people, some of whom I know are heavily using generative AI and coding agents, fully grasp how unreliable LLMs are. I’ve heard people at work using them in domains they don’t fully understand. I’ve seen students using them for classes they don’t care to understand. There are countless YouTube videos hyping things people “with no knowledge” can do now. This is clearly a mistake. LLMs are best when paired with expert knowledge, where the expert verifies their output.
My younger daughter is in college now. She’s studying math, engineering, and computer science. I’ve given her one piece of advice, even as AI is advancing all around her. That advice: be the expert. Don’t cede control. Understand what you’re doing as you leverage LLMs, and you—and the machine—will be just fine.
I took a break this week from the slog of my effort to write 1000 words a day. I think this project might now be done for me. The project to write 1000 words every day, I mean. I’ll still be blogging here regularly, but I think I’ll slow down a little now. I’m glad I did it—I would consider the effort a success—and here’s why.
Writing a lot is the best way to find your voice. I was writing so infrequently, writing more software than prose, that I really did lose my voice a little. Writing 1000 words every day helped me find my voice and the topics I’m interested in writing about again. I like where this blog is positioned now, and I like the style of my writing for each post.
I’m also quicker at writing now. I can turn around a quick post linking to something I read in a few minutes. I’m more likely to reach for a post here on this site rather than posting on Mastodon or Bluesky. I like that. It feels better to me.
I got what I wanted out of it, this 1000 words per day. I’m ok to let it go now. That is the hallmark of a great personal project for me.
Software brain is powerful stuff. It’s a way of thinking that basically created our modern world. Marc Andreessen, the literal embodiment of software brain, called it in 2011 when he wrote the piece “Why software is eating the world” as an op-ed in The Wall Street Journal. But software thinking has been turbocharged by AI in a way that I think helps explain the enormous gap between how excited the tech industry is about the technology and how regular people are growing to dislike it more and more over time.
Patel’s use of Software Brain as a concept reminds me of discovering literary theory in college while working on my English degree. It gave me a language, a way to think about reading, which unlocked new ideas in my mind. Patel’s defining of this thing as “Software Brain,” a thing I previously understood intuitively, is effectively doing the same thing as literary theory once did. Now, something I once felt, I am able to more clearly understand and talk about.
Software Brain is exactly how tech people understand the world. It’s the source of that disconnect between tech people and regular people. This concept is perfectly named and perfectly described by Nilay Patel.
I lean toward the written word, but the video essay version is also quite good.
“Is the era of basically free or close-to-free AI kind of coming to an end here?” said Mark Riedl, a professor in the Georgia Tech School of Interactive Computing. “It’s too soon to say for certain, but there are some signs.”
I like that this article also covers open source and open weight models.
David DeSanto, CEO of software company Anaconda, recently returned from a five-week trip around the world speaking to customers. He said that many were moving to self-host AI models — deploying their own within Amazon Bedrock or Google’s Vertex AI to have more control over the supply chain — or changing to open-source or open-weight models for a lot of their needs, since many such models have significantly improved on benchmarks as of late.
I’ve been wondering what sort of squeeze open source would put on these companies. OpenAI and Anthropic are more and more reliant on enterprise businesses who have more easily been sold on FOMO. As costs rise, of course they’ll diversify. As frontier models stall, open source and open weight models will catch up. Simon Willison recently reported surprisingly good results running the latest Qwen open weight models. It’s only a matter of time, it seems, and then where will these companies’ business models be?
I have no idea if there’s a great financial and economic collapse coming for these companies. I get that the economics don’t seem sustainable, but I also know that there’s a bit of wish fulfillment going on with all these “there’s a bubble burst a-coming” posts. This Verge article is not like that. It’s a pretty balanced piece, which I appreciate.
Colson Whitehead wrote a great piece for the NY Times. It’s labeled as an “Opinion Guest Essay,” which could be better said as just “Essay.” Here’s one bit I loved from Don’t Use A.I. to Do This:
Some people say, “I just use it to brainstorm ideas.” If you don’t know what to paint or compose or write, you’re in the wrong job. Art is the business of making up stuff — go make up some stuff. I asked the Gooch to scour the internet re growth industries, and it recommended telegraph operator and VCR repair. Maybe that’s more your speed.
Some people say, “I just use it for research. It only gets things wrong or hallucinates crazy stuff 30 percent of the time.” I don’t need a research assistant that gets things wrong 30 percent of the time. I can do that myself. Are they trying to replace me?
Oh, right — they are.
There’s no way any single excerpt can do the piece justice. Please—as Whitehead says there—do the work. Go read it for yourself.
I shared this on the socials yesterday with the comment: “I love Colson Whitehead so freaking much.” I really do. I mentioned Tim O’Brien and Flannery O’Connor as my top 2 authors in my post on America Fantastica, but if we expand to 3, there’s Colson Whitehead. I’ve probably actually read more of what he’s written than either O’Brien or O’Connor. Pick a book, any book, from him. They’re all gems and well worth the time you spend with them.