I’ve had this experience a lot lately at work—someone comes up to me, they’re smiling, they’ve got news to share. “Have you seen what these things can do?” they ask. I don’t want to be a jerk, but in my heart and head, I’m like, “Why yes, actually, I have.” They’ve just discovered what generative AI can do. Their minds are blown.
Anyone who’s done anything more than chat with a chat bot has had this experience. I made a comic book prototype with it. I’ve used it to code. I’ve run simulated fantasy football games with it. One of my daughters has tried letting it make “cringy fan fiction,” as she calls it. It’s all pretty magical. Until it isn’t.
Anyone who tells you about generative AI with that twinkle in their eye, like they’ve discovered the old magic, just hasn’t used it long enough yet. I want to say, “Just wait. You’ll find its limits, and then you’ll come back down to earth with the rest of us.”
I’m really curious to see what happens when the rest of the world reaches this point.
Who am I even writing this thing for? I have some ideas, but I feel the need to write them down to see what I really think. So here we go.
A General Audience Interested in Technology, Media, and Writing
My impulse is to say I’m writing for a general audience with an interest in technology, media, and writing. I think that’s mostly true, but practically, it’s not really how I’m writing things here, where “things” means blog posts. My short stories are definitely written with a general audience in mind.
I know I’m currently using too much industry jargon and assume too often that people reading my site are computer nerds like me or writers interested in writing about technology. If I’m going to welcome a more general audience to this site, I need to get better at tweaking my writing for that audience. I’ve got a longer piece I’m working on that I’m beginning to tweak with this audience in mind.
Let’s see how that piece turns out.
Documenting the Journey
I’m also writing this post because I want to do more thinking out loud with my audience here. I wrote previously about my goal of writing 1000 words a day. I also want to be transparent about this goal of building an audience and documenting my process. The best sites online let readers into the lives of the person behind the site. I’ve been blogging or maintaining my own personal site off and on for 20 years now. I’ve never really tried to build an audience or a relationship with that audience. I’d like to take a stab at that and see how it turns out.
Why?
Well… why not? But more seriously…
I found my way into web development quite by accident, thinking I would use it as a way of sharing my writing online. In the 90s, when I was in college and discovering the web, I was really into reading work posted online from professors I followed. These were people posting their stuff online (there was a term we used for it then, before “digital humanities”; anyone remember that term?), and I imagined a similar life and work for myself. Then I got good at web development, it led to a fulfilling career, and I never regularly put my writing online.
I’ve still been writing the whole time, just for myself. I’ve also worked for one or two companies that had a focus on writing. And now, another one. It’s a recurring theme in my career!
A couple of years ago, I got the itch to go to grad school and find a better way to merge these two passions. Grad school didn’t work out, but the desire to merge them never left. I’ve been trying to find a way back to those heady days of the early web for a while now.
So that’s why.
Maybe I’m too idealistic. I keep thinking if I could really pair this web and writing thing, there could be a real audience for it. That’s why I’m writing so much now and putting so much effort into this site. I'm trying to take the long view this time. Keep writing, keep posting, and keep tweaking this site until it catches on with the audience it was always meant to find.
Simon Willison posted his thoughts and support for Anthropic restricting Claude Mythos to security researchers. I have immense respect for Simon, for his history in our industry, for helping create Django, and for the work he’s doing to truly understand large language models and their impact on coding, but I wish he had brought even the slightest bit of skepticism to Anthropic’s claims. He even dismisses the need for skepticism outright:
Saying “our model is too dangerous to release” is a great way to build buzz around a new model, but in this case I expect their caution is warranted.
The “our model is too dangerous” thing is not just a way to build buzz, it’s Anthropic’s entire raison d'être. Dario Amodei in particular spends an enormous amount of time out here on the interwebs talking about AI danger. Here’s a video where he basically says if we don’t stand in the gap to save humanity, no one will.
Everyone at OpenAI is so concerned, framed by their OpenAI-themed glassware
When I was in my freshman-level college classes working on my liberal arts degree, one of the first things we were taught was to examine the source of a claim. If the person or organization behind a publication has an explicit bias or an outcome they desire, then you need to bring extra scrutiny to those claims. Anthropic and OpenAI not only want these doom-and-gloom scenarios to be true, they need them to be true. Otherwise, they’re just working on normal technology.
I updated my stories page with a new short story. This was a story I wrote in late 2024, which was the last year we did our family writing contest. I've written before about the family writing contest.
Here's the opening from that story.
Winter.
The wall calendar was barely hanging on, but it was stuck there, on the fridge, dangling from an ALFA Insurance magnet attached at the corner. The magnet had the too-smiley face of their family insurance salesman on one side, and on the other side, the days of the month from a month that had long since passed. Brandon Winter was angry his mom kept this mix of calendars on the fridge, past and present pulling at each other and yet somehow working together. He was staring at the date on the wall calendar. It was February 10, two months to the day from when he had been laid off from his job at World Wide Solutions.
I had never heard of Ben Lerner, which is unusual given his work ticks so many of my boxes. Literary fiction. Technology. The focus on contemporary culture. There’s so much here I think I would be interested in with him, so I’m excited to get out and get this book. I may have to make a run to a local bookstore tomorrow.
In the meantime, I think I’ll be checking out some of his recent short fiction at The New Yorker, like The Ferry, Café Loup, and The Media.
Every now and then a post comes along that says everything I would want to say about a subject. Michael Taggart’s I used AI. It worked. I hated it. is almost exactly that kind of post. I say almost, because clearly, I’m sitting here writing another post to add my own perspective. His post is good, though. So good! It’s well organized and covers so many points I would want to make. It will probably have people on both sides of the AI-for-coding debate complaining, which I take as a sign that the post is doing something right.
One of the things I like most is that the post has a very AI-works-but-it’s-not-perfect-yet view. So many people today are either boomers or doomers. They either write about AI coding as if they’ve discovered computers for the first time, or they act like it can’t do anything right. My experience is more like the one Taggart describes in his well-documented post.
Well, the thing works. The code is in production today, serving certificates for TTI. The only direct changes I made to the codebase were for elegance. The core logic was solid from the jump, owing I believe as much to Rust's safeties in development as to the model's capabilities.
But then he notes:
Did the model hallucinate? Yes, albeit rarely and with self-correction. A handful of times it made up methods for a struct in one of the libraries or another. However, Rust's error messages from the LSP server and compilation checks coerced the model to recheck its work, leading to correct implementation. I did not intervene in this process. It took about five minutes per issue.
Hallucinations still happen for me, too. A lot, actually, even though developers I work with say it never or rarely happens to them. I’m working in Python and JavaScript, so I do have to manually intervene, unlike Taggart. He makes a compelling case for using Rust if you’re going to be using coding agents. I just appreciate that he’s being fair here, calling out how cool and unique this is, while also being clear about its issues.
Regardless of how good (or not!) LLM-assisted coding is, I end up in a place pretty similar to Taggart.
If I could disinvent this technology, I would. My experiences, while enlightening as to models' capabilities, have not altered my belief that they cause more harm than good. And yet, I have no plan on how to destroy generative AI. I don't think this is a technology we can put back in the box. It may not take the same form a year from now; it may not be as ubiquitous or as celebrated, but it will remain.
It’s this “cause more harm than good” where I think I sit these days. For me, the harm is in forcing devs down a productivity-at-all-costs path. It’s hustle culture at the expense of building deep expertise. Practically, there’s a time and a place for moving fast and for being contemplative. AI hype has gotten so strong that you would think it’s only moving fast that matters.
I get it, though. I really do! The reason that contemplative, thoughtful coding is not seen as something to value is because most developers don’t work on things they care very much about. Devs want to get their code done and move on to the next thing. This is an indictment of our industry more than anything that speaks to the importance of LLM-assisted programming.
I end up in a pretty similar place to Taggart. Like him, I think coding assistants are here to stay. I don’t expect things to look the same a year or two from now. There will certainly be a correction. When there is, deep expertise in both writing and understanding software will be a valuable skill to have. If anyone asks me, I would say hold on to that, no matter how you feel about LLMs and agentic coding.
I'm at work today, and because of my job, there's CNN playing on every screen. I looked up to see this scene. My immediate thought was—did Trump think it was Easter today? So I snapped this pic and sent it to the Hodge Peeps group chat with this message:
He was dropping F-bombs on Easter Sunday, but then decided to show up with the Easter Bunny today. What even is going on?!
Trump would be a hard character to write in fiction. He just wouldn't be believable enough. I think that's still true, even after all these years of Trump. That's how surreal the current moment is.
Given that I’ve refocused my spare time on this site, I should warn you all: I’m going to be making some updates around here. Be patient with me as things shift and change.
I plan to make some theme updates here first. I want a text-first, easy reading experience. The current theme is basically the style I want, but I also need to make a pass over it to polish things up a bit. The typography needs some love first and foremost.
Once that’s done, I want to work on the functionality. I’ll probably be disabling Ghost’s subscriber feature for a little while. I want a web-first approach for this site, and I’d like to have a reason to bring back the subscriber model when appropriate. I’m thinking eventually I’ll bring this back as a membership program, with a set of perks for loyal readers. Before I do that, I need to build up that loyal audience.
For that reason, I am sending this post as an email to subscribers, too. If you’re getting this in email, it will probably be the last email for a while. If and when I reopen the subscriber/membership part of this site, I’ll send another email then to share updates. For those of you here with me up to this point, thanks so much for reading and being part of my journey on this site!
This is the first moon trip in 50 years, so it’s the first one where people have phones and modern DSLR cameras to take pictures. These photos of the earth from the spacecraft are so cool.
This is my favorite one.
NASA astronaut Christina Koch peers out of the Orion spacecraft
Yesterday, I wrote about wanting to hit 1000 words a day in writing. I’m sure some might think, why would you want to do that? I even asked myself. It’s not like I don’t have plenty to do. I’ve got a good career and no shortage of work for my day job. I’m married, and though my daughters are grown now, we still have an active and busy family life. For me, the desire to write more comes down to a few things.
The first thing is that I love writing. Nothing satisfies like spending time crafting sentences that lead me to some new thought. It’s that idea made famous by several authors (Joan Didion and Flannery O’Connor come to mind) that a writer (well, anyone really!) doesn’t know what she thinks until she writes it down. I very much fall into that camp. Writing is thinking for me, which leads me to the second thing.
I want to slow down and spend more time thinking. Writing takes time. Thinking takes time. I’m just so over hustle culture. To combat that pressure to move faster, to do more, to hustle, I want to intentionally slow down. Carving out time to write is the best way, and the most rewarding way, for me to do that.
Which leads me to the last thing.
I feel like I am uniquely situated for this present moment where science wants to overtake art, in that I am myself equal parts art and science. I love story, emotion, feeling, and art. I spent the first quarter of my life studying language, literature, and fiction writing. I have also spent years professionally building up skills in programming, math, and science. I reject the idea that we need more STEM and less art. We need both, and so, I want to explore the territory there in the middle. Hopefully, these 1000 words per day will lead to something worth saying in that space.