Anderycks.Net by Deryck Hodge

I'm Deryck Hodge.

Software developer working in news and media in Atlanta.

I write about the tech industry, building tools for writers and writing, and the stories we tell ourselves about technology.


Be the expert

Sign me up for more of what Gary Marcus is selling in his post “Dario Amodei, hype, AI safety, and the explosion of vibe-coded AI disasters.”

In the hands of very skilled practitioners who pay a lot of attention, and treat the outputs with considerable scrutiny, coding agents can be astonishing. But that kind of expert knowledge, which coding agents can’t be counted on for, is exactly why we need to keep software engineers in the loop. And it is why most of us who are in the know have such a hard time taking Amodei’s hype about getting rid of software engineers (who look at the whole problem, and not just isolated bits of code) seriously.

There’s so much about this post that aligns with the things I’m encountering and thinking about recently. I appreciate that Gary put into words a lot of what I’m feeling. It’s this idea of “expert knowledge” that I keep returning to.

I’m not sure some people, some of whom I know are heavily using generative AI and coding agents, fully grasp how unreliable LLMs are. I’ve heard people at work using them in domains they don’t fully understand. I’ve seen students using them for classes they don’t care to understand. There are countless YouTube videos hyping the things people “with no knowledge” can do now. This is clearly a mistake. LLMs are best when paired with expert knowledge, where the expert verifies their output.

My younger daughter is in college now. She’s studying math, engineering, and computer science. I’ve given her one piece of advice, even as AI is advancing all around her. That advice: be the expert. Don’t cede control. Understand what you’re doing as you leverage LLMs, and you—and the machine—will be just fine.

The Hallmark of a Great Personal Project

I took a break this week from the slog of my effort to write 1000 words a day. I think this project might now be done for me. The project to write 1000 words every day, I mean. I’ll still be blogging here regularly, but I think I’ll slow down a little now. I’m glad I did it—I would consider the effort a success—and here’s why.

Writing a lot is the best way to find your voice. I was writing so infrequently, writing more software than prose, that I really did lose my voice a little. Writing 1000 words every day helped me find my voice and the topics I’m interested in writing about again. I like where this blog is positioned now, and I like the style of my writing for each post.

I’m also quicker at writing now. I can turn around a quick post linking to something I read in a few minutes. I’m more likely to reach for a post here on this site rather than posting on Mastodon or Bluesky. I like that. It feels better to me.

I got what I wanted out of it, this 1000 words per day. I’m ok to let it go now. That is the hallmark of a great personal project for me.

Software Brain is the best way to describe the framework tech people use for understanding the world

I really didn’t want to link two Verge posts in one day, but man, Nilay Patel’s The People Do Not Yearn for Automation (via Daring Fireball) is a near perfect essay that deserves your attention.

Also, what a great title for an essay!

Software brain is powerful stuff. It’s a way of thinking that basically created our modern world. Marc Andreessen, the literal embodiment of software brain, called it in 2011 when he wrote the piece “Why software is eating the world” as an op-ed in The Wall Street Journal. But software thinking has been turbocharged by AI in a way that I think helps explain the enormous gap between how excited the tech industry is about the technology and how regular people are growing to dislike it more and more over time.

Patel’s use of Software Brain as a concept reminds me of discovering literary theory in college while working on my English degree. It gave me a language, a way to think about reading, which unlocked new ideas in my mind. Patel’s naming of this thing “Software Brain,” a thing I previously understood intuitively, effectively does the same thing literary theory once did. Now I can more clearly understand and talk about something I once only felt.

Software Brain is exactly how tech people understand the world. It’s the source of that disconnect between tech people and regular people. This concept is perfectly named and perfectly described by Nilay Patel.

I lean toward the written word, but the video essay version is also quite good.

Is it about to cost more to use generative AI tools?

It’s about to cost more to use generative AI tools. How do we know this? Let the Verge, in You’re about to feel the AI money squeeze, count the ways for you.

“Is the era of basically free or close-to-free AI kind of coming to an end here?” said Mark Riedl, a professor in the Georgia Tech School of Interactive Computing. “It’s too soon to say for certain, but there are some signs.”

I like that this article also covers open source and open weight models.

David DeSanto, CEO of software company Anaconda, recently returned from a five-week trip around the world speaking to customers. He said that many were moving to self-host AI models — deploying their own within Amazon Bedrock or Google’s Vertex AI to have more control over the supply chain — or changing to open-source or open-weight models for a lot of their needs, since many such models have significantly improved on benchmarks as of late.

I’ve been wondering what sort of squeeze open source would put on these companies. OpenAI and Anthropic are more and more reliant on enterprise businesses, which have been more easily sold on FOMO. As costs rise, of course they’ll diversify. As frontier models stall, open source and open weight models will catch up. Simon Willison recently reported surprisingly good results running the latest Qwen open weight models. It’s only a matter of time, it seems, and then where will these companies’ business models be?

I have no idea if there’s a great financial and economic collapse coming with these companies. I get that the economics don’t seem sustainable, but I also know that there’s a bit of wish fulfillment going on with all these “there’s a bubble burst a-coming” posts. This Verge article is not like that. It’s a pretty balanced piece, which I appreciate.

Do the work

Colson Whitehead wrote a great piece for the NY Times. It’s labeled as an “Opinion Guest Essay,” which could be better said as just “Essay.” Here’s one bit I loved from Don’t Use A.I. to Do This:

Some people say, “I just use it to brainstorm ideas.” If you don’t know what to paint or compose or write, you’re in the wrong job. Art is the business of making up stuff — go make up some stuff. I asked the Gooch to scour the internet re growth industries, and it recommended telegraph operator and VCR repair. Maybe that’s more your speed.

Some people say, “I just use it for research. It only gets things wrong or hallucinates crazy stuff 30 percent of the time.” I don’t need a research assistant that gets things wrong 30 percent of the time. I can do that myself. Are they trying to replace me?

Oh, right — they are.

There’s no way any single excerpt can do the piece justice. Please—as Whitehead says there—do the work. Go read it for yourself.

I shared this on the socials yesterday with the comment: “I love Colson Whitehead so freaking much.” I really do. I mentioned Tim O’Brien and Flannery O’Connor as my top 2 authors in my post on America Fantastica, but if we expand to 3, there’s Colson Whitehead. I’ve probably actually read more of what he’s written than either O’Brien or O’Connor. Pick a book, any book, from him. They’re all gems and well worth the time you spend with them.

Being critical about the stories we tell ourselves, not so much about using generative AI

I’m sure my recent writings here might lead some to think I’m critical of using generative AI. That’s not really accurate. I see generative AI as normal technology. It has its uses, and we should understand what the best of those are. I am, however, very much in the camp of being critical of the stories we tell ourselves about generative AI.

Stories leave a lasting impression. I’m much more worried about bad stories than bad uses of AI.

That’s why you’ll see a lot of writing here that is critical of AI hype. Or writing that is critical of stories that cause fear and panic about AI. I’m critical of the extremes, those stories at either end of the booster or doomer spectrum.

AI is just normal technology. It has some good uses, and it has some bad ones. We should figure out where that line is, intellectually and rationally, not in a fictional world defined by the companies who most benefit from these fictional tales.

CNN.com article on Lina Khan

When most people think of CNN, I’m sure they think of cable news or 24/7 breaking news alerts. We’ve also got a really good news web site. I know because it’s a large part of what I work on each day. There’s some great journalism going on there. This morning I was reading Edward-Isaac Dovere’s piece Why Democrats with 2028 hopes are calling Lina Khan – and what she’s telling them about remaking the economy:

In 2023, the FTC challenged patents it said were improperly listed, pushing drug manufacturers to allow generic, cheaper versions of some asthma inhalers. And Khan dusted off a 1973 rule originally inspired by book-of-the-month club enforcement and tried to use it to require sellers to make it easier for people to click to cancel online subscriptions.

She went after Amazon for fees charged to businesses selling on the platform. She also moved to stop a $24.6 billion acquisition of the Albertsons grocery chain by Kroger on the grounds that it would raise costs and reduce consumer choice. The acquisition was eventually abandoned after a federal judge blocked the deal.

Today, Khan says the “affordability part of the conversation” inside the Democratic Party must be “paired with accountability.”

It’s a great piece on Khan’s influence on folks across the political spectrum. I knew she was both influential and polarizing, but it seems Democrats are now more interested in embracing her ideas than ever before. That’s a new shift, and I wouldn’t have been aware of it without this reporting.

That Unrelenting Pace

I feel a little off today if I’m being honest. I want to write, but I don’t know if I have it in me to continue at this pace, this unrelenting pace. 1000 words a day is a lot of writing. It never lets up if you keep at it, and I want to keep at it. I must keep at it.

Writing, for me, is the thing that helps make sense of the world. The world, like those thousand words I’m trying to hit, is unrelenting. It never lets up with its rhythm or requirements, its expectations, its temptations, the need to be heard and seen and felt. The world wants to take. Writing wants to give.

It’s like that with technology, too. The pace. Everything is about pace in tech. They like the word productivity, but don’t be fooled, it’s unrelenting pace that they care about. Do more. Make more. Go faster. Win, before we lose, or run out of time, or die. Writing is how we really survive.

It’s counterintuitive, the way writing resists. Here I was just three paragraphs ago feeling the same pressure to perform, to deliver. But I sat down. I got still. I let the words appear, one after the other. A new rhythm develops, a softer, more reflective one. It explains. It defines. It survives.

I feel a little better now.

Links to Interesting Things I Read Recently

Outlasting Technological Inertia

Puck has an article from inside the HumanX conference, what the article calls “the quintessential A.I. conference for operators.” In that piece, Silicon Valley’s Anthropic Anxiety (gift link), Ian Krietzberg writes:

Silicon Valley is famous for proclaiming that things will never be the same, as PagerDuty C.E.O. Jennifer Tejada reminded me. Likewise, she said, the industry narrative surrounding A.I. is often oversimplified. She has large enterprise customers, for instance, that are still in the midst of a long-awaited transition to digital—only “20 percent of the way through their cloud transformations.” And with A.I., she said, “there’s still a big question” around what the high-value use cases are. Meanwhile, widespread A.I. tool adoption requires enterprises to both understand the risks and be able to mitigate them. “These kinds of transformations take a long time,” she said. “They move as fast as society can move, as fast as humans can move.”

I was chatting about a similar idea with a colleague at work today. Technical people often make predictions based on technical requirements, not on business constraints or human inertia. The real question, as Krietzberg outlines well in his piece, is whether Anthropic and OpenAI can outlast that inertia.
