Longevity in code
Someday in a distant future people will complain that your code is the reason they are in this mess.
Understanding someone else’s work is not the same as being able to do the work yourself. Watching is not practicing.
Becoming an engineering manager and later CTO meant becoming a worse developer.
Not because I forgot how to code. Because I stopped feeling the friction.
I used to know when the build took too long. When a dependency broke randomly. When the deploy process made you hold your breath.
Now I have to ask people to describe pain I no longer experience.
You get promoted away from the thing that made you good at your job.
You lose the direct feedback loop. The thing that told you what actually mattered.
It’s a different kind of work. Sometimes I miss just feeling it myself.
January was supposed to be one month. A writing experiment. Post something every day and see what happens.
Then February came. Then March. Now it’s almost November.
I lowered the bar to make it sustainable. The streak itself became the higher bar. Never break the chain.
I’m not just protecting the chain. The chain protects me.
Every day I’m scanning for something worth capturing. A feeling. An observation. A pattern I haven’t noticed before.
The daily practice forces me to keep looking.
The commitment isn’t to writing anymore. It’s to staying receptive.
Fly from Stockholm to New York with a one-degree leftward heading error. No corrections. You’ll miss Manhattan by over 100 kilometers.
This is AI-assisted development without a clear destination.
Each generated function that’s “close enough” creates a small deviation. The AI suggests Axios when you use Fetch. It abstracts the user model differently than your auth system. It handles errors with try-catch instead of your Result types.
Twenty iterations later, your codebase has drifted into unfamiliar territory.
The drift is subtle. Each suggestion seems reasonable. The AI solves the immediate problem. The code works.
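To make the drift concrete, here’s a minimal sketch of one such deviation. The getUser endpoint and the Result type are hypothetical, standing in for whatever your codebase actually uses:

```ts
import axios from "axios";

// The codebase convention: native fetch plus an explicit Result type.
type Result<T> = { ok: true; value: T } | { ok: false; error: string };

async function getUser(id: string): Promise<Result<{ name: string }>> {
  const res = await fetch(`/api/users/${id}`);
  if (!res.ok) return { ok: false, error: `HTTP ${res.status}` };
  return { ok: true, value: (await res.json()) as { name: string } };
}

// The AI's "close enough" version: axios plus try-catch. It works,
// but now the codebase handles HTTP and errors in two different ways.
async function getUserSuggested(id: string): Promise<{ name: string } | null> {
  try {
    const { data } = await axios.get<{ name: string }>(`/api/users/${id}`);
    return data;
  } catch {
    return null;
  }
}
```

Neither version is wrong in isolation. The second one just points the codebase a fraction of a degree off course.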
Working code aimed at the wrong target is still failure.
A flight plan isn’t just knowing you want to reach New York. It’s knowing your route, your waypoints, your altitude. In code, that means architecture decisions, patterns, constraints. All defined before you start.
Just like pilots check waypoints, you need architectural checkpoints. Regular moments to ask: Is this still the system I intended to build?
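Some of those checkpoints can be automated. As a sketch, assuming ESLint’s flat config and its built-in no-restricted-imports rule, the “use fetch, not axios” decision from above could be encoded so drift gets flagged at review time instead of twenty iterations later. The specific constraint and the fetch-wrapper path are examples, not a prescription:

```js
// eslint.config.js — encode an architectural decision as a lint rule (sketch)
export default [
  {
    files: ["src/**/*.ts"],
    rules: {
      "no-restricted-imports": [
        "error",
        {
          paths: [
            {
              name: "axios",
              message: "HTTP goes through the shared fetch wrapper, not axios.",
            },
          ],
        },
      ],
    },
  },
];
```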
Moving faster in the wrong direction just means arriving at the wrong destination sooner.
Working with AI tools makes one thing obvious: the more input I give, the bigger the output gets.
But what I actually want is short and crisp. When I ask a peer for their opinion, I usually don’t want the five-minute explanation. I want their 30-second take. If that resonates, good or bad, we can go deeper.
The problem is that AI can’t read your signals until it’s done with its response.
When a person rambles, you can interrupt. “Wait, what did you mean by that?” They rewind to that specific moment. You probe where it matters.
AI doesn’t let you probe mid-stream. You either get everything dumped at once, or, when you ask for a concise version, a compressed answer with no way back to the reasoning.
That’s why I end up asking twice. And if I then want to understand why it suggested something, I have to dig through that first wall of text.
Conversations with humans let you navigate. AI makes you choose between getting lost in details or flying blind without them.
Every other week I have a calendar invite to think about three months from now. Where do I need to be? What needs to be finished? How far along should this project be?
It’s easy to get deep into current events and daily fires. This biweekly pause forces me to step back and get a bird’s-eye view.
Today I ran a workshop and noticed something people rarely talk about: you usually need to repeat yourself about three times to get anything done.
Once to the full group. Once to each team in their own context or room. And often once more to clarify specific details.
This isn’t about bad documentation or confused participants. It’s about how facilitation actually works.
You explain the task. People nod. Then they start working and need to hear it again in their specific context. Then once more on a particular detail.
The documentation did its job. But walking between rooms and repeating myself is part of the process, not a sign something went wrong.
Generalists might be the winners in AI-assisted development.
AI writes perfect functions when you tell it exactly what to build. But someone needs to know the real problem isn’t the code. It’s the caching strategy, the mobile UX, or how this connects to the payment system.
The specialist gets incredible productivity in their narrow slice. The generalist becomes the conductor.
The catch is that with too many generalists, nobody can decide what to actually build.
I’ve been recording myself talking instead of writing. Then letting AI clean it up afterward.
When I write, I’m constantly stopping. Should this be bold? Does this sentence need a comma? Is this the right word? I’m editing while generating.
When I talk, I can’t format while speaking. I can’t fix spelling. I just follow the thought.
If one idea splits into three, you just keep talking. You don’t feel locked into the structure you started with. Maybe this becomes one post or maybe it becomes three. You don’t have to decide upfront.
With writing you often feel committed to the outline you had in mind. You start with “this is my post about X” and that becomes a constraint. But when you’re talking and X reminds you of Y and Z, you just follow it. Later you can carve it up into separate pieces.
The generation mindset and the evaluation mindset compete for the same mental resources. When you’re filtering for quality while generating, neither process works well.
So I record. I ramble. I follow tangents. Then the AI translates it into something readable. It handles the “should this be bold” phase while I focus on getting ideas out.
The low bar approach captures half-formed thoughts and weird tangents that sometimes lead somewhere unexpected. You can raise the bar later when you switch modes.
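I haven’t settled on one fixed setup for the “let AI clean it up” step, so treat this as a sketch only: one way to wire record-then-clean-up with the OpenAI Node SDK, where the file name, model choices, and prompt are all placeholders.

```ts
import fs from "node:fs";
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// 1. Transcribe the rambling voice memo.
const transcript = await client.audio.transcriptions.create({
  file: fs.createReadStream("morning-ramble.m4a"),
  model: "whisper-1",
});

// 2. Ask a model to clean it up without adding ideas.
const cleaned = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [
    {
      role: "system",
      content:
        "Turn this spoken transcript into a readable draft. Fix grammar, add paragraph breaks, keep the speaker's voice. Do not add new ideas.",
    },
    { role: "user", content: transcript.text },
  ],
});

console.log(cleaned.choices[0].message.content);
```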