AI is making me a better editor

Everyone worries AI will make us lazy writers.

The opposite is happening to me.

AI wants to write everything. Long paragraphs. Perfect transitions. Neat conclusions that wrap everything up with a bow.

I spend most of my time cutting. Removing words I don’t like. Cleaning up endings that over-explain the point.

The creative work has shifted. It’s not about generating anymore, it’s about recognizing what matters and ruthlessly eliminating what doesn’t.

I’m forced to practice saying “no” to perfectly decent sentences because they’re not essential.

I’m learning how to distill without losing the core.

How to be brief without being empty.

Small steps with AI

As a junior developer, I wanted to rewrite everything. Big commits, massive refactors, complete overhauls. It felt productive.

It wasn’t.

Senior developers know better. Small commits. Testable changes. Break it down further.

As a CTO, I coach the same thing. What’s the smallest thing we can release? Can we make a PoC first? How do we break this down?

Now I’m watching people learn this same lesson with AI.

AI wants to write your entire blog post, refactor your whole function, solve your complete problem. Just like junior me, it feels productive.

It isn’t.

The people who get the most out of AI are treating it like code. Small changes they can verify. Iterative improvements they can stand behind.

They’re learning in months what took me years: the more powerful your tool, the more restraint you need to use it well.

Turns out incremental thinking isn’t just good engineering. It’s good everything.

Make Creation Easier Than Consumption

Creation loses to consumption because we make it harder than it needs to be.

Want to start writing? We think about starting a blog, choosing a platform, designing a theme. Too much friction.

Want to learn to code? We plan out an entire app, research frameworks, set up development environments. Too many steps.

Want to draw? We think about technique, style, the right tools. Too much pressure.

Meanwhile, consumption is always one click away.

The path of least resistance wins every time.

So make creation the easier path: open a text file and write one paragraph. Write a function that adds two numbers. Draw something, anything, on paper.

Lower the barrier until creation becomes the default choice.
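Taken literally, that first coding step really is tiny. A sketch of the smallest possible creation:

```python
# The smallest possible creation: a function that adds two numbers.
def add(a, b):
    return a + b

print(add(2, 3))  # 5
```

That's it. No framework, no dev environment setup, no app plan. It exists, and that's the point.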

The Two-Hour Reality

Most productivity advice assumes you’re optimizing your entire day.

But that’s not reality.

Work takes 8 hours. Sleep takes 8 more. Cooking, family time, and basic life maintenance fill most of the rest.

What you’re really optimizing for is maybe 2 hours of discretionary time.

Maybe an hour in the morning before things get hectic. Maybe an hour in the evening after everything settles down.

The consumption vs creation debate isn’t about revolutionizing your whole life. It’s about what you do with those precious few windows.

That’s when you’re actually choosing: do I scroll or do I write? Do I watch or do I build?

Forget the grand life overhaul. Just win those 2 hours.

The Finished Work Stumble

Getting back to creating after completing something has always been complicated for me.

It’s like the weight of the completed, iterated, polished version becomes the goalpost for the next creative thing. The next one feels like it has to start at the level the previous one reached when it was finished.

This happens almost every time I try to create again. I overconsume data, reports, articles, and research to make a “better” next project.

But really, I’m just avoiding starting at the messy beginning level again.

The solution isn’t better research. It’s permission to suck again.

Your first draft doesn’t need to compete with your last published piece. It just needs to exist.

The Long-Term Trap

We’ve trained ourselves to think long-term about everything.

“But what if this needs to scale to millions of users?”

“How will this architecture handle enterprise requirements?”

“What happens when the team grows to 50 developers?”

All reasonable questions. For the wrong projects.

I’ve watched teams spend years debating Symfony vs Laravel, only for the problem to disappear in two weeks when the company adopted Java as the golden path. They were so focused on building the “right” foundation that they missed the shifting ground beneath them.

When you’re building a prototype to test an idea, enterprise-grade infrastructure is like bringing a crane to hang a picture frame.

The bias toward long-term thinking creates an invisible tax on every decision. Database schemas become more complex than needed. Code gets abstracted for flexibility that never comes. Simple features become architectural discussions.

Meanwhile, competitors ship working versions and learn what actually matters.

The irony? By the time you’ve built something robust enough to “scale properly,” the market has often moved on. Your bulletproof solution solves yesterday’s problem with tomorrow’s complexity.

I’m not advocating for sloppy work. I’m questioning our default assumption that every project needs to survive nuclear war.

Sometimes the best long-term strategy is admitting you don’t know what the long term looks like.

Build for what you know today. Let tomorrow teach you what it actually needs.

The Five-Month Revolution

LinkedIn is buzzing about vibe coding’s impact on organizations.

“It’s changing everything!” say the enthusiasts.

“It’s destroying software quality!” counter the traditionalists.

Both camps are missing something fundamental: vibe coding has existed for five months.

Five. Months.

That’s not enough time to change a coffee order habit, let alone organizational foundations. Yet here we are, debating whether it’s revolutionizing or ruining enterprise development.

I’ve watched companies spend longer than five months just deciding which project management tool to use. The idea that a coding approach could fundamentally reshape organizational values in the same timeframe is… well… optimistic.

Real organizational change operates on geological timescales. Culture shifts happen when people retire, not when new tools emerge. The companies celebrating vibe coding victories today are probably the same ones who adopted React faster than everyone else because they were already comfortable with experimentation.

The companies dismissing it as dangerous chaos? They were already risk-averse.

Vibe coding didn’t change these organizations. It revealed them.

Maybe the question isn’t whether vibe coding will transform how we work. Maybe it’s whether we’re ready to see what our organizations actually value when a faster option appears.

Five months is just enough time to show what was already there.

The Figma Fallacy

The first time I held an iPod, I understood something that watching Steve Jobs demo never could have taught me.

Weight. Texture. The satisfying click of the scroll wheel.

You can’t experience lag in a prototype. You can’t feel the awkward pause between tapping a button and seeing a response. Screenshots are beautiful liars.

We’ve gotten so good at making static designs look finished that we’ve forgotten they’re just educated guesses.

A perfectly polished Figma prototype sends the wrong signal: “This is ready. Don’t change it.” The more professional it looks, the more people hesitate to suggest improvements. Nobody wants to be the person asking for “small tweaks” to something that looks complete.

But show someone a working prototype where they can type in a text field and watch their words appear instantly? Different story.

“Can this button be bigger?”

“What if the text was clearer?”

“This feels slow - can we make it faster?”

Suddenly everyone becomes a user experience expert. Not because they know more, but because they’re experiencing rather than imagining.

The best feedback comes from touching, not looking.

Paper sketches invite edits because they look unfinished. Working prototypes invite interaction because they feel real. Polished mockups invite approval because they seem done.

Choose your invitation wisely.

The book recommendations I need

Here is a service I desperately need: a book recommendation based on where I stopped reading.

“Ohh, you stopped reading Lean Startup after four chapters. Then you should probably check out The Everything Store instead, because it has less jargon and is more narrative-driven.”

Smart recommendations it could make:

“You stopped at the theory-heavy part, try this more practical version”

“Most people who stop here prefer memoirs over how-to books”

“You made it 60% through, here’s something that builds on those concepts”
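A toy version of that service could start as a simple rule table. The book titles, thresholds, and suggestions below are made up for illustration, not a real catalog:

```python
# A hypothetical sketch of abandonment-based book recommendations.
# The rule table maps (book, how far the reader got) to a next suggestion.

def recommend(book: str, stopped_at_pct: int) -> str:
    """Map where a reader quit a book to a suggestion for the next one."""
    # Illustrative rules: (threshold_pct, suggestion), checked in order.
    rules = {
        "Lean Startup": [
            (30, "try a narrative-driven business book instead"),
            (60, "try a more practical, checklist-style book"),
        ],
    }
    for threshold, suggestion in rules.get(book, []):
        if stopped_at_pct <= threshold:
            return suggestion
    # Reader nearly finished, or we have no rules for this book.
    return "pick something that builds on those concepts"

print(recommend("Lean Startup", 25))
```

The real value would come from aggregate data ("most people who stop here prefer memoirs"), but even a hand-written rule table would beat generic bestseller lists.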

The code that wouldn't die

I remember this piece of code.

Messy doesn’t even begin to describe it. At least once a year, someone would try to kill it. And fail.

On paper? Simple. It picked a user’s market and language. Geographic data plus browser settings, with some fallbacks thrown in. Easy, right?

Wrong.

The code was so woven into our system that touching it meant rewriting everything. We kept thinking “IT JUST PICKS MARKET AND LANGUAGE!!!1?” But that abstraction blinded us to the real complexity.
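The deceptively simple version we all pictured might look something like this sketch. `GEO_MARKETS` and the defaults are hypothetical stand-ins; the real code hid far more edge cases behind this abstraction:

```python
# A sketch of the "it just picks market and language" abstraction.
# GEO_MARKETS and the defaults are made-up stand-ins for the real lookups.
GEO_MARKETS = {"SE": "sweden", "NO": "norway"}
DEFAULT_MARKET = "international"
DEFAULT_LANGUAGE = "en"

def pick_market_and_language(country_code, accept_language):
    """Geographic data plus browser settings, with some fallbacks."""
    market = GEO_MARKETS.get(country_code, DEFAULT_MARKET)
    # Take the first language tag from the browser's Accept-Language header,
    # e.g. "sv-SE,sv;q=0.9" -> "sv". Fall back to a default if it's empty.
    if accept_language:
        language = accept_language.split(",")[0].split("-")[0]
    else:
        language = DEFAULT_LANGUAGE
    return market, language

print(pick_market_and_language("SE", "sv-SE,sv;q=0.9"))  # ('sweden', 'sv')
```

Ten lines on paper. In production, every one of those lines sprouts exceptions: VPNs, mismatched geo and browser settings, markets sharing a language, language overrides stored on the account. That's where the mess came from.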

I see the same pattern now with AI-assisted coding.

You dip your toes in. You think everything’s simple. “Just build me an app that does X.” But you’re the complexity. You understand the full flow. You abstract away all the messy details that make starting from scratch so hard.

This is where Gall’s law hits you: “Complex systems that work have invariably evolved from simpler systems that worked.”

Start smaller. Start easier.

Don’t go all-in on “Create an app where everyone can chat with everyone in the world and it should be encrypted.”

Start with two people. Make that work first.
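A two-person version really can be that small. Here is an in-memory sketch with no network and no encryption, just the simplest system that works:

```python
# The simplest chat that works: two people, one shared message list.
# No network, no encryption, no accounts. Those come later, per Gall's law.
class Chat:
    def __init__(self):
        self.messages = []

    def send(self, sender, text):
        self.messages.append((sender, text))

    def history(self):
        return list(self.messages)

room = Chat()
room.send("alice", "hi")
room.send("bob", "hello")
print(room.history())  # [('alice', 'hi'), ('bob', 'hello')]
```

Once two people can exchange messages, you have a working system to evolve. Encryption and "everyone in the world" are later iterations, not the starting point.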