Taming AI-Generated Development

AKA How to tame Vibe Coding:

  • Create a branch before asking AI to write any code
  • Request only small, specific changes from the AI
  • Commit those changes immediately when they work
  • Complete a “micro-sprint” then merge to main
  • Work small and fast with one-hour blocks, not two-week marathons
  • Be prepared to start over by breaking tasks into even smaller pieces
  • Begin with manual implementations before attempting automation
  • Expect longer timelines for final production builds despite initial rapid progress
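The loop above can be sketched as a plain git session. This is a minimal sketch, not a prescribed setup; the branch and commit names are just examples:

```shell
# One micro-sprint, assuming a repo whose default branch is "main"
# and a hypothetical "ai-csv-export" task.
git checkout -b ai-csv-export       # branch before asking the AI for code

# ...ask the AI for one small, specific change, then run and verify it...

git add -A
git commit -m "Add CSV export"      # commit immediately once it works

# Repeat the ask/verify/commit loop a few times, then close the sprint:
git checkout main
git merge --no-ff ai-csv-export     # merge the finished micro-sprint
git branch -d ai-csv-export         # clean up the sprint branch
```

If a step goes wrong, reset back to the last good commit and re-ask with a smaller request; the branch keeps failed experiments off main.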

Emotional journey of change

Change within companies follows a fairly predictable emotional journey.

It begins with “everything is fine”: despite declining results, we hold on to familiar processes.

When change becomes inevitable, resistance naturally follows as we push back and question “why is this even needed?”.

As new processes take hold, confusion emerges: “this feels backwards, are we even moving faster?”.

Then, all of a sudden, the breakthrough moment arrives: “aha! this actually works better!” as we see the benefits that made the discomfort worthwhile.

Recognizing these phases can help leaders guide their teams through transitions.

The human perspective

As I pack up my bags to leave Copenhagen after spending some awesome days at #NAMS25, I’m reflecting on the conversations that dominated the event. Amid all the panels and presentations, one question kept surfacing: what happens when AI replaces us?

This isn’t speculation anymore; it has become the underlying theme at almost every industry summit I’ve attended this year.

When we attend these events we seek human connection, perspectives and experiences. We want someone genuine, present in the room, delivering insights that make our spider-senses go “what the heck!?”.

Nordic AI Media Summit 2025 delivered on this and went beyond.

But it got me thinking about other summits and presentations I’ve attended over the years.

Sometimes we feel immediately uncomfortable when someone is clearly just reading from a script. When the delivery sounds like a Dalek monotone or a presentation lacks visual pizzazz, we mentally check out and struggle to stay present, even when the content is great.

We need a story or person we can connect with, whether positively or negatively, that demonstrates there’s an actual human sharing their personal perspective. It reminds me of that odd feeling when someone accepts an award via pre-recorded video. We feel cheated of the authentic experience we expected.

At NAMS25, these discussions about human authenticity intersected with questions about our future roles. If AI will replace much of what we do professionally, what remains uniquely human?

I believe this tension will reshape news media. AI will handle the facts, the police reports, the sports scores and the basic who-what-where of breaking news.

What remains uniquely human is our ability to provide context through lived experience. AI can’t share what it felt like to be in the office the day after the last US election, or how your commute to work was disrupted by someone removing a bridge.

Our future does not lie in competing with AI on data, but in human connection: the unique local view of a community, what its people have experienced, and how to tell that story in the right context.

That perspective will be hard for algorithms to replicate.

Coding careers remain valuable

Touch any prototype and everyone instantly realizes how far their mental model was from reality.

That moment of clarity, when abstract ideas collide with concrete implementation, remains one of the most powerful feelings in product development.

As we approach mid-2025, this fundamental truth exposes the subtle misinformation in “AI will replace coders”. The narrative that coding careers are somehow fruitless has proven more persistent, and I would say more harmful, than the feared waves of misinformation and deepfakes that dominated discussions in 2024.

This parallel struck me while listening to a panel discussion at NAMS25 today.

While AI coding tools have evolved to excel at generating code, they have not begun bridging the critical gap between what users say they want and what they actually need.

When someone types “Build me an app that does THIS”, that vague request is only the start of the journey towards a valuable product. And that journey always requires human judgment at multiple decision points.

The most valuable skills remain uniquely human: breaking problems apart, systems thinking, and the ability to translate between technical possibilities and actual human needs.

This is what makes AI tools so powerful in the right hands. They amplify capabilities that already exist; they don’t replace them.

For those making career decisions: Don’t shy away from development because of AI’s rise. Instead focus on developing skills AI can’t replicate. Learn to collaborate effectively with the tools and (this is crucial) maintain the human abilities that give tech its purpose.

Touching that first prototype and being able to recognize “this is not working”: that is the core skill we developers need.

Engagement in the loop

If we become mechanical turks just approving whatever AI outputs, will we grow less and less interested in the output we approve, and in the tone of voice and standards it should uphold?

The danger isn’t just putting humans in the loop but keeping us engaged in that loop. Without genuine interest our oversight becomes hollow ritual.

Why tech debt - When boring code gets deprioritized

I remember that old, sluggish legacy PHP code: it still worked 99% of the time, but it was a code soup no one wanted to deal with.

Every newcomer tried to untangle it. I tried. Probably once every year.

That piece of code became boring to work with. It was hard to add features to. It was easier to just build around it, patch it.

Those pieces of code became boring and were never prioritized.

It became tech debt simply by being boring to work with, and every large-scale system has a piece of code like that.

Follow up to why tech debt

Why tech debt - misunderstanding the priorities

The most insidious priority failure is treating tech debt as “optional work.” Companies create a false choice between “shipping features” and “fixing tech debt,” ignoring that they pay for debt daily through slower development and more bugs.

Follow up to why tech debt

Why tech debt - misunderstanding the problem

We over-engineer features users rarely touch, creating unnecessary complexity. This happens when product managers can’t clearly define needs or when engineers solve theoretical rather than actual problems.

Tech debt isn’t solely an engineering issue. When designers create solutions without technical context, product managers can’t articulate clear user needs, or executives push for features without validation, we build the wrong things well rather than the right things simply.

The most dangerous misunderstandings come from solving theoretical problems instead of actual ones. We build complex, flexible frameworks anticipating edge cases that never materialize, leaving behind over-engineered systems that must be maintained without delivering proportional value.

Follow up to why tech debt

Why tech debt

All tech debt stems from misunderstanding three things: the problem (what we’re solving), the priorities (what matters most), or the code (how it actually works).

It actually is that simple. We create tech debt when we work on things we think are interesting instead of what’s truly important (priorities), when we build before truly understanding the needs (problem), and when we work with old code without knowing why it works a certain way (code).