Doing or Approving

We assume change will happen to systems, yet we treat their state as something fixed, never to change.

We should not act surprised if they don’t change.

Right prototype for the right job

High-definition mockups build excitement and show stakeholders potential end results.

Functional prototypes let you iterate with real users and discover how they actually interact with your solution.

Both have their place. Choose based on what you need to learn or communicate.

Noise to Narrative

The way diffusion models are trained is that an existing image is tagged with a description of what's on it. Then a small amount of noise is added to that picture in iterations, following a carefully designed schedule, until it ends up as a mess of pure noise. The model specifically learns to predict what the “less noisy” version of an image should look like at each step.

Then, when you want to create an image, you start from pure noise, add your prompt (i.e. the tag), and the model starts removing noise, doing this iteratively until you see an image. The text prompt is converted into a numerical representation that guides the denoising process.
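The two loops described above, noising during training and denoising during generation, can be sketched in a few lines. This is a toy illustration only: the `predict_noise` argument stands in for a trained neural network (which would also be conditioned on the prompt), and the single `beta` schedule value is a simplification of the carefully designed schedules real models use.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_noise(image, steps, beta=0.05):
    """Training-time forward process: gradually mix the image with noise."""
    noisy = image.copy()
    for _ in range(steps):
        noise = rng.standard_normal(image.shape)
        noisy = np.sqrt(1 - beta) * noisy + np.sqrt(beta) * noise
    return noisy

def toy_denoise(noisy, predict_noise, steps, beta=0.05):
    """Generation-time reverse loop: repeatedly remove the estimated noise.

    `predict_noise` is a stand-in for the trained model; a real system
    would condition it on the text prompt's numerical representation.
    """
    x = noisy
    for _ in range(steps):
        estimated_noise = predict_noise(x)
        x = (x - np.sqrt(beta) * estimated_noise) / np.sqrt(1 - beta)
    return x

clean = np.zeros((8, 8))            # pretend this is a training image
noisy = forward_noise(clean, steps=50)
```

After enough steps the image is statistically indistinguishable from noise, which is exactly the starting point the generation loop reverses from.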

Think of this as when you try to learn something new. You have some hooks in your brain from before, you start to add context, new lessons, new information and suddenly you see the areas somewhat clearer.

Most LLMs (ChatGPT, Mistral, Claude, Gemini and so on) usually write like we humans do. We add one word after another.

Diffusion-based text models have started to appear. Think of them as working in reverse: you might input the last words, like the ending of a script, and from complete noise the model starts to add in words that make sense.

The model keeps refining this noisy text, adding more coherent words, until in the end we have something somewhat unique. This approach creates text quite differently from traditional word-by-word generation.
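The refinement idea can be sketched as a toy loop. This assumes a masked-denoising formulation (one common way diffusion language models are built): the text starts as all "noise" (masked positions), and each pass commits the positions a model is most confident about. Here `guess_word` is a hypothetical stand-in for a real model's prediction.

```python
import random

random.seed(42)

# Hypothetical "model knowledge": a real system would score candidate
# words with a trained network conditioned on the prompt.
TARGET = ["the", "cat", "sat", "on", "a", "mat"]

def guess_word(position):
    """Stand-in for a model prediction: returns (word, confidence)."""
    return TARGET[position], random.random()

def iterative_refine(length):
    text = ["[MASK]"] * length          # complete noise: everything masked
    while "[MASK]" in text:
        # score every still-masked position
        scored = [(i, *guess_word(i))
                  for i, w in enumerate(text) if w == "[MASK]"]
        # commit the most confident half of the remaining positions
        scored.sort(key=lambda t: t[2], reverse=True)
        for i, word, _ in scored[: max(1, len(scored) // 2)]:
            text[i] = word
    return text

print(" ".join(iterative_refine(6)))
```

Unlike left-to-right generation, nothing forces the early passes to settle the beginning of the sentence first; any position can be filled in whenever the model is confident about it.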

What does this mean for texts?

Since it is not constrained to left-to-right thinking, the output can be more creative and surprising. It could also solve issues where sequential models struggle to maintain coherence across long passages.

In this way it mimics how creatives work. Moving from rough draft to refinement to polished work.

A future existence of multiple models that we can use in different parts of the creative flow is exciting.

Urgent vs Important

There is a broad urgency focus when it comes to GenAI and how everyone can use it.

But remember that urgent is not the same as important.

Depending on who set the expectations, urgent matters can become important to acknowledge without being important to act on.

The AI revolution demands we distinguish between what requires attention and what deserves action.

Choose your priorities based on impact, not hype.

Our inner oasis

I’ve got a mental model about learning: that our minds are like deserts where everything is buried under sand.

When we approach something, we brush away the sand, and that area becomes clearer.

If we don’t use it for a while, the sand slowly returns, partly covering it again, but we can still see the shape.

From quills to prompts

When people first learned to read and write, they often wrote in fancy, complicated ways with plenty of extra words.

As more people became literate over time, writing styles changed.

People started to value writing that was clear and to the point instead.

Is it possible that we will face similar changes with Generative AI and LLMs?

At first, people might show off by generating massive amounts of text.

Then we might shift to valuing brevity while maintaining the same meaning.

Eventually we could reach a point where writing adapts in ways that blur the line between human and AI-assisted text.

Taming AI-Generated Development

AKA How to tame Vibe Coding:

  • Create a branch before asking AI to write any code
  • Request only small specific changes from the AI
  • Commit those changes immediately when they work
  • Complete a “micro-sprint” then merge to main
  • Work small and fast with one-hour blocks not two-week marathons
  • Be prepared to start over by breaking tasks into even smaller pieces
  • Begin with manual implementations before attempting automation
  • Expect longer timelines for final production builds despite initial rapid progress
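The branch-commit-merge rhythm from the checklist above can be sketched as a small helper that emits the git commands for one micro-sprint. The branch name, commit message, and the helper itself are hypothetical placeholders; the point is the sequence, which you could run by hand or wire into your own tooling.

```python
def micro_sprint_commands(branch, change_description):
    """Return the git command sequence for one small AI-assisted change."""
    return [
        f"git checkout -b {branch}",       # branch before the AI writes code
        # ...ask the AI for one small, specific change, verify it works...
        "git add -A",
        f'git commit -m "{change_description}"',  # commit as soon as it works
        "git checkout main",
        f"git merge {branch}",             # close the micro-sprint
        f"git branch -d {branch}",
    ]

# Example micro-sprint (names are illustrative)
for cmd in micro_sprint_commands("ai/login-copy", "Tighten login error copy"):
    print(cmd)
```

If the AI-generated change goes sideways, abandoning the branch costs nothing, which is exactly what makes the one-hour blocks safe.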

Emotional journey of change

Change within companies follows a fairly predictable emotional journey.

It begins with “everything is fine”, as we hold on to familiar processes despite declining results.

When change is bound to happen, resistance naturally follows as we push back and question “why is this even needed?”.

As new processes take hold, confusion starts to emerge: “this feels backwards, are we even moving faster?”.

Then, all of a sudden, we reach the breakthrough moment, “aha, this actually works better!”, as we see the benefits that made the discomfort worthwhile.

Understanding these phases can help leaders guide their teams through transitions.

The human perspective

As I pack up my bags to leave Copenhagen after spending some awesome days at #NAMS25, I’m reflecting on the conversations that dominated the event. Amid all the panels and presentations, one question kept surfacing: what happens when AI replaces us?

This isn’t speculation anymore; it’s become the underlying theme at almost every industry summit I’ve attended this year.

When we attend these events we seek human connection, perspectives and experiences. We want someone genuine being present and delivering insights that make our spider-senses go “what the heck!?”.

Nordic AI Media Summit 2025 delivered on this and went beyond.

But it got me thinking about other summits and presentations I’ve attended over the years.

Sometimes we feel immediately uncomfortable when someone is clearly just reading from a script. When the delivery sounds like a Dalek monotone or presentations lack visual pizzazz, we mentally check out and struggle to stay present even though the content is great.

We need a story or person we can connect with, whether positively or negatively, that demonstrates there’s an actual human sharing their personal perspective. It reminds me of that odd feeling when someone accepts an award via pre-recorded video. We feel cheated of the authentic experience we expected.

At NAMS25, these discussions about human authenticity intersected with questions about our future roles. If AI will replace much of what we do professionally, what remains uniquely human?

I believe this tension will reshape news media. AI will handle the facts, the police reports, the sports scores and the basic who-what-where of breaking news.

What remains uniquely human is our ability to provide context through lived experiences. AI can’t share what it felt like to be in the office the day after the last US election, or how your commute to work was obstructed by a bridge being removed.

Our future does not lie in competing with AI on data, but in the human connection, the unique local view of the community, what they have experienced and how to tell that story in the right context.

That perspective will be hard for algorithms to replicate.

Coding careers remain valuable

Touch any prototype and everyone instantly realizes how far their mental model was from reality.

That moment of clarity, when abstract ideas collide with concrete implementation, remains one of the most powerful feelings in product development.

As we get closer to mid-2025, this fundamental truth exposes a subtle piece of misinformation: “AI will replace coders”. The narrative suggesting that coding careers are fruitless has proven more persistent, and I would say potentially more harmful, than the feared waves of misinformation and deep fakes that dominated discussions in 2024.

This parallel struck me while listening to a panel discussion at NAMS25 today.

While AI coding tools have evolved to excel at generating code, they have not begun to bridge the critical gap between what users say they want and what they actually need.

When someone types “Build me an app that does THIS”, a somewhat vague request, it starts a journey towards a valuable product. And that journey always involves human judgment at multiple decision points.

The most valuable skills still remain uniquely human: breaking apart problems, systems thinking and the ability to translate between technical possibilities and actual human needs.

This is what makes AI tools so powerful in the right hands. They amplify capabilities that already exist, they don’t replace them.

For those making career decisions: Don’t shy away from development because of AI’s rise. Instead, focus on developing skills AI can’t replicate. Learn to collaborate effectively with the tools and (this is crucial) maintain the human abilities that give tech its purpose.

Touching that first prototype and having the ability to recognize that it is not working: that is the core skill we developers need.