The developer's path forward in the AI era

Over the last few quarters I’ve seen one CEO, CTO or founder after another say something to the effect of “AI will take your job, adapt or die”.

I used to have a great mp3 collection, you know, back in the days when it was almost considered legal.

Keeping it up to date in Winamp, and later iTunes, with correct titles and album art was an amazing feat. Then came apps that focused solely on this cataloguing, using online sources to update the metadata format called ID3.

Then came Spotify and changed that completely. The time spent curating became sunk cost, and my well-catalogued mp3s are probably on an old, dying hard drive somewhere in the house.

This is the reality we must face when it comes to AI, and as a developer I see it vividly.

What used to be “LLMs cannot code” became “LLMs write bad code” and now “LLMs write ok code”.

So we shift focus and find a new weak spot: LLMs cannot fix bugs. Or: I cannot understand the code they write.

But then again, the code I struggle to understand, an LLM can explain to me 99% of the time. The ability to both generate and explain complex code further diminishes our position as keepers of the technical knowledge.

We move the focus to what it cannot do, while something else comes along that changes the playing field: a new Spotify emerges and changes it all.

LLMs with the context memory of your complete repository. With the knowledge of every commit and every connected ticket. That can understand the reason behind each change in an iterative fashion. You will not need to understand the code. It will need to understand the code.

You will need to understand what you want to achieve and why. This understanding of purpose becomes our core value proposition as technical professionals.

As AI agents emerge and become reliable, we will prompt them with “increase the conversion rate by 10%, this is the amount of visitors we have, make small changes until you reach your goal or until you need my attention, you have one quarter”.

Why is this happening so quickly? There’s an economic incentive at play. Developer salaries, especially in Big Tech, have skyrocketed over the past decade. AI development may be running at a loss now, but it might be following the Starbucks playbook: crowd out the competition first, then raise prices. Companies investing billions in AI aren’t doing it for novelty. They’re eyeing the massive labor costs they could eliminate. Right now AI is marketed as an assistant, but the financial motivation to replace rather than augment is powerful.

So where does this leave us as developers? We need to move up the value chain. If algorithms can handle implementation, we must master the intention behind it. This brings us back to first principles thinking.

These are the first principles we aim at: Why do we build this service? What are the end users trying to solve?

This will be the hardest problem to understand and execute on. If my technical skills are no longer required the way they used to be, what should I focus on instead?

History gives us a clue. In the past, key maker, cobbler and tailor were three separate jobs. As the market changed, their crafts were no longer needed to the same extent, but those who survived saw that, with their dexterity, they could adapt to new needs.

My view is that multiple technical crafts will join together, and our focus will shift toward the complete value chain rather than isolated implementation details. We’ll need to understand the whole picture to remain relevant.

Those first principles will not be: to write code.

AI images and brand awareness

How will we keep brand awareness when we use AI to generate our images?

I see the same style cartoons popping up across business social networks.

People generate fun images to pair with their messages. The visuals look cute but generic, completely off-brand and identical to everyone else’s. You might get some laughs. But nobody remembers which brand posted it.

Is that temporary engagement worth sacrificing your visual identity?

Anchoring bias

This happens when we become attached to the first thing we see. Then it becomes a struggle to think of other possibilities.

Do ship, don't polish

Have an instinct to ship. Not an instinct to polish.

I know it is scary, releasing a product in the wild.

Whatever that product is, a piece of music, a service, an idea, make an effort to ship fast to a minimum viable audience.

A viable audience means someone who has a stake in or is interested in your product, not your friends or your family.

Shipping means being open to failure. You cannot fail, or succeed, without shipping. You just do.

I’m also scared of shipping, especially when I’ve just built something for myself.

Doing or Approving

We assume change will happen to systems, but their natural state is to stay still, to never change.

We should not act surprised if they don’t change.

Right prototype for the right job

High-definition mockups build excitement and show stakeholders potential end results.

Functional prototypes let you iterate with real users and discover how they actually interact with your solution.

Both have their place. Choose based on what you need to learn or communicate.

Noise to Narrative

Diffusion models are trained on existing images tagged with what’s in them. During training, a small amount of noise is added to the picture in iterations, following a carefully designed schedule, until it finally ends up as a mess of pure static. The model specifically learns to predict what the “less noisy” version of an image should look like at each step.

Then, when you want to create an image, you start from pure noise, add your prompt (i.e. the tag), and the model starts removing noise, iterating until you see an image. The text prompt is converted into a numerical representation that guides the denoising process.
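For the technically curious, here is a minimal sketch of that loop in Python. It is an illustration under assumptions, not any specific model’s API: the linear beta schedule is the classic DDPM choice, and predicted_noise stands in for the trained network that does the actual denoising.

```python
# Minimal DDPM-style sketch: the forward (noising) process plus the
# reverse sampling loop. The denoiser is a stub; a trained network goes there.
import numpy as np

T = 1000                                    # number of noise steps
betas = np.linspace(1e-4, 0.02, T)          # the "carefully designed schedule"
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)              # cumulative signal remaining at step t

def add_noise(x0, t, rng):
    """Forward process: jump straight to noise level t of a clean image x0."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return x_t, eps                         # the model trains to predict eps

def predicted_noise(x_t, t, prompt_embedding):
    """Stub: the trained network predicts the noise in x_t, guided by the
    numerical representation of the text prompt."""
    raise NotImplementedError("a trained U-Net or transformer goes here")

def sample(shape, prompt_embedding, rng):
    """Reverse process: start from pure noise and denoise step by step."""
    x = rng.standard_normal(shape)          # pure noise
    for t in reversed(range(T)):
        eps_hat = predicted_noise(x, t, prompt_embedding)
        # Move toward the "less noisy" version (simplified DDPM update).
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:                           # re-inject a little noise, except at the end
            x = x + np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x
```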

Think of this as when you try to learn something new. You have some hooks in your brain from before; you start to add context, new lessons, new information, and suddenly you see the area somewhat more clearly.

Most LLMs (ChatGPT, Mistral, Claude, Gemini and so on) usually write like we humans do: one word after another.

Diffusion-based text models have started to appear. Think of them as working in reverse. You might input the last words, like the ending of a script, and then, from complete noise, the model starts to add in words that make sense.

The model keeps refining this noisy text, adding more coherent words, until in the end we have something somewhat unique. This approach creates text quite differently from traditional word-by-word generation.
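To make the contrast concrete, here is a toy of the refine-from-noise idea. It is emphatically not a real diffusion language model: the “denoiser” below cheats by knowing a fixed target sentence and revealing a few positions per step, purely to show the coarse-to-fine loop as opposed to left-to-right generation.

```python
# Toy refine-from-noise loop: start with random characters ("noise") and
# let a fake denoiser fix a few wrong positions each step. A real diffusion
# LM predicts all tokens in parallel and re-noises its least confident ones.
import random

TARGET = "the ending can shape the whole draft"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def denoise_step(chars, n_fix, rng):
    """Fake denoiser: pick a few still-noisy positions and clean them up."""
    wrong = [i for i, c in enumerate(chars) if c != TARGET[i]]
    for i in rng.sample(wrong, min(n_fix, len(wrong))):
        chars[i] = TARGET[i]                     # "denoise" that position
    return chars

rng = random.Random(0)
chars = [rng.choice(ALPHABET) for _ in TARGET]   # pure noise
for step in range(10):
    chars = denoise_step(chars, n_fix=len(TARGET) // 10 + 1, rng=rng)
    print(f"step {step:2d}: {''.join(chars)}")   # watch words emerge from static
```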

What does this mean for text?

Since it is not constrained by left-to-right thinking, the output can be more creative and surprising. It could solve issues where sequential models struggle to maintain coherence across long passages.

In this way it mimics how creatives work: moving from rough draft to refinement to polished work.

A future with multiple models that we can use in different parts of the creative flow is exciting.

Urgent vs Important

There is a broad urgency focus when it comes to GenAI and how everyone can use it.

But remember that urgent is not the same as important.

Depending on who set the expectations, urgent matters can become important to handle, but not important to act on.

The AI revolution demands we distinguish between what requires attention and what deserves action.

Choose your priorities based on impact, not hype.

Our inner oasis

I’ve got a mental model about learning: that our minds are like deserts where everything is buried under sand.

When we approach something, we brush away the sand, making that area clearer.

If we don’t use it for a while, the sand slowly returns, partly covering it again, but we can still see the shape.

From quills to prompts

When people first learned to read and write, they often wrote in fancy, complicated ways with plenty of extra words.

As more people became literate over time, writing styles changed.

People started to value writing that was clear and to the point instead.

Is it possible that we will face similar changes with Generative AI and LLMs?

At first, people might show off by generating massive amounts of text.

Then we might shift to valuing brevity while maintaining the same meaning.

Eventually we could reach a point where writing adapts in ways that blur the line between human and AI-assisted text.