Entering the Post-Link Era

We are entering an era where users no longer navigate the web through links, but through conversations with AI. These systems don’t just point to information; they break it down into pieces that they can join, personalize, and deliver directly to you. Search results, news pages, and social feeds are slowly being replaced by AI assistants that process information on our behalf.

Sure, there’ll still be links, but think of them more as page 2 of Google results. People who want answers want them quickly, and GenAI delivers them right now.

How this will disrupt the ads business, a business built on eyeballs, we don’t know yet. But just think about the effect when an AI assistant searches on your behalf and gives you only the results you need.

Algorithms are not social

Social media is no longer social, it is just media. We’ve forgotten to remove the “social” part when we talk about it.

We once came to these platforms to see what our friends were up to. Then we returned because we got what the algorithms predicted we should like. Now they’re no different from TV streaming or radio, except for their more efficient dopamine-triggering mechanisms.

Algorithms are not social.

The evolution of AI coding agents

Throughout my development career I’ve probably been acting like a code chameleon. Join a company, look at the PRs and code, try to mirror the style, and quietly ask, “Which code and PRs should I mimic if I want my submissions to pass quickly?”

This adaptive behavior isn’t just about following rules. It’s about understanding the unwritten language of a new codebase: teams develop patterns and practices that encode their knowledge.

Current AI coding tools miss this context. They generate functional but generic code drawn from open-source repositories; in isolation it’s awesome, but against an existing internal codebase it’s off.

They write average code for specific environments that don’t want to be average.

Beyond Simple Generation

I believe we’re heading toward a more sophisticated approach with specialized AI agents handling different aspects of development.

A Solution Agent that focuses purely on solving the problem functionally, without concern for style or conventions. An Adaptation Agent that transforms this solution to match the company’s specific patterns and practices. And possibly a Readability Agent that ensures the code is comprehensible to humans.

First write the expectations, then generate the code, then create tests, then refine the code. Each step handled by agents optimized for that specific task.
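The staged handoff between agents could be sketched like this, with hypothetical stub functions standing in for the real LLM-backed agents (the names, signatures, and conventions dict are assumptions, not an existing API):

```python
# Hypothetical multi-agent pipeline: each stage is a stub where a real
# system would call an LLM with a specialized prompt.

def solution_agent(problem: str) -> str:
    # Solve the problem functionally, with no concern for style.
    return f"def solve():\n    # handles: {problem}\n    return 42"

def adaptation_agent(code: str, conventions: dict) -> str:
    # Rewrite the solution to match the team's conventions (stubbed).
    doc = conventions.get("docstring", "")
    return code.replace("def solve():", f'def solve():\n    """{doc}"""')

def readability_agent(code: str) -> str:
    # Final pass: make sure the result is comprehensible to humans.
    return code

def pipeline(problem: str, conventions: dict) -> str:
    code = solution_agent(problem)
    code = adaptation_agent(code, conventions)
    return readability_agent(code)

result = pipeline("sum two numbers", {"docstring": "Team style applies."})
```

The point of the sketch is the separation of concerns: each stage has one job, so each can be optimized (or swapped out) independently.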

This mirrors how human teams work today. Establish direction, implement solutions, and ensure quality and consistency. The difference is speed and scale.

The Future of Development

The most powerful development environments won’t be those that simply generate code fastest. They’ll be systems that understand your specific codebase deeply and adapt to your team’s unique approach.

When you begin a project, you’ll decide upfront what you want: TypeScript, specific linting rules, run on Cloudflare. But beyond these technical choices, your AI system will learn what makes your codebase unique.

Unlike the patterns found in open source repositories, your AI agents will understand the complete picture of your application. They’ll recognize that teams evolve their own concepts and styles that may not match external patterns.

This is the natural progression from today’s word-by-word generation to truly contextual development assistance.

It’s not just about writing code, it is about writing code that belongs.

The developer's path forward in the AI era

These last few quarters I’ve seen one CEO, CTO, or founder after another saying something to the effect of “AI will take your job, adapt or die”.

I used to have a great mp3 collection, you know, back in the days when it was almost considered legal.

It was an amazing feat to keep it up to date in Winamp and later iTunes, with correct titles and album art. Apps appeared that focused solely on this cataloguing, using online sources to update the metadata, called ID3.

Then came Spotify and changed that completely. The time spent curating was just sunk cost, and my well-catalogued mp3s are probably on an old dying hard drive somewhere in the house.

This is the reality we must face when it comes to AI and as a developer I see this so vividly.

What used to be “LLMs cannot code” became “LLMs write bad code” and now “LLMs write ok code”.

So we shift focus and find a new weak spot: LLMs cannot fix bugs. Or: I cannot understand the code they write.

But then again, the code I struggle to understand, an LLM can explain to me 99% of the time. The ability to both generate and explain complex code further diminishes our position as keepers of technical knowledge.

We move the focus to what it cannot do, until something comes along that changes the playing field: a new Spotify emerges and changes it all.

LLMs with the context memory of your complete repository. With knowledge of every commit and every connected ticket. Able to understand the reason behind each change, iteratively. You will not need to understand the code. It will need to understand the code.

You will need to understand what you want to achieve and why. This understanding of purpose becomes our core value proposition as technical professionals.

As AI agents emerge and become reliable, we will prompt them with: “Increase the conversion rate by 10%. These are our visitor numbers. Make small changes until you reach your goal or need my attention. You have one quarter.”

Why is this happening so quickly? There’s an economic incentive at play. Developer salaries, especially in Big Tech, have skyrocketed over the past decade. AI development may be running at a loss now, but it might be following the Starbucks playbook: crowd out the competition first, then raise prices. Companies investing billions in AI aren’t doing it for novelty. They’re eyeing the massive labor costs they could eliminate. Right now AI is marketed as an assistant, but the financial motivation to replace rather than augment is powerful.

So where does this leave us as developers? We need to move up the value chain. If algorithms can handle implementation, we must master the intention behind it. This brings us back to first principles thinking.

These are the first principles we aim at: Why do we build this service? What do the end users aim to solve?

This will be the hardest problem to understand and execute on. If my technical skills are not required as they used to be, what should I focus on instead?

History gives us a clue. In the past, key makers, cobblers and tailors used to be three separate jobs. As the market changed, their crafts were not needed to the same extent, but those who survived saw that with their dexterity they could adapt to new needs.

My view is that multiple technical crafts will join together, and our focus will shift toward the complete value chain rather than isolated implementation details. We’ll need to understand the whole picture to remain relevant.

Those first principles will not be: to write code.

AI images and brand awareness

How will we keep brand awareness when we use AI to generate our images?

I see the same style cartoons popping up across business social networks.

People generate fun images to pair with their messages. The visuals look cute but generic, completely off-brand and identical to everyone else’s. You might get some laughs. But nobody remembers which brand posted it.

Is that temporary engagement worth sacrificing your visual identity?

Anchoring bias

This happens when we become attached to the first thing we see. Then it becomes a struggle to think of other possibilities.

Do ship, don't polish

Have an instinct to ship. Not an instinct to polish.

I know it is scary, releasing a product in the wild.

Whatever the product is, a piece of music, a service, an idea, make an effort to ship fast to a minimum viable audience.

A viable audience means people who have a stake in or an interest in your product, not your friends or your family.

Shipping means being open to failure. You cannot fail, or succeed, without shipping; you just exist.

I’m also scared of shipping, probably especially when I’ve just built something for myself.

Doing or Approving

We assume change will happen to systems, but their natural state is to stay still, to never change.

We should not act surprised if they don’t change.

Right prototype for the right job

High-definition mockups build excitement and show stakeholders potential end results.

Functional prototypes let you iterate with real users and discover how they actually interact with your solution.

Both have their place. Choose based on what you need to learn or communicate.

Noise to Narrative

The way diffusion models are trained: start with an existing image tagged with what’s in it. Then add a small amount of noise in iterations, following a carefully designed schedule, until the image ends up as pure noise. The model learns to predict what the “less noisy” version of an image should look like at each step.

Then, when you want to create an image, you start from pure noise plus your prompt (i.e. the tag), and the model removes noise iteratively until you see an image. The text prompt is converted into a numerical representation that guides the denoising process.
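The forward and reverse passes can be illustrated with a toy example. This is not a real diffusion model: the "denoiser" here simply knows the clean target and nudges toward it, whereas a real model has to learn that prediction from data.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.linspace(0.0, 1.0, 16)   # stand-in for an image
steps = 10

# Forward process: add a little noise per step until the "image" is a mess.
x = clean.copy()
for _ in range(steps):
    x = x + rng.normal(0.0, 0.1, size=x.shape)
noisy = x.copy()

# Reverse process: each step produces a slightly "less noisy" version.
# Here the prediction just nudges toward the known clean target.
def denoise_step(x, predicted_clean, strength=0.3):
    return x + strength * (predicted_clean - x)

for _ in range(steps):
    x = denoise_step(x, clean)
```

After the reverse loop, `x` sits much closer to `clean` than `noisy` does, which is the whole trick: many small, learned denoising steps recover structure from noise.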

Think of this as when you try to learn something new. You have some hooks in your brain from before, you start to add context, new lessons, new information, and suddenly you see the area somewhat more clearly.

Most LLMs (ChatGPT, Mistral, Claude, Gemini, and so on) usually write like we humans do: one word after another.

Diffusion-based text models have started to appear. Think of them as working in reverse. You might input the last words, like the ending of a script, and then from complete noise the model starts adding words that make sense.

The model keeps refining this noisy text, adding more coherent words, until in the end we have something somewhat unique. This approach creates text quite differently from traditional word-by-word generation.
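The difference in generation order can be shown with a toy contrast. Both "models" here are fakes that already know the target sentence; what differs is the order in which positions get filled:

```python
import random

# Toy contrast, not real models: autoregressive generation grows text
# left to right, while diffusion-style generation starts from "noise"
# at every position and resolves positions in arbitrary order.
target = "to be or not to be".split()

def autoregressive_trace():
    out, trace = [], []
    for word in target:               # one word appended per step
        out.append(word)
        trace.append(" ".join(out))
    return trace

def diffusion_trace(seed=1):
    rng = random.Random(seed)
    out = ["<noise>"] * len(target)   # start from pure "noise"
    order = list(range(len(target)))
    rng.shuffle(order)                # positions resolve in any order
    trace = []
    for i in order:
        out[i] = target[i]
        trace.append(" ".join(out))
    return trace

a = autoregressive_trace()
d = diffusion_trace()
```

Printing the traces makes the point visible: the autoregressive trace only ever extends the right end, while the diffusion trace fills positions all over the sequence, which is why such a model can be conditioned on an ending as easily as a beginning.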

What does this mean for texts?

Since it is not constrained by left-to-right generation, the output can be more creative and surprising. It can also address coherence problems that sequential models struggle with across long passages.

In this way it mimics how creatives work. Moving from rough draft to refinement to polished work.

A future with multiple models that we can use in different parts of the creative flow is exciting.