From code owners to problem definers

The way we think about ownership in development might change.

Today, when you write code, other developers review it before it gets merged. They need to understand your logic, your variable names, your approach. Because they might need to work with this code later.

But when AI generates the code, what exactly are we reviewing?

Maybe ownership becomes about tests and requirements instead of implementations.

You own the test suite that defines what the system should do. You own the performance benchmarks. You own the specification that describes the problem we’re solving.

Someone else might run the same tests and get completely different generated code. That’s fine as long as it passes the same validation.

This changes everything. Instead of “Who wrote this function?” the question becomes “Who defined these requirements?” Instead of debugging someone’s implementation, you’re questioning whether the test suite covers the right scenarios.

The skill becomes problem definition rather than problem solving through code.

The holdout test

In machine learning, you hold back some data that the model never sees during training. This lets you check if the model actually learned the pattern or just memorized the examples.

We might need the same thing for AI-generated code.

Right now, most of our tests are “public”. The AI can see them, learn from them, and optimize for them. This works for basic functionality. But it creates a risk.

The AI might generate code that passes all your tests but doesn’t actually solve the problem. Like writing an if statement for every number between 1 and 2000 instead of using a proper algorithm.

The code technically works. It passes your tests. But it’s brittle and will break the moment you need to handle 2001.

So we need two types of tests. Public tests that guide the AI toward the right solution. And private tests that the AI never sees.

The private tests are your real validation. They test the business logic that actually matters. They try malformed inputs, edge cases, performance under load. They check whether the solution actually works, not just whether it responds correctly to known inputs.

This creates a new discipline. Someone needs to write these holdout tests. Someone who understands the domain deeply enough to know how the system might fail in the real world.

The AI helps with implementation. Humans focus on validation.

When code reviews become obsolete

We spend so much time in code reviews arguing about implementation details. Should this be a map or a loop? Why did you use this pattern instead of that one? Is this variable name clear enough?

These conversations made sense when humans were writing all the code. Understanding someone else’s implementation is crucial because you might need to maintain it, debug it, or extend it later.

But what happens when AI generates the code?

I can ask an AI agent to build an API that talks to a database, processes the data, and serves it to the frontend. The code works. The tests pass. But the implementation might be completely different each time I generate it.

Will this matter?

The conversation shifts. We’re not debating whether to use a for loop or a map. We’re debating whether we tested the edge case where users upload malformed data. Whether our performance benchmarks actually reflect real usage patterns.

Maybe developers will keep writing code that matters. But the code that controls what users see won’t be the implementation. It will be the tests.

The first five minutes

I printed 160 pages of Java documentation on my school printer in 1998. Sneakily. Downloaded the JDK over a 56k modem connection that took forever. Set everything up exactly as instructed.

Then I wrote my first line of code and hit run. Error message. Cryptic, broken English that meant nothing to me. So I uninstalled everything and downloaded it again, burning more precious bandwidth and time. Same error. Same confusion.

I gave up on Java that day.

Years later I tried PHP with the LAMP stack. Wrote some code, refreshed my browser, and it worked immediately.

When you’re new to something, you need feedback that tells you whether you’re moving in the right direction or completely lost. You need signals you can actually decode. Whether that feedback comes from the system responding to your code, a person explaining what went wrong, or even just seeing your idea work for the first time.

Without those clear signals early on, momentum dies before it ever builds.

Do the hard thing first

We avoid the hard thing.

We do the setup first. The research. The planning. The easy wins.

We tell ourselves we’re being strategic. Building momentum. Getting organized.

Really we’re just scared.

The hard thing is hard because we might fail at it. Because we don’t know how to do it yet. Because it requires us to learn something new or uncomfortable.

So we leave it for later. When we’re “ready.”

But later never comes. Or when it does, we’re already committed. The project has momentum. People are expecting results. Changing course feels impossible.

We’re pot committed to a solution that doesn’t solve the real problem.

Start with the thing we least want to do.

The uncomfortable conversation. The technical challenge you’ve never attempted. The skill you need to learn.

Do that first.

You might not be the audience

Sometimes people don’t want advice.

Sometimes people don’t want feedback.

Sometimes people don’t want to learn.

Sometimes people don’t want to get monetized.

And it’s okay. You might not be the audience.

The urge to help is natural. It’s also often misplaced.

This isn’t about them being closed-minded. It’s about you not being their audience.

The most generous thing you can do sometimes is witness someone’s experience without trying to change it.

You don’t have to be relevant to everyone.

Infinity is not in the quick wins

When you are aiming at being around much longer than Q4 next year, you should focus on the trends rather than quick wins.

Quick wins feel good in the moment but they rarely compound. Building something that lasts means accepting slower progress today for exponential returns tomorrow. The companies and people who outlast their competition aren’t chasing the next quarter’s numbers.

Tools shape how we aim to solve our problems

The tools we have shapes how we look at solving problems.

That’s why the same challenge gets completely different solutions depending on who’s tackling it.

Your voice is like a piano

I’ve spent countless hours watching Vinh Giang content.

One of his best insights is that you should use your voice like a piano.

That most people only use one key when they speak. Same tone. Same pitch. Same volume. It sounds monotone because it is monotone.

A piano has 88 keys for a reason. Learn to use them all.

Like a pianist knows exactly which key creates the feeling they want.

Your voice can do the same.

Willingness to belong

I used to hang out at this record store when I was younger. Buying cd after cd. Not because I needed them all but because each purchase felt like proof I belonged there. The synthesizer store was the same. I’d walk in and buy something just to have that moment of connection. To be part of the conversation. “What have you bought? Look at this awesome thing.”

We buy our way into communities. The transaction becomes the handshake. The receipt becomes the membership card.

Sometimes the willingness to belong costs more than we planned to spend. But maybe that’s the price of finding your people.