The real-time PM

How could a potential future for a product manager working with a customizable B2B product look like? One that uses AI for benefit and for speed.

The PM sits in the customer meeting. Listens to the workflow they describe. Hears the specific pain points. Drafts the prompt for an MVP that solves it. Shows working solution before the meeting ends.

No more “let me check with engineering.” No more follow-up meetings. Just problem to solution in real-time.

Sales focuses on relationships and process. PMs focus on technical possibility. Engineers focus on platform, scale and well-defined documentation.

Start with bad

When we were small, we got praised for trying, exploring, being creative.

In school we learned there was a right answer and a wrong answer. Less room for creative responses.

Now in our jobs it’s perfect versus bad. We focus so much on perfect that we can’t get past the empty page.

Learn to be happy with bad. No one skipped straight from bad to good.​​​​​​​​​​​​​​​​

My brain has too many tabs open

I don’t know how many times I’ve walked past colleagues at different companies and seen their screens: twenty, thirty, sometimes forty browser tabs open. They’re frantically clicking through them trying to find the slides for the next meeting, the news article they wanted to read, the document they were working on this morning.

Then I see the meme: “My brain has too many tabs open.” And I realize we’re all doing the same thing with our thoughts.

We keep mental tabs open for everything: the idea we had in the shower, the thing we need to discuss with our manager, the grocery list, the project we should start next week.

This is exactly what David Allen was talking about in Getting Things Done. He called them “open loops” and your brain refuses to let go of them because it doesn’t trust that you’ll remember them otherwise.

Your colleague with forty browser tabs has the same problem you do with your racing thoughts. No trusted system for what matters and what doesn’t.

The solution isn’t better memory or more mental discipline. It’s building a system you trust enough that your brain will finally stop hoarding those tabs.

Close the loops.

From code owners to problem definers

The way we think about ownership in development might change.

Today, when you write code, other developers review it before it gets merged. They need to understand your logic, your variable names, your approach. Because they might need to work with this code later.

But when AI generates the code, what exactly are we reviewing?

Maybe ownership becomes about tests and requirements instead of implementations.

You own the test suite that defines what the system should do. You own the performance benchmarks. You own the specification that describes the problem we’re solving.

Someone else might run the same tests and get completely different generated code. That’s fine as long as it passes the same validation.

This changes everything. Instead of “Who wrote this function?” the question becomes “Who defined these requirements?” Instead of debugging someone’s implementation, you’re questioning whether the test suite covers the right scenarios.

The skill becomes problem definition rather than problem solving through code.

The holdout test

In machine learning, you hold back some data that the model never sees during training. This lets you check if the model actually learned the pattern or just memorized the examples.

We might need the same thing for AI-generated code.

Right now, most of our tests are “public”. The AI can see them, learn from them, and optimize for them. This works for basic functionality. But it creates a risk.

The AI might generate code that passes all your tests but doesn’t actually solve the problem. Like writing an if statement for every number between 1 and 2000 instead of using a proper algorithm.

The code technically works. It passes your tests. But it’s brittle and will break the moment you need to handle 2001.

So we need two types of tests. Public tests that guide the AI toward the right solution. And private tests that the AI never sees.

The private tests are your real validation. They test the business logic that actually matters. They try malformed inputs, edge cases, performance under load. They check whether the solution actually works, not just whether it responds correctly to known inputs.

This creates a new discipline. Someone needs to write these holdout tests. Someone who understands the domain deeply enough to know how the system might fail in the real world.

The AI helps with implementation. Humans focus on validation.

When code reviews become obsolete

We spend so much time in code reviews arguing about implementation details. Should this be a map or a loop? Why did you use this pattern instead of that one? Is this variable name clear enough?

These conversations made sense when humans were writing all the code. Understanding someone else’s implementation is crucial because you might need to maintain it, debug it, or extend it later.

But what happens when AI generates the code?

I can ask an AI agent to build an API that talks to a database, processes the data, and serves it to the frontend. The code works. The tests pass. But the implementation might be completely different each time I generate it.

Will this matter?

The conversation shifts. We’re not debating whether to use a for loop or a map. We’re debating whether we tested the edge case where users upload malformed data. Whether our performance benchmarks actually reflect real usage patterns.

Maybe developers will keep writing code that matters. But the code that controls what users see won’t be the implementation. It will be the tests.

The first five minutes

I printed 160 pages of Java documentation on my school printer in 1998. Sneakily. Downloaded the JDK over a 56k modem connection that took forever. Set everything up exactly as instructed.

Then I wrote my first line of code and hit run. Error message. Cryptic, broken English that meant nothing to me. So I uninstalled everything and downloaded it again, burning more precious bandwidth and time. Same error. Same confusion.

I gave up on Java that day.

Years later I tried PHP with the LAMP stack. Wrote some code, refreshed my browser, and it worked immediately.

When you’re new to something, you need feedback that tells you whether you’re moving in the right direction or completely lost. You need signals you can actually decode. Whether that feedback comes from the system responding to your code, a person explaining what went wrong, or even just seeing your idea work for the first time.

Without those clear signals early on, momentum dies before it ever builds.

Do the hard thing first

We avoid the hard thing.

We do the setup first. The research. The planning. The easy wins.

We tell ourselves we’re being strategic. Building momentum. Getting organized.

Really we’re just scared.

The hard thing is hard because we might fail at it. Because we don’t know how to do it yet. Because it requires us to learn something new or uncomfortable.

So we leave it for later. When we’re “ready.”

But later never comes. Or when it does, we’re already committed. The project has momentum. People are expecting results. Changing course feels impossible.

We’re pot committed to a solution that doesn’t solve the real problem.

Start with the thing we least want to do.

The uncomfortable conversation. The technical challenge you’ve never attempted. The skill you need to learn.

Do that first.

You might not be the audience

Sometimes people don’t want advice.

Sometimes people don’t want feedback.

Sometimes people don’t want to learn.

Sometimes people don’t want to get monetized.

And it’s okay. You might not be the audience.

The urge to help is natural. It’s also often misplaced.

This isn’t about them being closed-minded. It’s about you not being their audience.

The most generous thing you can do sometimes is witness someone’s experience without trying to change it.

You don’t have to be relevant to everyone.

Infinity is not in the quick wins

When you are aiming at being around much longer than Q4 next year, you should focus on the trends rather than quick wins.

Quick wins feel good in the moment but they rarely compound. Building something that lasts means accepting slower progress today for exponential returns tomorrow. The companies and people who outlast their competition aren’t chasing the next quarter’s numbers.