When the map becomes more dangerous than the territory

The rock felt loose under my fingers. With my daughter strapped to my back, every step along this Goa cliff face felt increasingly wrong. But the travel book had been so confident about this route. According to the guide, we should follow this exact path to reach an incredible viewpoint with excellent food options on the other side.

We had trusted that book completely. It was published by a major travel company, filled with detailed descriptions and clear directions. When we reached Baga, we followed its instructions precisely: cross to the opposite side of the small river, then follow the cliffside path upward. The book knew what it was talking about, right?

Except with each step, reality told a different story. The path felt unstable. The handholds seemed unreliable. The route that looked manageable on paper felt treacherous in person. Still, we kept going. The book said this was the way.

Until I reached that moment. One more step forward, and I genuinely believed my daughter and I might slip and fall into the water below. We turned around and retraced our steps to safety.

That book probably contained accurate information when it was written. But trails erode. Weather changes landscapes. Development alters access points. Rivers carve new paths. The book couldn’t update itself to reflect these realities, yet there we were, following instructions that no longer matched the terrain.

This happens everywhere. Medical practices persist decades after research proves them ineffective. Business strategies continue long after market conditions make them obsolete. Academic theories get repeated in textbooks years after new evidence contradicts them. We trust the source more than we trust our own direct experience of current conditions.

Every step felt wrong, but we kept going because the book seemed so certain. Physical danger made the stakes obvious, but usually the costs of following outdated information are subtler and take longer to reveal themselves.

Organizations cling to strategies that worked in previous decades. Investors follow conventional wisdom that no longer applies. People make career decisions based on advice from different economic realities. The guidance retains its authority even when circumstances have fundamentally changed.

Sometimes the smartest thing you can do is turn around before the map leads you over a cliff.

The death of tech theater

Watching yesterday’s Apple presentation reminded me what we’ve lost since the shift to presentations without live audiences.

For years now, tech companies have perfected the art of polished, pre-recorded presentations. Even when they return to live events, they’re chasing the wrong thing.

Steve Jobs knew something that today’s tech companies have forgotten. People don’t buy products. They buy feelings.

Remember when he pulled the MacBook Air out of a envelope? Pure theater. The setup, the anticipation, the reveal. He made thinness visible in a way no spec sheet ever could.

Compare that to today’s presentations. Beautiful production design, brilliant technology, but missing the spark.

Jobs presented to regular people who might fall in love with a product. Today’s presentations are aimed at developers, investors, and tech journalists.

Just watch how many times they mention specifications versus how many times they talk about what it feels like to use the product.

Jobs wasn’t just a great presenter. He was the last great tech evangelist who understood that selling technology means selling dreams.

Everyone since has been trying to sell features.

Losing my edge

There’s an LCD Soundsystem song called “Losing My Edge” about getting older and watching the next generation discover what you thought was yours.

My son is bouncing off the walls about tonight’s Apple event.

I remember being that person. Sixteen, seventeen years ago, clearing my calendar for every Steve Jobs keynote. Not just for the products, but for the theater. The way he’d pause before revealing something. How he sold feelings instead of specifications.

“Today, Apple is going to reinvent the phone.”

The confidence. The showmanship. The genuine surprise.

I watch my son look at youtube about the new rumoured Apple devices and feel something I can’t quite name. Pride, maybe. Or recognition. He’s discovering his own version of that electric feeling I used to chase.

But I’m not bitter about losing it.

You don’t stay a youth forever. You develop immunity to hype cycles. You’ve seen enough “revolutionary” products to know the difference between marketing and genuine breakthrough.

The edge I’m losing isn’t about being behind the curve.

It’s about having lived through enough cycles to see the patterns. To appreciate the craft without falling for the spell.

Maybe that’s not a loss at all.

The real-time PM

How could a potential future for a product manager working with a customizable B2B product look like? One that uses AI for benefit and for speed.

The PM sits in the customer meeting. Listens to the workflow they describe. Hears the specific pain points. Drafts the prompt for an MVP that solves it. Shows working solution before the meeting ends.

No more “let me check with engineering.” No more follow-up meetings. Just problem to solution in real-time.

Sales focuses on relationships and process. PMs focus on technical possibility. Engineers focus on platform, scale and well-defined documentation.

Start with bad

When we were small, we got praised for trying, exploring, being creative.

In school we learned there was a right answer and a wrong answer. Less room for creative responses.

Now in our jobs it’s perfect versus bad. We focus so much on perfect that we can’t get past the empty page.

Learn to be happy with bad. No one skipped straight from bad to good.​​​​​​​​​​​​​​​​

My brain has too many tabs open

I don’t know how many times I’ve walked past colleagues at different companies and seen their screens: twenty, thirty, sometimes forty browser tabs open. They’re frantically clicking through them trying to find the slides for the next meeting, the news article they wanted to read, the document they were working on this morning.

Then I see the meme: “My brain has too many tabs open.” And I realize we’re all doing the same thing with our thoughts.

We keep mental tabs open for everything: the idea we had in the shower, the thing we need to discuss with our manager, the grocery list, the project we should start next week.

This is exactly what David Allen was talking about in Getting Things Done. He called them “open loops” and your brain refuses to let go of them because it doesn’t trust that you’ll remember them otherwise.

Your colleague with forty browser tabs has the same problem you do with your racing thoughts. No trusted system for what matters and what doesn’t.

The solution isn’t better memory or more mental discipline. It’s building a system you trust enough that your brain will finally stop hoarding those tabs.

Close the loops.

From code owners to problem definers

The way we think about ownership in development might change.

Today, when you write code, other developers review it before it gets merged. They need to understand your logic, your variable names, your approach. Because they might need to work with this code later.

But when AI generates the code, what exactly are we reviewing?

Maybe ownership becomes about tests and requirements instead of implementations.

You own the test suite that defines what the system should do. You own the performance benchmarks. You own the specification that describes the problem we’re solving.

Someone else might run the same tests and get completely different generated code. That’s fine as long as it passes the same validation.

This changes everything. Instead of “Who wrote this function?” the question becomes “Who defined these requirements?” Instead of debugging someone’s implementation, you’re questioning whether the test suite covers the right scenarios.

The skill becomes problem definition rather than problem solving through code.

The holdout test

In machine learning, you hold back some data that the model never sees during training. This lets you check if the model actually learned the pattern or just memorized the examples.

We might need the same thing for AI-generated code.

Right now, most of our tests are “public”. The AI can see them, learn from them, and optimize for them. This works for basic functionality. But it creates a risk.

The AI might generate code that passes all your tests but doesn’t actually solve the problem. Like writing an if statement for every number between 1 and 2000 instead of using a proper algorithm.

The code technically works. It passes your tests. But it’s brittle and will break the moment you need to handle 2001.

So we need two types of tests. Public tests that guide the AI toward the right solution. And private tests that the AI never sees.

The private tests are your real validation. They test the business logic that actually matters. They try malformed inputs, edge cases, performance under load. They check whether the solution actually works, not just whether it responds correctly to known inputs.

This creates a new discipline. Someone needs to write these holdout tests. Someone who understands the domain deeply enough to know how the system might fail in the real world.

The AI helps with implementation. Humans focus on validation.

When code reviews become obsolete

We spend so much time in code reviews arguing about implementation details. Should this be a map or a loop? Why did you use this pattern instead of that one? Is this variable name clear enough?

These conversations made sense when humans were writing all the code. Understanding someone else’s implementation is crucial because you might need to maintain it, debug it, or extend it later.

But what happens when AI generates the code?

I can ask an AI agent to build an API that talks to a database, processes the data, and serves it to the frontend. The code works. The tests pass. But the implementation might be completely different each time I generate it.

Will this matter?

The conversation shifts. We’re not debating whether to use a for loop or a map. We’re debating whether we tested the edge case where users upload malformed data. Whether our performance benchmarks actually reflect real usage patterns.

Maybe developers will keep writing code that matters. But the code that controls what users see won’t be the implementation. It will be the tests.

The first five minutes

I printed 160 pages of Java documentation on my school printer in 1998. Sneakily. Downloaded the JDK over a 56k modem connection that took forever. Set everything up exactly as instructed.

Then I wrote my first line of code and hit run. Error message. Cryptic, broken English that meant nothing to me. So I uninstalled everything and downloaded it again, burning more precious bandwidth and time. Same error. Same confusion.

I gave up on Java that day.

Years later I tried PHP with the LAMP stack. Wrote some code, refreshed my browser, and it worked immediately.

When you’re new to something, you need feedback that tells you whether you’re moving in the right direction or completely lost. You need signals you can actually decode. Whether that feedback comes from the system responding to your code, a person explaining what went wrong, or even just seeing your idea work for the first time.

Without those clear signals early on, momentum dies before it ever builds.