Magic of getting different answers

I’ve been fighting non-determinism my entire career as a developer. Race conditions. Timing issues. Environmental dependencies. “Works on my machine” has driven me borderline crazy, on the verge of throwing the computer out the window.

I write deterministic code that for some odd reason acts unpredictably anyway.

So when generative AI shows up giving a different essay every time you ask the same question, I’m not thrown by it. I already have the mental scaffolding for unpredictable systems.

Most people spent decades learning that computers do exactly what you tell them. Press save. It saves. Run the command again. Same result. Then AI arrives and suddenly feels like magic.

But it turns out most developers aren’t fazed by this. We’re already trauma-bonded with systems that should be predictable but aren’t.

Most people think: computer does what I tell it to do.

Developers think: computer usually does what I tell it to do, except when the database has opinions.

We’ve all filed bug reports that start with “sometimes” or “intermittently.” Trying to reproduce something that works fine on Tuesday but fails on Wednesday for reasons nobody can explain.

For once, the non-determinism isn’t a bug to fix. It’s working as intended.

After spending years hunting down every source of randomness in our systems, we’re now building tools where randomness is the core feature.

So the thing everyone else finds magical is what we’ve been trying to eliminate for decades.

Users know problems

User research shouldn’t directly dictate what you build. It teaches you how customers think and what problems they face.

Research reveals the actual problem. Your job is crafting the right solution, which often looks completely different from what users initially described.

Three barriers to trying something again

There are two types of people.

One tries things once. If it sucks, they never try again.

The other keeps tinkering. They investigate, they experiment, they come back.

You see this everywhere but it’s especially visible with generative AI right now. Someone tried it two years ago, it gave mediocre answers, and they decided it wasn’t for them. Meanwhile the technology improved dramatically. The tinkerers kept going. They learned how to write better prompts, figured out which tasks actually worked, built intuition.

Now there’s this divide. The difference isn’t intelligence. It’s about clearing three specific barriers.

The first is capacity. Tinkering requires slack. If you’re already drowning, you can’t pause to learn how to swim better.

The second is mindset. People with a fixed mindset avoid situations where they might look stupid or feel incompetent. Playing with new tools when you don’t know what you’re doing feels vulnerable. Nothing works the way you expect. Growth mindset people are comfortable being temporarily bad at something. They expect the learning curve. The first prompt doesn’t work? Try again.

The third is curiosity. Some people are wired to investigate, others aren’t. This isn’t about being smarter or more ambitious. It’s temperament. Tinkerers see something broken or confusing and think “interesting.” Other people think “frustrating” and move on.

The people who cleared those three barriers two years ago now have a completely different relationship with these tools.

If you want to get there, start small. Find 20 minutes. Talk to your boss about creating space for experimentation. Ask your peers who use these tools how they actually use them. Have those conversations during lunch. The gap isn’t permanent. You just need to decide to cross it.

The goals are rarely the goal

A stated goal often serves as an organizing principle to get people moving in roughly the same direction rather than being the actual destination.

As soon as you arrive at that destination, a new goalpost appears.

How can we get this in the hands of users? How can we get them to use it more often? What don’t they like? What do they like?

9 Kilometers to Acid

This is the three hundred and third post.

Summer 1997. I’d been at Hultsfred, saw Fatboy Slim DJing, The Prodigy on stage, Daft Punk’s 1997 gig. That sound was inside me now. The 303 squelch. The acid bass. I needed to make it myself.

Around 2-3 in the morning, I remember seeing Minimalisterna playing some sort of app on a big computer. This interface I couldn’t quite make out. Didn’t think much of it then.

Then during summer, I’m flipping through Studio, a Swedish music magazine, and I turn a page. There it is. Rebirth.

A Roland TB-303 clone. But in the computer.

It looked amazing. Pure interface. Two TB-303 bass synths. The 808 and 909 drum machines. You had access to all four.

The price was slightly below 1,500 Swedish kronor. Around $150. I looked at it and thought this can’t be real.

I called the music shop downtown. They had it in store but they were closing in half an hour.

I grabbed my bike and pedaled almost 9 kilometers as fast as I could. Made it. Bought it. Took it home.

I installed it and just stared at the interface, then started turning knobs.

Two bass engines. Two drum machines. Pattern sequencers that let you build these elaborate loops. I hardly knew what I was doing but I could make acid. That sound from Hultsfred. That sound from “Everybody Needs a 303.” Simple. Pure. Fun.

The real hardware would have cost tens of thousands of kronor, but this was everything, synced up perfectly, running together in one program.

No manual needed. No synthesis theory. Just twist the cutoff and resonance and hear that squelch come alive.

Propellerhead made software feel like playing an instrument. Not like operating a tool.

That bike ride home with the diskette in my bag, that anticipation, that certainty something was about to change. That’s what good software should feel like.

Same skills, different rules

My first boss assumed I had everything memorized. He’d watch me code during the day and nod approvingly. What he didn’t know was that I wrote most of it at home the night before and brought it to the office on diskette. We had no internet at the office.

I wasn’t exactly honest about this arrangement.

But that’s the thing. Development isn’t about memorizing syntax. It’s about knowing what exists and how to adapt it. The real skill is pattern matching and finding solutions, not being a walking encyclopedia.

He was stuck in school mode. In school, you’re expected to know everything off the top of your head. In business, you should be valued for knowing how to find the right answer.

Same skills, different rules. Most people never make the switch.

(That job isn’t on my resume anymore.)

The vibe that led me astray

Fresh off optimizing our cache system with Claude Code, I spotted another performance issue. A series of .replace() calls using regular expressions that looked perfect for improvement. Same pattern as before: inefficient code, clear bottleneck, easy win.

I made assumptions about the text processing volume and didn’t dig deeper. Claude and I wrote tests. All green. Claude rewrote the code to be faster and less memory intensive.

I told the team we were good to deploy.

Immediate failures in production.

The Cycle

Add the failing case as a test. Iterate. Deploy. Fail again. Add another test. Iterate. Deploy. Fail again.

Each time the old code passed and the new code didn’t.

After the third round and probably eight hours of my team’s time wasted, something clicked.

The Realization

When AI rewrites code to be “better,” it optimizes for passing the tests you gave it. The test suite represents your understanding of the problem, not the problem itself.
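A minimal sketch of what that looks like in practice (hypothetical names, not the actual code from the incident): the “optimized” rewrite passes the exact same test as the verbose original, while silently dropping an edge case the test suite never encoded.

```python
# Original: a verbose chain of replaces that quietly handles an edge case.
def normalize(s):
    s = s.replace("\r\n", "\n")
    s = s.replace("\r", "\n")  # the "unnecessary" line: lone carriage returns
    return s

# "Optimized" rewrite: shorter, and it passes the existing test.
def normalize_fast(s):
    return s.replace("\r\n", "\n")

# The only test in the suite. Both versions go green.
assert normalize("a\r\nb") == "a\nb"
assert normalize_fast("a\r\nb") == "a\nb"

# But old Mac-style line endings expose the regression:
assert normalize("a\rb") == "a\nb"       # old code: still correct
assert normalize_fast("a\rb") == "a\rb"  # new code: edge case dropped
```

The rewrite is “better” by every measure the tests can see. The tests just can’t see very much.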

The original developer had achieved 100% test coverage. This gave me confidence that all cases were handled. But 100% coverage means you’ve executed all code paths, not that you’ve tested all scenarios. Those green checkmarks just confirm every line ran during testing, not that every real-world case got validated.
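The coverage trap is easy to reproduce. In this made-up sketch, a single test executes every line of the function, so a coverage tool reports 100%, yet an input the test never fed in sails straight through untested:

```python
import re

def clean(text):
    # Strip tag-like markup, then collapse runs of spaces.
    text = re.sub(r"<[^>]*>", "", text)
    return re.sub(r" +", " ", text).strip()

# This one test runs every line of clean(): 100% line coverage.
assert clean("<b>hi</b>  there") == "hi there"

# Coverage says nothing about scenarios the test never exercised.
# Newlines between words are untouched by " +" and survive:
assert clean("hi\n\nthere") == "hi\n\nthere"
```

Every line ran. One scenario got validated.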

The old regex code wasn’t elegant. But it was working. All those seemingly unnecessary parts were handling edge cases the test suite never captured.

What I Missed

The cache optimization worked because I understood the system deeply first. With the regex code, I saw green tests and assumed I understood. I didn’t ask why the code was structured that way. I treated it like the cache problem because the surface looked similar.

My team trusted my judgment. I wasted their time because I confused a repeat pattern with repeat circumstances.

The Lesson

Before asking AI to improve code, understand why it works in its current form. Not just what it does, but what invisible constraints it’s solving for. Test coverage measures code execution, not correctness. And one successful optimization doesn’t guarantee the next will follow the same path.

The side quest effect

When I started learning how to code, the tutorials would lay out the main path. Do this exercise. Learn this concept. Move to the next lesson.

But I never really understood anything until I started asking what if. What if I changed this variable? What if I tried connecting these two things the tutorial kept separate?

Sometimes I’d create an endless loop and crash everything. Sometimes I’d build something surprisingly cool.

Either way, I learned more in those side quests than following the tutorial step by step.

Eventually I’d need to get back on track, so I’d save whatever I’d made and return to the official exercise.

But that code I stashed away, the stuff that wasn’t part of the lesson plan, that’s what actually taught me how things worked. The main quest gave me structure. The side quests gave me understanding.

Finding the story

Yesterday I was stuck with the daily post. So I took a new approach. I asked my LLM to look at my last 10 posts and craft three wildly diverse questions to spark my next one. Not to write my next one, but to spark ideas.

It came up with three suggestions. Instead of suggesting topics or themes, it analyzed my writing patterns and asked questions that would make me think differently. Not “write about leadership” but “What invisible skill did you develop as a kid that still shapes how you solve problems as an adult?”

That question unlocked the first-grade prison game story. A moment I hadn’t thought about in years but that explains how I approach problems now.

Most AI writing tools try to replace the writer. This felt more like having a coach who asks good questions and gets you started. The process showed me something about creativity. Sometimes the block isn’t that you don’t have ideas. It’s that you don’t have the right question to unlock them.

The prison game

First grade. We’re debating how long someone should stay in “prison” during our recess game. Kids throwing out numbers, negotiating, slowly settling on something that sounds fair.

I’m sitting back, listening. After they land on their decision, I raise my hand.

“But if you get released and immediately get thrown back in jail, you’ll be in prison most of recess.”

The whole class goes quiet. But what I remember most vividly is my teacher’s response. She didn’t just acknowledge that I was right. She nodded with this look of approval that I had not experienced from an adult before.

That nod mattered more than the insight itself.

Decades later, an agile coach pulls me aside after a meeting. “I could see you spotted something but didn’t intervene. What happened?”

I’d learned to stay quiet by then. “I thought they might take offense.”

Instead of letting it slide, he taught me about questions. How to create space for people to discover rather than being told they’re wrong. How timing and framing matter as much as the observation itself.

Both saw something worth developing. They recognized that sitting back and seeing patterns isn’t just a personality quirk. It’s useful. But only if you know how to surface what you’re seeing.

The teacher encouraged the instinct to speak up when something doesn’t add up. The coach taught me how to do it skillfully.

Most people miss this completely. They see someone quiet in meetings and assume disengagement. They mistake observation for passivity. But the right mentors recognize when someone’s processing differently and help them turn that into something valuable.

The world needs people who spot the problem behind the problem. But it also needs people who can recognize that pattern and teach others how to use it well.