Three barriers to trying something again

There are two types of people.

One tries things once. If it sucks, they never try again.

The other keeps tinkering. They investigate, they experiment, they come back.

You see this everywhere, but it’s especially visible with generative AI right now. Someone tried it two years ago, it gave mediocre answers, and they decided it wasn’t for them. Meanwhile the technology improved dramatically. The tinkerers kept going. They learned how to write better prompts, figured out which tasks actually worked, built intuition.

Now there’s this divide. The difference isn’t intelligence. It’s whether you’ve cleared three specific barriers.

The first is capacity. Tinkering requires slack. If you’re already drowning, you can’t pause to learn how to swim better.

The second is mindset. People with a fixed mindset avoid situations where they might look stupid or feel incompetent. Playing with new tools when you don’t know what you’re doing feels vulnerable. Nothing works the way you expect. Growth mindset people are comfortable being temporarily bad at something. They expect the learning curve. The first prompt doesn’t work? Try again.

The third is curiosity. Some people are wired to investigate, others aren’t. This isn’t about being smarter or more ambitious. It’s temperament. Tinkerers see something broken or confusing and think “interesting.” Other people think “frustrating” and move on.

The people who cleared those three barriers two years ago now have a completely different relationship with these tools.

If you want to get there, start small. Find 20 minutes. Talk to your boss about creating space for experimentation. Ask your peers who use these tools how they actually use them. Have those conversations during lunch. The gap isn’t permanent. You just need to decide to cross it.

The goals are rarely the goal

A stated goal is often less the actual destination than an organizing principle: something to get people moving in roughly the same direction.

As soon as you arrive at that destination, a new goalpost appears.

How can we get this into the hands of users? How can we get them to use it more often? What don’t they like? What do they like?

9 Kilometers to Acid

This is the three hundred and third post.

Summer 1997. I’d been at Hultsfred and seen Fatboy Slim DJing, The Prodigy on stage, Daft Punk live. That sound was inside me now. The 303 squelch. The acid bass. I needed to make it myself.

Around two or three in the morning, I remember seeing Minimalisterna playing some sort of program on a big computer, an interface I couldn’t quite make out. I didn’t think much of it then.

Later that summer, I’m flipping through Studio, a Swedish music magazine, and I turn a page. There it is. ReBirth.

A Roland TB-303 clone. But in the computer.

It looked amazing. Pure interface. Two TB-303 bass synths. The 808 and 909 drum machines. You had access to all four.

The price was slightly below 1,500 Swedish kronor. Around $150. I looked at it and thought this can’t be real.

I called the music shop downtown. They had it in store but they were closing in half an hour.

I grabbed my bike and pedaled almost 9 kilometers as fast as I could. Made it. Bought it. Took it home.

I installed it and just stared at the interface, then started turning knobs.

Two bass engines. Two drum machines. Pattern sequencers that let you build these elaborate loops. I hardly knew what I was doing but I could make acid. That sound from Hultsfred. That sound from “Everybody Needs a 303.” Simple. Pure. Fun.

The real hardware would have cost tens of thousands of kronor, but this was everything, synced up perfectly, running together in one program.

No manual needed. No synthesis theory. Just twist the cutoff and resonance and hear that squelch come alive.

Propellerhead made software feel like playing an instrument. Not like operating a tool.

That bike ride home with the diskette in my bag, that anticipation, that certainty something was about to change. That’s what good software should feel like.

Same skills, different rules

My first boss assumed I had everything memorized. He’d watch me code during the day and nod approvingly. What he didn’t know was that I wrote most of it at home the night before and brought it to the office on diskette. We had no internet at the office.

I wasn’t exactly honest about this arrangement.

But that’s the thing. Development isn’t about memorizing syntax. It’s about knowing what exists and how to adapt it. The real skill is pattern matching and finding solutions, not being a walking encyclopedia.

He was stuck in school mode. In school, you’re expected to know everything off the top of your head. In business, you should be valued for knowing how to find the right answer.

Same skills, different rules. Most people never make the switch.

(That job isn’t on my resume anymore.)

The vibe that led me astray

Fresh off optimizing our cache system with Claude Code, I spotted another performance issue. A series of .replace() calls using regular expressions that looked ripe for optimization. Same pattern as before: inefficient code, clear bottleneck, easy win.

I made assumptions about the text processing volume and didn’t dig deeper. Claude and I wrote tests. All green. Claude rewrote the code to be faster and less memory intensive.

I told the team we were good to deploy.

Immediate failures in production.

The Cycle

Add the failing case as a test. Iterate. Deploy. Fail again. Add another test. Iterate. Deploy. Fail again.

Each time the old code passed and the new code didn’t.

After the third round and probably eight hours of my team’s time wasted, something clicked.

The Realization

When AI rewrites code to be “better,” it optimizes for passing the tests you gave it. The test suite represents your understanding of the problem, not the problem itself.

The original developer had achieved 100% test coverage. This gave me confidence that all cases were handled. But 100% coverage means you’ve executed all code paths, not that you’ve tested all scenarios. Those green checkmarks just confirm every line ran during testing, not that every real-world case got validated.

The old regex code wasn’t elegant. But it was working. All those seemingly unnecessary parts were handling edge cases the test suite never captured.
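
Here’s a minimal sketch of the shape of that trap (illustrative, not the actual code):

```typescript
// Illustrative only. The original: three chained replaces, where the
// middle one looks redundant.
function normalizeOriginal(input: string): string {
  return input
    .replace(/\r\n/g, "\n")      // normalize Windows line endings
    .replace(/[ \t]+\n/g, "\n")  // strip trailing whitespace before newlines
    .replace(/\n{3,}/g, "\n\n"); // collapse runs of blank lines
}

// The rewrite: one fewer pass, "faster", and it drops the step that
// looked unnecessary.
function normalizeOptimized(input: string): string {
  return input
    .replace(/\r\n/g, "\n")
    .replace(/\n{3,}/g, "\n\n");
}

// These tests execute every line of both functions: 100% coverage,
// all green.
console.assert(normalizeOriginal("a\nb\n\n\n\nc") === "a\nb\n\nc");
console.assert(normalizeOptimized("a\nb\n\n\n\nc") === "a\nb\n\nc");

// Production input the suite never captured: blank lines carrying
// trailing spaces. The old code collapses them; the rewrite doesn't.
console.assert(normalizeOriginal("a  \n\n  \n\nb") === "a\n\nb");
console.assert(normalizeOptimized("a  \n\n  \n\nb") === "a\n\nb"); // fails
```

Both versions pass the original suite with 100% coverage. Only one survives contact with real input.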

What I Missed

The cache optimization worked because I understood the system deeply first. With the regex code, I saw green tests and assumed that I understood. I didn’t ask why the code was structured that way. I treated it like the cache problem because the surface looked similar.

My team trusted my judgment. I wasted their time because I confused a repeat pattern with repeat circumstances.

The Lesson

Before asking AI to improve code, understand why it works in its current form. Not just what it does, but what invisible constraints it’s solving for. Test coverage measures code execution, not correctness. And one successful optimization doesn’t guarantee the next will follow the same path.

The side quest effect

When I started learning how to code, the tutorials would lay out the main path. Do this exercise. Learn this concept. Move to the next lesson.

But I never really understood anything until I started asking what if. What if I changed this variable? What if I tried connecting these two things the tutorial kept separate?

Sometimes I’d create an endless loop and crash everything. Sometimes I’d build something surprisingly cool.

Either way, I learned more in those side quests than following the tutorial step by step.

Eventually I’d need to get back on track, so I’d save whatever I’d made and return to the official exercise.

But that code I stashed away, the stuff that wasn’t part of the lesson plan, that’s what actually taught me how things worked. The main quest gave me structure. The side quests gave me understanding.

Finding the story

Yesterday I was stuck on the daily post. So I took a new approach: I asked my LLM to look at my last 10 posts and craft three wildly diverse questions. Not to write the next post for me, just to spark ideas.
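
The shape of the ask, roughly (not the exact prompt):

```
Here are my last 10 posts. Don't suggest topics and don't write
anything for me. Analyze the patterns in how I write and think,
then ask me three wildly different questions, each pushing
somewhere the previous posts haven't gone.
```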

It came back with three questions. Instead of suggesting topics or themes, it analyzed my writing patterns and asked things that would make me think differently. Not “write about leadership” but “What invisible skill did you develop as a kid that still shapes how you solve problems as an adult?”

That question unlocked the first-grade prison game story. A moment I hadn’t thought about in years but that explains how I approach problems now.

Most AI writing tools try to replace the writer. This felt more like having a coach who asks good questions and gets you started. The process showed me something about creativity. Sometimes the block isn’t that you don’t have ideas. It’s that you don’t have the right question to unlock them.

The prison game

First grade. We’re debating how long someone should stay in “prison” during our recess game. Kids throwing out numbers, negotiating, slowly settling on something that sounds fair.

I’m sitting back, listening. After they land on their decision, I raise my hand.

“But if you get released and immediately get thrown back in jail, you’ll be in prison most of recess.”

The whole class goes quiet. But what I remember most vividly is my teacher’s response. She didn’t just acknowledge that I was right. She nodded with this look of approval that I had not experienced from an adult before.

That nod mattered more than the insight itself.

Decades later, an agile coach pulls me aside after a meeting. “I could see you spotted something but didn’t intervene. What happened?”

I’d learned to stay quiet by then. “I thought they might take offense.”

Instead of letting it slide, he taught me about questions. How to create space for people to discover rather than being told they’re wrong. How timing and framing matter as much as the observation itself.

Both saw something worth developing. They recognized that sitting back and seeing patterns isn’t just a personality quirk. It’s useful. But only if you know how to surface what you’re seeing.

The teacher encouraged the instinct to speak up when something doesn’t add up. The coach taught me how to do it skillfully.

Most people miss this completely. They see someone quiet in meetings and assume disengagement. They mistake observation for passivity. But the right mentors recognize when someone’s processing differently and help them turn that into something valuable.

The world needs people who spot the problem behind the problem. But it also needs people who can recognize that pattern and teach others how to use it well.

When good ideas don't rise to the top

At an early startup in the late 2000s, we had this person who saw everything that needed fixing. Process gaps, product improvements, things no one else noticed. She’d make these detailed slides showing exactly what should change and why.

She wasn’t a product owner or VP. Just someone who cared deeply about making things better. When she showed me her work, I’d think “yeah, makes sense” and go back to executing. I wasn’t interested in strategy. I just wanted to ship.

Her ideas went nowhere.

I think about this from time to time. Should I have acted differently? Did the company really have that kind of psychological safety (the term barely existed yet)? I was one of the people writing the code someone else told us to write.

As I grew into seniority, I gradually understood that ideas and viewpoints were expected of me. Not just execution.

Startups sell this mythology that good ideas rise to the top. That flat structures mean everyone’s voice matters. But she was doing real strategic thinking that should have been someone’s full-time job, and it just evaporated because she didn’t have the title.

The waste wasn’t just her time. It was that the company actually needed what she was seeing. Someone should have been paying attention. Instead she kept making slides that disappeared into the void while people like me stayed focused on our own work.

The end of forms

A doctor sits across from a patient discussing symptoms. An AI listens. When the doctor turns to open the patient record, there’s no traditional interface waiting.

No tabs for “Current Medications” or “Previous Visits” or “Lab Results.” No forms to fill out. The screen shows exactly what this conversation needs. Lab results from last month because the patient mentioned ongoing fatigue. Previous notes about the shoulder injury because they just described similar pain.

The doctor says “show me the last few sessions as a summary.” The AI adds it to the view. Everything relevant, nothing extra.

During the appointment, the AI has been taking notes. Not transcribing everything, but capturing the medical decisions. When it’s time to document, the AI presents what it understood and asks about what it didn’t. “You mentioned adjusting the dosage but I didn’t catch the new amount.” “Should I schedule a follow-up for the chest X-ray results?”

The doctor confirms, corrects and adds context. Once. The same information doesn’t get entered in three different systems or copied across multiple forms.

The interface becomes a conversation about what happened rather than a data entry task. The AI handles the busywork of figuring out which fields need updating, which systems need notifications, which follow-ups need scheduling.
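
One way to picture the plumbing underneath (a hypothetical sketch; none of these names come from a real product): the appointment ends in one structured summary the doctor confirms, and the system fans the confirmed updates out to whatever needs them.

```typescript
// Hypothetical types: what the AI might emit instead of the doctor
// re-entering the same data into three systems.
interface FieldUpdate {
  system: string; // which downstream record this touches
  field: string;
  value: string;
}

interface VisitSummary {
  updates: FieldUpdate[];   // fields the AI believes changed
  clarifications: string[]; // gaps it asks about, once
  followUps: string[];      // scheduling it can handle itself
}

// The doctor reviews one summary; confirmed updates fan out to every
// system that needs them, instead of being typed three times.
const visit: VisitSummary = {
  updates: [
    { system: "chart", field: "notes.shoulder", value: "recurring pain, similar to prior injury" },
  ],
  clarifications: [
    "You mentioned adjusting the dosage but I didn't catch the new amount.",
  ],
  followUps: ["Schedule review of chest X-ray results"],
};

console.log(JSON.stringify(visit, null, 2));
```

The review replaces the data entry. The fan-out replaces the copying.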

This isn’t about replacing doctors with AI. It’s about replacing forms with intelligence.

The building blocks for this future are already here. Anthropic is currently testing “Imagine with Claude,” where Claude generates software interfaces on the fly.

We’re moving toward a world where software doesn’t just respond to users. It imagines itself into existence around them.