9 Kilometers to Acid

This is the three hundred and third post.

Summer 1997. I’d been at Hultsfred, saw Fatboy Slim DJing, The Prodigy on stage, Daft Punk’s 1997 gig. That sound was inside me now. The 303 squelch. The acid bass. I needed to make it myself.

Around 2 or 3 in the morning, I remember seeing Minimalisterna playing some sort of app on a big computer, an interface I couldn’t quite make out. Didn’t think much of it then.

Then during the summer, I’m flipping through Studio, a Swedish music magazine, and I turn a page. There it is. ReBirth.

A Roland TB-303 clone. But in the computer.

It looked amazing. Pure interface. Two TB-303 bass synths. The 808 and 909 drum machines. You had access to all four.

The price was slightly below 1,500 Swedish kronor. Around $150. I looked at it and thought this can’t be real.

I called the music shop downtown. They had it in store but they were closing in half an hour.

I grabbed my bike and pedaled almost 9 kilometers as fast as I could. Made it. Bought it. Took it home.

I installed it and just stared at the interface, then started turning knobs.

Two bass engines. Two drum machines. Pattern sequencers that let you build these elaborate loops. I hardly knew what I was doing but I could make acid. That sound from Hultsfred. That sound from “Everybody Needs a 303.” Simple. Pure. Fun.

The real hardware would have cost tens of thousands of kronor, but this was everything, synced up perfectly, running together in one program.

No manual needed. No synthesis theory. Just twist the cutoff and resonance and hear that squelch come alive.

Propellerhead made software feel like playing an instrument. Not like operating a tool.

That bike ride home with the diskette in my bag, that anticipation, that certainty something was about to change. That’s what good software should feel like.

Same skills, different rules

My first boss assumed I had everything memorized. He’d watch me code during the day and nod approvingly. What he didn’t know was that I wrote most of it at home the night before and brought it to the office on diskette. We had no internet at the office.

I wasn’t exactly honest about this arrangement.

But that’s the thing. Development isn’t about memorizing syntax. It’s about knowing what exists and how to adapt it. The real skill is pattern matching and finding solutions, not being a walking encyclopedia.

He was stuck in school mode. In school, you’re expected to know everything off the top of your head. In business, you should be valued for knowing how to find the right answer.

Same skills, different rules. Most people never make the switch.

(That job isn’t on my resume anymore.)

The vibe that led me astray

Fresh off optimizing our cache system with Claude Code, I spotted another performance issue. A series of .replace() calls using regular expressions that looked perfect for improvement. Same pattern as before: inefficient code, clear bottleneck, easy win.

I made assumptions about the text processing volume and didn’t dig deeper. Claude and I wrote tests. All green. Claude rewrote the code to be faster and less memory intensive.

I told the team we were good to deploy.

Immediate failures in production.

The Cycle

Add the failing case as a test. Iterate. Deploy. Fail again. Add another test. Iterate. Deploy. Fail again.

Each time the old code passed and the new code didn’t.

After the third round and probably eight hours of my team’s time wasted, something clicked.

The Realization

When AI rewrites code to be “better,” it optimizes for passing the tests you gave it. The test suite represents your understanding of the problem, not the problem itself.

The original developer had achieved 100% test coverage. This gave me confidence that all cases were handled. But 100% coverage means you’ve executed all code paths, not that you’ve tested all scenarios. Those green checkmarks just confirm every line ran during testing, not that every real-world case got validated.

The old regex code wasn’t elegant. But it was working. All those seemingly unnecessary parts were handling edge cases the test suite never captured.
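A hypothetical sketch of how that happens (the regexes and function names here are invented for illustration, not the actual production code, and it’s written in Python even though the original was presumably JavaScript-style `.replace()` calls): two chained substitutions where order matters get “optimized” into a single pass. One Unix-newline test executes every line of both versions, so coverage reads 100%, yet only the original handles Windows line endings.

```python
import re

def normalize_original(text):
    # Order matters: the second rule assumes the first already ran.
    text = re.sub(r"\r\n", "\n", text)       # normalize Windows line endings
    text = re.sub(r"\n{3,}", "\n\n", text)   # then collapse runs of blank lines
    return text

def normalize_fast(text):
    # "Optimized" single pass: one regex instead of two.
    # It still collapses blank-line runs, but a lone \r\n is no longer
    # normalized, because that case never had a test of its own.
    return re.sub(r"(\r?\n){3,}", "\n\n", text)

# The only test in the suite. Every line of both functions executes,
# so coverage reports 100% for each, and both assertions pass.
assert normalize_original("a\n\n\n\nb") == "a\n\nb"
assert normalize_fast("a\n\n\n\nb") == "a\n\nb"

# The production input nobody tested:
print(repr(normalize_original("a\r\nb")))  # 'a\nb'   -- edge case handled
print(repr(normalize_fast("a\r\nb")))      # 'a\r\nb' -- edge case lost
```

Green checkmarks on both, correct behavior on only one. The “unnecessary” first substitution was the invisible constraint.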

What I Missed

The cache optimization worked because I understood the system deeply first. With the regex code, I saw green tests and assumed that I understood. I didn’t ask why the code was structured that way. I treated it like the cache problem because the surface looked similar.

My team trusted my judgment. I wasted their time because I confused a repeated pattern with repeated circumstances.

The Lesson

Before asking AI to improve code, understand why it works in its current form. Not just what it does, but what invisible constraints it’s solving for. Test coverage measures code execution, not correctness. And one successful optimization doesn’t guarantee the next will follow the same path.

The side quest effect

When I started learning how to code, the tutorials would lay out the main path. Do this exercise. Learn this concept. Move to the next lesson.

But I never really understood anything until I started asking what if. What if I changed this variable? What if I tried connecting these two things the tutorial kept separate?

Sometimes I’d create an endless loop and crash everything. Sometimes I’d build something surprisingly cool.

Either way, I learned more in those side quests than following the tutorial step by step.

Eventually I’d need to get back on track, so I’d save whatever I’d made and return to the official exercise.

But that code I stashed away, the stuff that wasn’t part of the lesson plan, that’s what actually taught me how things worked. The main quest gave me structure. The side quests gave me understanding.

Finding the story

Yesterday I was stuck with the daily post. So I took a new approach. I asked my LLM to look at my last 10 posts and craft three wildly diverse questions to spark my next one. Not to write my next one, but to spark ideas.

It came up with three questions. Instead of suggesting topics or themes, it analyzed my writing patterns and asked things that would make me think differently. Not “write about leadership” but “What invisible skill did you develop as a kid that still shapes how you solve problems as an adult?”

That question unlocked the first-grade prison game story. A moment I hadn’t thought about in years but that explains how I approach problems now.

Most AI writing tools try to replace the writer. This felt more like having a coach who asks good questions and gets you started. The process showed me something about creativity. Sometimes the block isn’t that you don’t have ideas. It’s that you don’t have the right question to unlock them.

The prison game

First grade. We’re debating how long someone should stay in “prison” during our recess game. Kids throwing out numbers, negotiating, slowly settling on something that sounds fair.

I’m sitting back, listening. After they land on their decision, I raise my hand.

“But if you get released and immediately get thrown back in jail, you’ll be in prison most of recess.”

The whole class goes quiet. But what I remember most vividly is my teacher’s response. She didn’t just acknowledge that I was right. She nodded with this look of approval that I had not experienced from an adult before.

That nod mattered more than the insight itself.

Decades later, an agile coach pulls me aside after a meeting. “I could see you spotted something but didn’t intervene. What happened?”

I’d learned to stay quiet by then. “I thought they might take offense.”

Instead of letting it slide, he taught me about questions. How to create space for people to discover rather than being told they’re wrong. How timing and framing matter as much as the observation itself.

Both saw something worth developing. They recognized that sitting back and seeing patterns isn’t just a personality quirk. It’s useful. But only if you know how to surface what you’re seeing.

The teacher encouraged the instinct to speak up when something doesn’t add up. The coach taught me how to do it skillfully.

Most people miss this completely. They see someone quiet in meetings and assume disengagement. They mistake observation for passivity. But the right mentors recognize when someone’s processing differently and help them turn that into something valuable.

The world needs people who spot the problem behind the problem. But it also needs people who can recognize that pattern and teach others how to use it well.

When good ideas don't rise to the top

At an early startup in the late 2000s, we had this person who saw everything that needed fixing. Process gaps, product improvements, things no one else noticed. She’d make these detailed slides showing exactly what should change and why.

She wasn’t a product owner or VP. Just someone who cared deeply about making things better. When she showed me her work, I’d think “yeah, makes sense” and go back to executing. I wasn’t interested in strategy. I just wanted to ship.

Her ideas went nowhere.

I think about this from time to time. Should I have acted differently? Did the company really have that kind of psychological safety (which wasn’t even a term yet)? I was one of the people writing the code that someone else told us to write.

As I grew into seniority, I gradually understood that ideas and viewpoints were expected of me. Not just execution.

Startups sell this mythology that good ideas rise to the top. That flat structures mean everyone’s voice matters. But she was doing real strategic thinking that should have been someone’s full-time job, and it just evaporated because she didn’t have the title.

The waste wasn’t just her time. It was that the company actually needed what she was seeing. Someone should have been paying attention. Instead she kept making slides that disappeared into the void while people like me stayed focused on our own work.

The end of forms

A doctor sits across from a patient discussing symptoms. An AI listens. When the doctor turns to open the patient record, there’s no traditional interface waiting.

No tabs for “Current Medications” or “Previous Visits” or “Lab Results.” No forms to fill out. The screen shows exactly what this conversation needs. Lab results from last month because the patient mentioned ongoing fatigue. Previous notes about the shoulder injury because they just described similar pain.

The doctor says “show me the last few sessions as a summary.” The AI adds it to the view. Everything relevant, nothing extra.

During the appointment, the AI has been taking notes. Not transcribing everything, but capturing the medical decisions. When it’s time to document, the AI presents what it understood and asks about what it didn’t. “You mentioned adjusting the dosage but I didn’t catch the new amount.” “Should I schedule a follow-up for the chest X-ray results?”

The doctor confirms, corrects and adds context. Once. The same information doesn’t get entered in three different systems or copied across multiple forms.

The interface becomes a conversation about what happened rather than a data entry task. The AI handles the busywork of figuring out which fields need updating, which systems need notifications, which follow-ups need scheduling.

This isn’t about replacing doctors with AI. It’s about replacing forms with intelligence.

The building blocks for this future are already here. Anthropic is currently testing “Imagine with Claude,” where Claude generates software interfaces on the fly.

We’re moving toward a world where software doesn’t just respond to users. It imagines itself into existence around them.

Rebuilt in months

How many months before “create a poster” doesn’t route to Canva but generates natively in ChatGPT?

OpenAI wrapped Dev Day with Agent Builder, a visual workflow tool that does what took Zapier over a decade to build. They built it in months. Drag, drop, connect APIs, deploy agents.

At the same event, they demoed a Canva integration. The presenter asked ChatGPT to make posters. Canva generated them.

The companies integrating today face a tension. They need to be where users are. But by integrating, they also show what’s possible. Canva proved design tools in ChatGPT work. Zapier’s entire model just got replicated.

When platforms can build this fast, what makes your product essential beyond the core workflow? Better templates? Deeper features? Specialized tools? The basic functionality can now be copied in months.

We’ve seen this pattern before with platforms. The difference now is speed. What took years to disrupt now takes months.

What happens when ease of use becomes the disruption vector? Your product needs to stay ahead of what can be easily replicated.

Drifting strategies

Different roles interpret the world differently. A need gets stated, passed along, reinterpreted, modified, and passed on again. Each person adds their perspective, removes what seems irrelevant to them, and continues the chain.

With each handoff, unclear variables multiply. What started as a specific problem becomes an abstract goal. What was nuanced becomes simplified. What was certain becomes assumed.

This is how visions and strategies lose traction. Someone in the chain makes a reasonable call from their point of view that shifts the entire direction. They have good intentions. They’re trying to add value. But they’re working from their interpretation, not the source.

The drift travels in both directions. Each expert translates for the next expert. Everyone’s helping in their own way.