When good ideas don't rise to the top

At an early startup in the late 2000s, we had this person who saw everything that needed fixing. Process gaps, product improvements, things no one else noticed. She’d make these detailed slides showing exactly what should change and why.

She wasn’t a product owner or VP. Just someone who cared deeply about making things better. When she showed me her work, I’d think “yeah, makes sense” and go back to executing. I wasn’t interested in strategy. I just wanted to ship.

Her ideas went nowhere.

I think about this from time to time. Should I have acted differently? Did the company really have that kind of psychological safety (not that we had a word for it yet)? I was one of the people writing the code that someone else told us to write.

As I grew more senior, I gradually understood that ideas and viewpoints were expected of me. Not just execution.

Startups sell this mythology that good ideas rise to the top. That flat structures mean everyone’s voice matters. But she was doing real strategic thinking that should have been someone’s full-time job, and it just evaporated because she didn’t have the title.

The waste wasn’t just her time. It was that the company actually needed what she was seeing. Someone should have been paying attention. Instead she kept making slides that disappeared into the void while people like me stayed focused on our own work.

The end of forms

A doctor sits across from a patient discussing symptoms. An AI listens. When the doctor turns to open the patient record, there’s no traditional interface waiting.

No tabs for “Current Medications” or “Previous Visits” or “Lab Results.” No forms to fill out. The screen shows exactly what this conversation needs. Lab results from last month because the patient mentioned ongoing fatigue. Previous notes about the shoulder injury because they just described similar pain.

The doctor says “show me the last few sessions as a summary.” The AI adds it to the view. Everything relevant, nothing extra.

During the appointment, the AI has been taking notes. Not transcribing everything, but capturing the medical decisions. When it’s time to document, the AI presents what it understood and asks about what it didn’t. “You mentioned adjusting the dosage but I didn’t catch the new amount.” “Should I schedule a follow-up for the chest X-ray results?”

The doctor confirms, corrects and adds context. Once. The same information doesn’t get entered in three different systems or copied across multiple forms.

The interface becomes a conversation about what happened rather than a data entry task. The AI handles the busywork of figuring out which fields need updating, which systems need notifications, which follow-ups need scheduling.

This isn’t about replacing doctors with AI. It’s about replacing forms with intelligence.

The building blocks for this future are already here. Anthropic is currently testing “Imagine with Claude,” where Claude generates software interfaces on the fly.

We’re moving toward a world where software doesn’t just respond to users. It imagines itself into existence around them.

Rebuilt in months

How many months before “create a poster” doesn’t route to Canva but generates natively in ChatGPT?

OpenAI wrapped Dev Day with Agent Builder, a visual workflow tool that does what took Zapier over a decade to build. They built it in months. Drag, drop, connect APIs, deploy agents.

At the same event, they demoed a Canva integration. The presenter asked ChatGPT to make posters; Canva generated them.

The companies integrating today face a tension. They need to be where users are. But by integrating, they also show what’s possible. Canva proved design tools in ChatGPT work. Zapier’s entire model just got replicated.

When platforms can build this fast, what makes your product essential beyond the core workflow? Better templates? Deeper features? Specialized tools? The basic functionality can now be copied in months.

We’ve seen this pattern before with platforms. The difference now is speed. What took years to disrupt now takes months.

What happens when ease of use becomes the disruption vector? Your product needs to stay ahead of what can be easily replicated.

Drifting strategies

Different roles interpret the world differently. A need gets stated, passed along, reinterpreted, modified, and passed on again. Each person adds their perspective, removes what seems irrelevant to them, and continues the chain.

With each handoff, unclear variables multiply. What started as a specific problem becomes an abstract goal. What was nuanced becomes simplified. What was certain becomes assumed.

This is how visions and strategies lose traction. Someone in the chain makes a reasonable call from their point of view that shifts the entire direction. They have good intentions. They’re trying to add value. But they’re working from their interpretation, not the source.

The drift travels both directions. Each expert translates for the next expert. Everyone’s helping in their own way.

The authenticity filter

A company’s real culture comes through the lens of its employees. What I mean by that is your LinkedIn company account can’t escape the authenticity filter.

Post a team photo and people see a recruitment ad. Share a milestone and people see marketing. The corporate logo at the top primes everyone to read it as promotional before they even start.

When someone posts on their own personal account about solving an interesting problem or shares expertise from their work, the company affiliation becomes a discovery, not a pitch. People look up where you work because they’re curious, not because you told them to.

The companies that get this right do something counterintuitive. They stop trying to turn employees into megaphones. Instead they support the people who actually want to build their professional presence and share what they know.

Not everyone needs to be an influencer. But when you have people who genuinely want to share their work, let them focus on building their own value. That attracts better people than any company post ever could.

Vibe Coding my way to a better cache

I couldn’t shake the feeling our cache wasn’t optimal. We’d been using it for years, and whenever we saw spikes in CPU usage, it was my main scapegoat.

The Problem

Our Cloud Run service was burning CPU cycles. Our existing cache solution was handling around 2,000 articles cycling through memory every 60 seconds, with the TTL and check period both set to 60 seconds. It worked fine, but it wasn’t optimal and something felt off.

I started looking at alternatives and landed on one of the most popular cache packages. Installed it and had Claude Code set up tests that mirrored our real usage patterns. After comparing them head-to-head, the performance difference? Negligible. Both handled our use case about the same.

But something still felt off. As I said… it was my scapegoat.

The Investigation

Instead of letting it slide and keeping the older package, I asked Claude Code to explain exactly how each approach worked under the hood. That’s when the problems became obvious.

Our current solution stored everything in one massive object. At every check period, it would serialize and deserialize the entire thing, then traverse all 2000 entries checking TTL values. Imagine doing a full table scan on your database every minute just to clean up expired records.

The alternative took the opposite route. It created individual timeout handlers for each cache entry. 2000 articles meant 2000 active timers, each adding its own overhead to the event loop.
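In simplified form, the difference between the two expiry strategies looks like this. These are illustrative sketches of the shape of the work each library does, not their actual code:

```javascript
// Strategy 1: periodic full scan. Every check period, walk every
// entry and test its TTL — O(n) work every minute, even if nothing
// has expired.
function sweepAll(store, now = Date.now()) {
  for (const key of Object.keys(store)) {
    if (store[key].expiry <= now) delete store[key];
  }
}

// Strategy 2: one timer per entry. 2000 entries means 2000 live
// timers, each adding its own overhead to the event loop.
function setWithTimer(store, key, value, ttlMs) {
  store[key] = value;
  setTimeout(() => delete store[key], ttlMs);
}
```

One strategy pays its cost on every sweep; the other pays it on every insert and keeps paying it until the timer fires.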

Both solutions were solving the wrong problem for us. One was doing too much work per check, the other was creating too many workers. The hunch about the existing cache being suboptimal was right, but the alternative wasn’t much better. CPU-wise or memory-wise.

The Solution

What if I could separate the concerns? One object to track keys and their TTL values, another to store the actual data. Run a single check period that only looks at the TTL object.

When an entry expires, delete from both objects. When you need data, check the TTL object first, then fetch from the data object if it’s still valid.

Yes, it uses marginally more memory. But memory is cheap on Cloud Run. CPU cycles are expensive.
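A minimal sketch of that split design, with one map for TTL bookkeeping and one for the payloads. The class and method names are mine, not the production code, and the injectable clock is just there to make the sketch testable:

```javascript
class TtlCache {
  constructor({ ttlMs = 60_000, now = Date.now } = {}) {
    this.ttlMs = ttlMs;
    this.now = now;            // injectable clock, handy for testing
    this.expiries = new Map(); // key -> expiry timestamp (small, cheap to scan)
    this.data = new Map();     // key -> value (never traversed by the sweep)
  }

  set(key, value) {
    this.expiries.set(key, this.now() + this.ttlMs);
    this.data.set(key, value);
  }

  get(key) {
    const expiry = this.expiries.get(key);
    if (expiry === undefined) return undefined;
    if (expiry <= this.now()) {
      // Expired: evict from both maps on read.
      this.expiries.delete(key);
      this.data.delete(key);
      return undefined;
    }
    return this.data.get(key);
  }

  // Single periodic sweep: only looks at the expiry map, never
  // serializes or walks the payloads, and needs no per-entry timers.
  sweep() {
    const t = this.now();
    for (const [key, expiry] of this.expiries) {
      if (expiry <= t) {
        this.expiries.delete(key);
        this.data.delete(key);
      }
    }
  }
}
```

In a real service, `sweep()` would run on a single `setInterval` at the check period: one timer total, and each tick touches only the small bookkeeping map.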

I asked Claude Code to build this approach and create a comprehensive test suite. We borrowed test patterns from the existing library to ensure compatibility, then ran everything in random order to catch any hidden dependencies.

The code worked immediately. Claude Code reported some astronomical speedup number (412x faster, which was probably measuring some micro-operation), but the real-world impact was what mattered: CPU usage dropped from 50% to 25%.

This is what vibe coding enables. You have a gut feeling that something’s not right, so you dig deeper instead of accepting the obvious solution. You ask questions about implementation details that are easily skipped. You trust your intuition about performance characteristics and let AI help you explore alternatives rapidly.

Neither approach was bad. Our existing solution had served us well for years. The alternative was popular for good reasons. They just weren’t optimized for this specific pattern. One prioritized simplicity of implementation. The other prioritized individual entry control. Neither prioritized bulk expiration efficiency.

The key was asking the right questions:

  • How exactly does TTL checking work?
  • Where are the CPU cycles actually going?
  • What happens during the check period?
  • Can we separate the expensive operations from the frequent ones?

Since the two existing packages performed about the same, we could easily have skipped the upgrade. But that nagging feeling about optimization was pointing toward something better.

The Lesson

I haven’t released this as a package because it’s deeply integrated into our codebase. It makes assumptions about our specific use patterns and doesn’t handle edge cases that a general-purpose library would need to support.

That’s the trade-off with vibe coding. You can build exactly what you need, optimized for your constraints, but it’s not necessarily portable. Sometimes the perfect solution for your problem isn’t the right solution for everyone else’s problem.

The lesson isn’t that you should always build custom cache solutions. It’s that you should understand your tools well enough to know when they’re not quite right, and have the confidence to explore alternatives when your gut says there’s a better way.

No nice to haves

If you down-prioritize all the nice-to-haves in your product, then your product will not be nice to have.

Copy, steal, follow or borrow your why

Simon Sinek tells us to start with why. To identify our passion and discover our purpose.

But you don’t need to come up with the why. You can just find someone else’s why.

Copy, steal, follow or borrow.

Most people who’ve contributed to meaningful causes didn’t invent them. They found something that mattered and signed up. The civil rights activist who joined an existing movement. The researcher working on someone else’s hypothesis. The employee building someone else’s company.

Looking for YOUR unique purpose usually leads to paralysis. Meanwhile someone else’s why is already out there with infrastructure, community and momentum.

So if you’re stuck searching, look around at the whys that already exist. Find one that seems important. Then just start.

Your why doesn’t need to be original. It just needs to be enough to get you moving.

Make it groove

Back when I made music with basic drum machines, everything sat perfectly on the grid. Mathematically correct. Completely lifeless.

You had to manually add swing to push the 16th notes slightly forward. That gave you groove. The offbeats shifted later in time and suddenly the rhythm had pocket. Without swing, hi-hats and snares hit with robotic precision that made everything feel stiff.

Modern DAWs do this automatically. They randomize both timing and velocity. Some notes rush a bit, others drag. Some hits are louder, others softer. The grid breaks just enough to sound human.

AI writing has the same problem. It sits perfectly on the grid. Every point explained. Every connection spelled out. No space for the reader to fill in gaps.

The fix is similar. Strip out everything someone can read between the lines. Let some ideas hit harder than others. Add your personal stories because those are the velocity changes AI can’t generate.

Take something rigid and mathematical. Make it groove.

Target vs Market: Legacy-driven broadness

Existing companies make the opposite mistake of new companies but end up in the same place. “We have 50,000 customers so our new feature needs to work for all 50,000.” This sounds logical until you realize those customers probably break into distinct groups with completely different needs.

Building for everyone means the feature ends up mediocre for everyone, instead of transformative for a meaningful subset who become your internal advocates. A useful reframe is asking which 5,000 of your 50,000 customers would get the most value from this new capability. Start there, nail it, then expand. The other 45,000 aren’t going anywhere.

Legacy-driven broadness happens because companies mistake their current customer base for their target market. But your customer base is the result of all your previous targeting decisions. Your new feature doesn’t need to serve that entire base. It needs to serve the people who have the problem you’re trying to solve.

The irony is that the “safe” approach of building for your entire market is a riskier strategy. You’re spreading resources thin across a dozen half-solutions instead of creating one solution that people can’t live without.