Internal soundtracks
Your thoughts are the internal soundtracks you listen to even more than your favorite song. - Jon Acuff - Soundtracks
I was visiting New York and the offices of a company I was working with. It was one of those startups that poured millions into making relocation attractive: a super nice office you actually liked hanging out in, plus a bunch of free stuff. Drinks, candy, breakfast, lunch. They even had cold brew on tap. I loved it. Cold coffee.
One day a newly opened restaurant catered sandwiches for lunch. I recall vividly how one guy complained on the company email list about his sandwich, bashing the restaurant: too much mayonnaise, and so on.
That’s when I started using the phrase “f*@$ing free food” to capture the absurdity of complaining about something free. “I know… f&%#ing free food… am I right?”
This is the same feeling that comes up when someone complains about social networks like LinkedIn. People say it has gotten worse, that it no longer suits their needs. It’s all sales and cold outreach. Recruiters who don’t read your resume pitch you odd jobs. AI slop everywhere. Productivity bros and gals pumping out content.
We somehow feel entitled to a glorious experience just because we’ve invested a couple of hours a week approving contact requests.
We could always leave. Build our own websites. Send actual emails to people we want to talk to. Go to real conferences and shake real hands. But we won’t, because for all the complaints LinkedIn still does the work of keeping us visible and connected without us having to do much of anything.
“I know… f%*&ing free social media… am I right?“
I’ve been talking about AI a lot lately. But I think that comes from this awesome feeling of tinkering. We don’t quite know what the benefits are yet or how we’ll use it. So there’s no real plan, just curiosity. You test and try what works. Repeat. Do it again. A new model comes out. Repeat. That’s what I love about it.
I can build a thousand small apps that suit whatever need I have in the moment. Then I can break them apart to see what parts can be reused elsewhere. It’s the tinkering mindset that really draws me to this new generative AI world.
Let the LLM write the code. The first version might be too big, but that’s fine. It figured out the problem by writing the solution. That’s how autocomplete works.
Now ask it to simplify. Make it smaller, cleaner, easier to understand.
This mirrors Kent Beck’s classic approach: make it work, then make it right.
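A toy sketch of what that loop can look like (my example, not code from the author): a first pass of the kind an LLM often produces, verbose but working, followed by the version you get after asking it to make it smaller and cleaner.

```javascript
// Toy example of "make it work, then make it right".
// First pass: verbose, but correct. It figured out the problem.
function countWordsFirstPass(text) {
  const words = [];
  for (const chunk of text.split(" ")) {
    const cleaned = chunk.trim();
    if (cleaned !== "") {
      words.push(cleaned);
    }
  }
  const counts = {};
  for (const word of words) {
    if (counts[word] === undefined) {
      counts[word] = 1;
    } else {
      counts[word] = counts[word] + 1;
    }
  }
  return counts;
}

// After "make it smaller, cleaner, easier to understand": same behavior.
function countWordsSimplified(text) {
  const counts = {};
  for (const word of text.split(/\s+/).filter(Boolean)) {
    counts[word] = (counts[word] ?? 0) + 1;
  }
  return counts;
}
```

The point isn’t the word counter; it’s that the second version is only worth asking for once the first one exists and works.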
There is a difference between coding and engineering. Coding is about writing code. Engineering is about understanding that piece of code in its bigger context.
Before 1901, Ohio State had a grassy field between their library and University Hall. No paths laid down yet. Students walked to class anyway, carving routes through the grass as they went. Over time those worn trails became permanent, and eventually the university paved them. The geometric pattern they form is still there today, defining The Oval at the center of campus.
Product builders do something similar when they make tools flexible enough that people can bend them in unexpected ways. Boris Cherny at Anthropic calls this finding latent demand. You build something hackable and open-ended, then watch how people abuse it for cases you never designed for. Those unintended uses show you what people actually want. That’s apparently how Claude Code gets built. They watch what people hack together, then build features to support those patterns.
Both approaches are trusting the same thing. That observation beats assumption. That the people using something will show you what they need if you give them room to move and then pay attention to where they go.
The impact is remarkably similar too. You end up building what people actually need instead of what you thought they needed. The paths get laid where people already walk. The features get built where people are already working.
When gen AI is “thinking,” it’s deciding which part of its knowledge to search more carefully. Like a librarian checking a specific shelf instead of running around grabbing random books.
Two years ago we called this prompt engineering. We told the chatbot where to look. Now it figures that out itself. That’s the main difference.
The thinking lets it course-correct too. It can start at one shelf, realize that’s wrong, and move to another. With prompt engineering we had to guess right the first time.
Build products from real friction, not from ideas that sound good.
The current AI boom feels like the late 90s, reminiscent of the GeoCities era when anyone with Notepad and a few lines of HTML could make history. Now everyone’s building tools using AI in their spare time. That was a crazy time to be alive, and this is similar.
Infrastructure doesn’t need to be planned. It can emerge from experiments when keeping them around costs nothing.
I pay $5 a month for Cloudflare Workers. That gives me room for hundreds of microservices. So when I build something, I just add another one.
Right now I’m building LLM-specific microservices. One for Claude, one for OpenAI, one for each model I use. Could I build one service that handles all of them? Sure. But that gets complex. Separate services means separate simplicity.
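A hypothetical sketch of one such per-model service. The endpoint, model id, and payload shape are my illustrative assumptions, not the author’s actual Workers; what matters is that the whole service is one fetch handler that only knows a single provider.

```javascript
// Hypothetical per-model Cloudflare Worker. Endpoint, model id, and
// payload are illustrative assumptions, not the author's real services.
const CLAUDE_URL = "https://api.anthropic.com/v1/messages";

// One model, one payload shape: nothing to branch on, nothing to route.
function buildUpstreamRequest(prompt) {
  return {
    url: CLAUDE_URL,
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      model: "claude-3-5-sonnet-latest", // illustrative model id
      max_tokens: 1024,
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

// In a real Worker, this object would be the module's default export.
const worker = {
  async fetch(request) {
    const { prompt } = await request.json();
    const up = buildUpstreamRequest(prompt);
    // Forward to the one upstream this service exists for.
    return fetch(up.url, { method: up.method, headers: up.headers, body: up.body });
  },
};
```

Because each Worker stays this small, supporting a new model means copying the file and changing a couple of constants, not growing a shared router.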
No roadmap. No grand design. Just solving problems and keeping the solutions around.