Prompt files are the new dotfiles

If AI can write any code, then the real value is in understanding systems well enough to give the right instructions.

Top developers will focus on having great prompt files that explain how they work. The tools they like, the APIs they use. How they want code to be tested and written.

It will be the dotfiles of the future. Personal config files that define your entire coding style and follow you from project to project.
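To make that concrete, here’s a hypothetical sketch of what one of those files might look like (the filename and every rule in it are purely illustrative):

```markdown
<!-- ~/.prompts/coding-style.md (hypothetical example) -->

## How I work
- TypeScript everywhere, strict mode on, no `any`.
- Small, pure functions over classes; composition over inheritance.

## Testing
- Every new module ships with unit tests.
- Test behavior, not implementation details.

## Tools and APIs
- Use the platform's native `fetch` for HTTP, no extra client library.
- Errors come back as typed results, not exceptions thrown across layers.
```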

Sure, you can copy someone else’s prompt files. But to really benefit, you need to understand and own them, just like traditional dotfiles.

Anyone can ask ChatGPT to write a React component. But not everyone knows how to ask for error handling, team patterns, and optimization for their users.

Stop waiting for comfortable

Just stop waiting to be comfortable. There will never be a time when you’re so comfortable that it’s easy to start. Don’t put expectations on your surroundings before you get started.

Do I need a perfect synthesizer? Do I need to sit in a special chair? Do all the kids need to be in bed and asleep? Do I need a two-hour window?

Stop waiting to be comfortable.

We're supposed to be tech experts

I’ve noticed something interesting with tech companies and their websites. They often struggle to keep their own sites updated because they built everything custom from scratch.

It’s rarely about actual technical requirements. There’s this unspoken feeling that using existing tools might make them look less skilled as developers.

The result is predictably frustrating. While others focus on shipping products, these teams struggle to even update their website with new features because their custom CMS has become a maintenance burden that multiple developers have touched over time.

It’s one of those situations where professional pride works against business outcomes.

Domain names will matter more than SEO

The connection with people might be the only thing that will set one company apart from another in the near future.

Meanwhile, everyone else focuses on making it easier for ChatGPT to index all their content, hoping it picks theirs.

The only way for people to really get to know your brand will be for them to know how to find it.

I predict a surge in ads talking about domain names in the near future.

The priority inversion trap

I’ve watched several companies try the same clever approach: build a product while running a consultancy. Use downtime to work on the product. Makes sense, right?

The logic is simple. When developers aren’t busy with client work, they focus on building something that could eventually become the main business. Sounds like smart resource utilization.

But here’s where it gets interesting.

The challenge always became prioritization. How do you decide what these “spare time” developers should work on? You can’t give them the most critical features because what if a client project comes up and pulls them away?

So you end up giving them the less important stuff. The nice-to-have features. The experimental work. Tasks without real deadlines or with artificial ones just to create some focus.

That sounds reasonable. Low-risk, low-commitment work for uncertain availability.

But then you hit the handoff problem.

Eventually, this side work needs to integrate with the main product. The core team (the one working on actual priorities) has to stop what they’re doing to review, understand and approve the work from the “spare time” team.

That’s when things get messy.

You end up with a priority inversion. Your high-priority work gets deprioritized so you can process the low-priority work you assigned to fill downtime.

The thing you said was most important gets pushed aside to handle the thing you said was least important.

I’ve seen this pattern multiple times now. It always looks smart on paper. Use every available hour. Keep people productive. Build toward the future.

But coordination isn’t free. Context switching isn’t free. Code review isn’t free.

The overhead of managing “spare time” work often costs more than the value it creates. You’re not just losing the output, you’re actively making your core work less efficient.

It’s one of those management ideas that optimizes for the wrong thing. Instead of optimizing for maximum utilization, you should optimize for maximum progress on what actually matters.

There are plenty of valuable things to do with spare time. Code reviews, documentation, experimenting with existing features, learning new skills, hanging out with the support team. The list goes on.

Just don’t pretend you can build your core product features in the margins.

Make-work syndrome

When development teams are organized around services that mostly need maintenance, they often end up reinventing the product instead. Why? Because maintenance work feels boring compared to building something new.

AI adoption follows the usual pattern

McKinsey’s latest research shows that 80% of companies aren’t seeing meaningful returns from their AI investments, while only 17% are seeing real results.

But here’s what I find interesting about that number.

Most major technologies follow this adoption curve: 2-3 years of experimentation and learning followed by gradual scaling and measurable returns in years 3-7.

GenAI is actually moving faster than typical, but the timeline for measurable enterprise-wide returns appears totally consistent with other major technology shifts.

The 80% figure isn’t a GenAI problem. It’s the normal pattern for transformative technologies.

We’ve seen this before.

ERP systems took 2-5 years to show significant ROI. Cloud computing took years before most companies saw meaningful returns.

GenAI has been widely available for just 2 years.

If you’re struggling to show enterprise-wide returns from AI right now, you’re not behind schedule.

You’re right on time.

Having a conversation with my own writing

I’ve been writing every day this year.

Today I fired up Claude Code and added an MCP server to my blog.

For the non-technical folks: I basically gave Claude direct access to all my writing. It can now read through everything I’ve written, search for patterns, find connections I missed.
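For the technical folks, here’s a minimal sketch of what an MCP server like this could look like, using the official Python MCP SDK. The folder path, server name and tool names are illustrative stand-ins, not my actual setup:

```python
# Hypothetical sketch: expose a folder of markdown posts to Claude via MCP.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

POSTS_DIR = Path("content/posts")  # wherever the blog's markdown files live

mcp = FastMCP("blog-archive")

@mcp.tool()
def list_posts() -> list[str]:
    """Return the filename of every post in the archive."""
    return sorted(p.name for p in POSTS_DIR.glob("*.md"))

@mcp.tool()
def read_post(filename: str) -> str:
    """Return the full text of a single post."""
    return (POSTS_DIR / filename).read_text(encoding="utf-8")

@mcp.tool()
def search_posts(query: str) -> list[str]:
    """Naive full-text search: filenames of posts containing the query."""
    query = query.lower()
    return [
        p.name
        for p in POSTS_DIR.glob("*.md")
        if query in p.read_text(encoding="utf-8").lower()
    ]

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, which Claude Code can talk to
```

Point Claude Code at it (something like `claude mcp add`) and the questions below become possible.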

The tool gave me something I didn’t expect: a sort of curiosity about my own work.

I used to Google my own site to find a specific post I remembered. Now I can ask “What was I thinking about in March that I didn’t fully explore?” or “Show me where I keep iterating on the same idea but in different ways.”

It’s like the difference between having a messy room and well… having a messy room with a really good search function. Same stuff, completely different relationship to what’s there.

Now I have a way to be genuinely curious about what I’ve created. To ask it questions. To explore it like someone else’s work.

The kid who needed help with group work

I just read a study called “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.”

Researchers had students write essays using ChatGPT while measuring their brain activity. No training, no guidance. Just figure it out.

Students who started with AI first? Their brains basically went to sleep. But students who thought through ideas themselves and THEN used ChatGPT? Their neural networks lit up stronger than ever.

This reminded me of that kid from school group projects. Whenever we had a group assignment, he just hung around. Not really invested in the discussions or, well… interested at all. He performed poorly on the tests and presentations.

The study found AI-first students couldn’t even remember sentences they’d just written. But the think-first-then-collaborate kids? Strongest brain activity of all.

Working with AI is like a group assignment, and this kid struggled with those. The study suggests we need a different kind of support, just like that kid did.

Finding my voice through AI

As an engineer and CTO, I delegate constantly. I use JavaScript packages instead of writing functions from scratch. I build on frameworks rather than coding everything myself. I rely on CDNs instead of building my own content delivery network.

That’s exactly how I approach AI in writing.

I used to struggle getting my thoughts down clearly. My ideas would get muddled somewhere between my brain and the page. AI became my thinking partner. Someone to bounce ideas off of and help me get my thoughts crisp and tight.

But here’s the crucial distinction: I still want my ideas and stories to shine. Not fabricated ones. What’s truly unique is my viewpoint and experience. Not my writing craft.

I found my voice not despite using AI, but partly because of it. The tool helped me see patterns in my thinking. Pushed me to be clearer. Gave me confidence to share ideas I might have kept to myself.

The fear is that AI will make us all sound the same. That only happens if you treat it like a vending machine. Generic prompts get generic content. Use it as a thinking partner and you get something else entirely.

Your voice isn’t just how you write. It’s what you choose to write about. How you think about problems.