AI adoption follows the usual pattern

McKinsey’s latest research shows that 80% of companies aren’t seeing meaningful returns from their AI investments, while only 17% are seeing real results.

But here’s what I find interesting about that number.

Most major technologies follow this adoption curve: 2-3 years of experimentation and learning, followed by gradual scaling and measurable returns in years 3-7.

GenAI is actually moving faster than typical, but the timeline for measurable enterprise-wide returns appears consistent with other major technology shifts.

The 80% figure isn’t a GenAI problem. It’s the normal pattern for transformative technologies.

We’ve seen this before.

ERP systems took 2-5 years to show significant ROI. Cloud computing took years before most companies saw meaningful returns.

GenAI has been widely available for just 2 years.

If you’re struggling to show enterprise-wide returns from AI right now, you’re not behind schedule.

You’re right on time.

Having a conversation with my own writing

I’ve been writing every day this year.

Today I fired up Claude Code and added an MCP to my blog.

For the non-technical folks: I basically gave Claude direct access to all my writing. It can now read through everything I’ve written, search for patterns, find connections I missed.

The tool gave me something I didn’t expect: a genuine curiosity about my own work.

I used to Google my own site to find a specific post I remembered. Now I can ask “What was I thinking about in March that I didn’t fully explore?” or “Show me where I keep iterating on the same idea but in different ways.”

It’s like the difference between having a messy room and well… having a messy room with a really good search function. Same stuff, completely different relationship to what’s there.

Now I have a way to be genuinely curious about what I’ve created. To ask it questions. To explore it like someone else’s work.
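For the technically curious: stripped of the MCP plumbing, the core of such a setup is just an index over your posts that a tool can query. Here’s a minimal sketch in Python of that search logic, with hypothetical stand-in posts rather than my actual archive or server code:

```python
import re
from collections import Counter

def search_posts(posts, query):
    """Rank posts by how often the query's terms appear in their bodies.

    posts: list of {"title": str, "body": str} dicts.
    Returns the titles of matching posts, most relevant first.
    """
    terms = [t.lower() for t in re.findall(r"\w+", query)]
    scored = []
    for post in posts:
        words = Counter(re.findall(r"\w+", post["body"].lower()))
        score = sum(words[t] for t in terms)
        if score:
            scored.append((score, post["title"]))
    return [title for _, title in sorted(scored, reverse=True)]

# Hypothetical posts standing in for a real blog archive
posts = [
    {"title": "On delegation", "body": "I delegate constantly. Delegation is leverage."},
    {"title": "March notes", "body": "Half-formed ideas about writing and voice."},
]

print(search_posts(posts, "delegation"))  # the delegation post ranks first
```

A real setup would read markdown files from disk and expose this as a tool the model can call, but the interesting shift isn’t in the code. It’s that the questions come from conversation instead of keywords.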

The kid who needed help with group work

I just read a study called “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.”

Researchers had students write essays using ChatGPT while measuring their brain activity. No training, no guidance. Just figure it out.

Students who started with AI first? Their brains basically went to sleep. But students who thought through ideas themselves THEN used ChatGPT? Their neural networks lit up stronger than ever.

This reminded me of that kid from school group projects. Whenever we had a group assignment he just hung around, not really invested in the discussions or, well… interested at all. He performed poorly on the tests and presentations.

The study found AI-first students couldn’t even remember sentences they’d just written. But the think-first-then-collaborate kids? Strongest brain activity of all.

Working with AI is like a group assignment, and plenty of us struggle with it the way that kid did. The study suggests we need different kinds of support, similar to how that kid needed support.

Finding my voice through AI

As an engineer and CTO, I delegate constantly. I use JavaScript packages instead of writing functions from scratch. I build on frameworks rather than coding everything myself. I rely on CDNs instead of building my own content delivery network.

That’s exactly how I approach AI in writing.

I used to struggle getting my thoughts down clearly. My ideas would get muddled somewhere between my brain and the page. AI became my thinking partner. Someone to bounce ideas off of and help me get my thoughts crisp and tight.

But here’s the crucial distinction: I still want my ideas and stories to shine. Not fabricated ones. What’s truly unique is my viewpoint and experience. Not my writing craft.

I found my voice not despite using AI, but partly because of it. The tool helped me see patterns in my thinking. Pushed me to be clearer. Gave me confidence to share ideas I might have kept to myself.

The fear is that AI will make us all sound the same. That only happens if you treat it like a vending machine. Generic prompts get generic content. Use it as a thinking partner and you get something else entirely.

Your voice isn’t just how you write. It’s what you choose to write about. How you think about problems.

Web search is coming full circle

The web is coming full circle.

We’ve been moving toward ‘Google Zero’ for years, where people get their answers directly from search results without ever clicking through to websites. Featured snippets, knowledge panels, instant answers. AI overviews are just the logical next step in this progression.

Are we seeing a return to the early web’s “answers upfront” approach? I’m thinking Yahoo’s curated portals, but with AI doing the curation automatically.

For the longest time we used “google us” as proof that we were legit, and we taught users that searching was the way to verify a brand. The problem is that the model got corrupted along the way: SEO spam and sponsored results meant you weren’t exploring authentic sources anymore.

AI overviews aren’t just about creators losing traffic. They’re a correction to a discovery system that was already broken.

Brand awareness becomes even more important. People need to know to come to your site directly when they want your specific view of the story, not just the AI’s summary.

Metrics before market fit

We obsess over metrics and minuscule shifts before product-market fit has shown itself.

We chase 2% shifts in conversion rates while our products haven’t found their long-term audience.

Cut through the complexity

The best communicators don’t hide behind complexity.

They cut through it.

They don’t use complex jargon.

They adapt their communication to the listener.

They explain complexity in simple ways.

Human decision-making happens in the negative space

AI struggles because human decision-making happens in the negative space.

The things we don’t do, the paths we don’t take, the features we don’t build.

Think about a senior developer looking at a junior’s code and saying “this works, but it’s doing too much.”

Or a designer choosing not to add that extra button because it would clutter the experience.

A product manager who says no to a feature request because it will dilute the core value proposition.

Those decisions to constrain, to say no, to leave things out. They’re often what separates good work from great work.

AI can generate endless possibilities, but it can’t feel the weight of choosing restraint.

Em dashes are the Sharknado of text

The more I think about it, em dashes are the Sharknado of writing.

What 30 years ago would have been a crazy feat now feels like obvious special effects.

You know that moment in a movie when the CGI gets so obvious it breaks your immersion? That’s what’s happening with em dashes in 2025. We’ve developed “AI blindness” where our brains filter out patterns that feel artificial.

The real issue isn’t the punctuation itself. It’s the moment readers start questioning authenticity instead of engaging with ideas.

We’re creating an “authenticity uncanny valley.”

Meanwhile professional writers have been loving em dashes since long before ChatGPT existed. But now they’re second-guessing themselves, worried their natural style might trigger someone’s AI detector.

The same technique goes from impressive to eye-roll-inducing purely based on audience expectations, not because the thing itself has changed.

The feedback loop is absurd: the more people avoid certain patterns to seem “authentic,” the more those patterns become markers of inauthenticity.

Welcome to 2025, where even punctuation has an uncanny valley.

Mastering the Ask Era

In Back to the Future Part II, Marty McFly walks into Cafe 80’s in 2015 and plays an old arcade game. He’s pretty good at it. But when he finishes, a couple of kids watching him are baffled: “You mean you have to use your hands? That’s like a baby’s toy!”

Something big has changed in how we find information. Before, we searched Google, clicked through a bunch of links, and pieced everything together ourselves. Now we just ask a question and get an answer we can improve through conversation.

AI that can search isn’t just another tool. It’s a completely different way of working with information.

The Old Way: Search and Put It Together

This was the Google era. You got good at turning your questions into search terms, quickly scanning results, and opening way too many browser tabs. If you were really organized, you saved links for later.

Then you did the hard work of reading through different websites, figuring out what was useful, and putting it all together in your head to get your answer.

You learned to spot good sources. You could tell when a website was sketchy because it had more ads than actual content. You got better at seeing patterns across different articles.

The New Way: Ask and Check

Now that we’re in the conversation era, your main skill is asking good questions. Instead of hunting through search results and trying to avoid all the sponsored links at the top, you’re talking to an AI that can instantly pull together information from the entire internet, your own files, and your company’s data.

Your brain does completely different work now. Instead of collecting puzzle pieces, you’re checking whether the completed puzzle looks right. You need to quickly figure out if an answer is complete, correct, and actually helpful for what you need.

The question isn’t whether you’ll make this switch. It’s how quickly you’ll realize that how you work with information has changed forever.