Building my own AI assistant

I got curious about the tech behind AI assistants like ClawdBot/MoltBot/OpenClaw when they showed up a few weeks ago. I ran one for a couple of hours on my machine before I got the feeling the value didn’t match the amount of tokens it was burning. But an idea was born and I started chatting with Claude Code about possible solutions.

Could you build an AI assistant that used Claude Code itself the way Anthropic intended, without getting blocked the way OpenCode users were?

About five months ago Anthropic released the Claude Agent SDK: a way to build tools on your machine that use Claude Code itself, with all its capabilities. That solved the first problem.
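
For the curious, here is a minimal sketch of what that can look like with the SDK’s TypeScript package. It’s an illustration rather than the actual assistant: it assumes query() yields an async iterable of messages that ends in a result message, and the option and field names are from memory of the docs, so treat them as approximate.

```typescript
// Minimal sketch, assuming the Agent SDK's query() interface:
// an async iterable of messages, ending with a "result" message.
// Option and field names may differ from the current docs.
import { query } from "@anthropic-ai/claude-agent-sdk";

export async function ask(prompt: string): Promise<string> {
  let reply = "";
  for await (const message of query({
    prompt,
    options: {
      allowedTools: ["Read", "WebSearch"], // keep the agent limited to what the task needs
      maxTurns: 10,
    },
  })) {
    if (message.type === "result" && message.subtype === "success") {
      reply = message.result; // the agent's final text answer
    }
  }
  return reply;
}
```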

The second was about security. The solution was running the assistant inside Docker. A computer inside a computer that only has access to what’s inside.

But to make it valuable it needed access to email, calendar and various other things without exposing keys or giving it direct access. That’s where the idea of a layer in between was born. A proxy written in plain code that decides what the assistant gets to see and what it doesn’t.

The layer provides access to mail, calendar and a way to use more demanding features, like speech to text, that need more compute. In this layer I also decide what gets filtered before it reaches the assistant and what permissions it has: not reading certain emails, not sending emails (only creating drafts), not creating calendar invitations that involve other people, and so on.
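
A rough sketch of the idea, with made-up route names, blocked senders and stubbed mail calls; the point is that the credentials and the rules live in plain, reviewable code, and the assistant only ever sees what this service chooses to return.

```typescript
// Hypothetical sketch of the in-between layer. The assistant (inside Docker)
// can only reach this service; the real mail credentials stay in this process.
import express from "express";

type Mail = { from: string; subject: string; snippet: string };

// Stand-ins for the real mail provider calls.
async function fetchInbox(): Promise<Mail[]> { return []; }
async function createDraft(to: string, subject: string, body: string): Promise<void> {}

const app = express();
app.use(express.json());

// Mail the assistant is never shown (illustrative values).
const BLOCKED_SENDERS = ["bank.example.com", "hr@example.com"];

app.get("/mail/inbox", async (_req, res) => {
  const messages = await fetchInbox();
  const visible = messages
    .filter((m) => !BLOCKED_SENDERS.some((s) => m.from.includes(s)))
    .map((m) => ({ from: m.from, subject: m.subject, snippet: m.snippet })); // no bodies, no attachments
  res.json(visible);
});

// Drafts only: there is deliberately no /mail/send route.
app.post("/mail/draft", async (req, res) => {
  await createDraft(req.body.to, req.body.subject, req.body.body);
  res.json({ status: "draft created" });
});

app.listen(8080);
```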

All development is done by me talking to Claude, and Claude talking to the assistant, about possible solutions. Then they tell me.

All communication happens through Telegram from my side. I talk and send. It receives, thinks and messages back. It reads through newsletters and sales emails, creates drafts, and suggests times when a meeting could work. I tell it to block five-minute slots for getting small things done. During the day and night it does research I can read through in the morning. It looks only for positive news on all the major news sites.
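
The wiring on the Telegram side can be as small as this sketch, assuming the node-telegram-bot-api package and reusing the hypothetical ask() helper from the SDK sketch above; the chat-id check is what keeps everyone else out.

```typescript
// Sketch of the Telegram loop. MY_CHAT_ID and the "./assistant" module
// (the ask() wrapper from the SDK sketch above) are placeholders.
import TelegramBot from "node-telegram-bot-api";
import { ask } from "./assistant";

const bot = new TelegramBot(process.env.TELEGRAM_BOT_TOKEN!, { polling: true });
const MY_CHAT_ID = Number(process.env.MY_CHAT_ID);

bot.on("message", async (msg) => {
  // Ignore anyone who isn't me, and anything that isn't plain text.
  if (msg.chat.id !== MY_CHAT_ID || !msg.text) return;

  const reply = await ask(msg.text); // hand the message to the assistant and wait
  await bot.sendMessage(msg.chat.id, reply);
});
```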

And this is just the beginning.

Perfect choice is sometimes not requested

The Star Trek replicator gives you anything you want, which means you have to decide what you want. Sometimes the luxury is letting someone else choose. SVT at 8pm made the decision for you. The DJ played a song you wouldn’t have picked. Infinite personalisation sounds like freedom but it’s also infinite cognitive load. In a world where AI can generate everything, the scarcity isn’t content. It’s trusted humans willing to narrow infinity down to three things worth your time.

Atoms move at atom speed

Digital content scales because failure is free. A million unread blog posts cost nothing but server space. But physical products demand validation before production. Unsold inventory is real money rotting in warehouses. Shelf space is finite. You can’t A/B test chip flavours the way you test headlines. The physical world has irreducible constraints that compress the infinite back down to the manageable. AI speeds up the digital. The atoms still move at atom speed.

AI bottleneck

AI can generate a thousand product ideas before lunch, but someone still has to tell you if any of them are good. Speed up idea generation 100x and you don’t get 100x better products. You get a queue of untested concepts waiting for the same limited pool of humans to validate them. The bottleneck isn’t creativity anymore.

UX vibes vs Code vibes

An idea I’ve dabbled with

Vibe coding is all about focusing on the user experience rather than on the code that creates that experience.

The non-scalability issue

People argue about scalability when they see products built with AI-assisted development. They imagine what happens when the user base grows or the codebase expands, i.e. technical debt piling up.

Many products built this way solve specific problems for specific people. A tool that helps a company process their invoices. A system that generates reports for a particular team. An interface that automates one workflow that’s been eating up someone’s time.

When you build something to solve an actual problem you have right now, the constraints are already built in. You know the scope. You know the users. You know what “done” looks like.

The scalability critique applies to products trying to be everything to everyone.

Start with facts, not insights

For a couple of years I’ve had this folder called Braindump. It’s where I write my somewhat on-and-off journal, often when my brain starts getting too clogged up.

One thing I’ve noticed about journaling that makes it way easier: start with facts before insights.

“But of course” you might think. The struggle is that when we start journaling, we think we should only write profound things. We want it to become a book of deep thoughts. But our deep thoughts usually come after we’ve laid out the bare facts.

In other words, it’s way easier to start with what you did, what you were thinking about, what you saw or noticed. Then go deeper if possible.

Declarative and procedural knowledge

A thing I’ve been thinking about lately is the idea of declarative vs procedural knowledge. In other words, knowledge that something exists versus knowledge of how it is done.

The feeling is that AI tools are removing the need for step-by-step procedures for most things. Knowing that queues and databases exist, that there’s a musical scale called Phrygian, what camera body or film stock or shallow depth of field mean. The vocabulary and concepts themselves become more important than memorizing how to execute them.

Because once you know the vocabulary, AI can handle the procedure. You need to know Phrygian exists to ask for it. You need to know what shallow depth of field is to request it. But memorizing the scale pattern or calculating the f-stop becomes optional.

The generalist advantage - part three - a new role enters the arena

By the end of 2026 the highest-ROI hire at early-stage startups will be someone who doesn’t fit any existing job title. Not a PM. Not a developer. Not a designer. Someone who can do all three well enough and ship fast enough that traditional role boundaries don’t matter.

Someone with 10-20 years of experience who’s not the best developer and not the best product manager, but who sees the whole product. They have the helicopter view that comes from doing this for decades. They know what actually matters versus what’s theater.

A year ago this person was a unicorn hire. They existed but had to choose where to spend their time. AI makes the role achievable now. Not because AI replaces experience, but because it amplifies it.

Claude Code can write the checkout flow but it can’t tell you that adding one more step will kill conversion. It doesn’t have the scar tissue from shipping products that failed in interesting ways. Gut feeling still matters and gut feeling comes from experience.

The startups of the future will need fewer people and those people will be generalists. It’s a way to have a longer runway and a way to move faster than your competitors.

This is the time to be a generalist.