Building Tools to Stay Human
The punk position: use AI every day AND build tools to protect the human.
I use AI every day. I’ve built over sixty skills, a half-dozen coordinator agents, voice pipelines, evaluation lenses, traversal systems. The entire corpus of my published writing was compiled through AI tools. My daily workflow depends on them.
I also built every one of those tools to protect the parts of the work that have to stay human.
That’s not a contradiction. It’s the whole position.
The voice protocol exists because language models flatten prose. Left to default, they produce competent, even, lifeless copy. Generic constructions, hedging phrases, a register that could belong to anyone. The protocol bans specific words, enforces specific sentence structures, and runs a verification pass against my actual speaking patterns. The tool doesn’t write like me. The tool gets checked against me.
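A verification pass like that can be mechanically simple. Here is a minimal sketch, assuming a banned-word list and hedge-phrase patterns; the specific words and patterns below are illustrative stand-ins, not the actual protocol:

```python
# Hypothetical sketch of a voice-protocol verification pass.
# BANNED_WORDS and HEDGE_PATTERNS are assumed examples, not the real lists.
import re

BANNED_WORDS = {"delve", "leverage", "robust", "seamless"}
HEDGE_PATTERNS = [r"\bit could be argued\b", r"\bin many ways\b"]

def verify_voice(text: str) -> list[str]:
    """Return a sorted list of violations found in a draft."""
    violations = []
    lowered = text.lower()
    for word in BANNED_WORDS:
        if re.search(rf"\b{word}\b", lowered):
            violations.append(f"banned word: {word}")
    for pattern in HEDGE_PATTERNS:
        if re.search(pattern, lowered):
            violations.append(f"hedge phrase: {pattern}")
    return sorted(violations)

print(verify_voice("We leverage a robust pipeline."))
```

The point of the sketch is the direction of the check: the draft is tested against the human's patterns after generation, rather than trusting the model to hold the register on its own.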
SavePoint Syntax exists because I lost months of thinking. The novel, the business architecture, ideation sessions that produced real breakthroughs. The sessions ended. The context evaporated. The model started fresh every time. SavePoint doesn’t make the model remember. It gives me a way to carry the decisions forward myself, and give the model enough orientation to pick up where the last session left off.
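The handoff described above can be pictured as a small structured record that the human maintains and the next session reads. This is a hypothetical sketch; the field names are illustrative and do not reflect the actual SavePoint Syntax:

```python
# Hypothetical sketch of a SavePoint-style session handoff.
# Field names and rendering are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class SavePoint:
    project: str
    decisions: list[str] = field(default_factory=list)      # choices already made
    open_questions: list[str] = field(default_factory=list) # still unresolved
    next_step: str = ""

    def as_preamble(self) -> str:
        """Render the savepoint as orientation text for a fresh session."""
        lines = [f"Project: {self.project}", "Decisions so far:"]
        lines += [f"- {d}" for d in self.decisions]
        lines.append("Open questions:")
        lines += [f"- {q}" for q in self.open_questions]
        lines.append(f"Next step: {self.next_step}")
        return "\n".join(lines)

sp = SavePoint(
    project="novel",
    decisions=["first-person POV", "three-act structure"],
    open_questions=["how chapter 3 ends"],
    next_step="draft chapter 4",
)
print(sp.as_preamble())
```

Nothing here makes the model remember; the record is carried forward by the human and pasted in as orientation, which is the whole design.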
FormWork exists because compound evaluation collapses. Ask a model to evaluate something against twelve criteria simultaneously, and it blends them into a smooth average. The rough edges disappear. The specific failures get lost. FormWork separates the lenses. One dimension at a time, one verdict at a time. The model does the analysis. I read the results and make the call.
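The separation can be sketched as one model call per lens, with the verdicts kept apart for a human to read. This is a minimal illustration, not FormWork itself; `ask_model` is a stand-in for whatever model call you use, and the lenses are assumed examples:

```python
# Hypothetical sketch of lens-by-lens evaluation: one dimension per call,
# one verdict per call, instead of one blended average.

def ask_model(prompt: str) -> str:
    # Stub standing in for a real language-model call.
    return "verdict for: " + prompt.splitlines()[0]

LENSES = ["hierarchy", "contrast", "spacing"]  # illustrative lenses

def evaluate(artifact: str, lenses: list[str]) -> dict[str, str]:
    """Run one evaluation per lens and keep each verdict separate."""
    verdicts = {}
    for lens in lenses:
        prompt = f"Evaluate against ONE criterion only: {lens}.\n\n{artifact}"
        verdicts[lens] = ask_model(prompt)  # separate call, separate verdict
    return verdicts

results = evaluate("landing page draft", LENSES)
for lens, verdict in results.items():
    print(f"{lens}: {verdict}")
```

Because each verdict comes from its own call, a sharp failure on one dimension can't be averaged away by passing grades on the other eleven.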
The compilation pipeline exists because I don’t write from blank pages. I talk, argue with myself, change direction mid-sentence, contradict something I said twenty minutes ago. The tools absorb that mess, find the structure inside it, and organize it for review. But the thinking that went into the mess is mine. The word choice embedded in the raw material is mine. The tools do compute. I do composition.
Every tool I’ve built follows the same logic: identify what the machine does well (processing, retrieval, pattern extraction, speed), identify what the human does well (intent, meaning, judgment, reading the room), and make sure the boundary between them is clear. The machine gets faster. The machine gets more capable. The tools I build get better. None of that changes what the boundary is.
I think this is punk infrastructure. Not because it’s anti-technology. Because it refuses the default trajectory. The default says: the machine gets smarter, so let it do more. Hand it the judgment calls. Let it write the copy. Let it evaluate the design. Let it make the creative decisions so you can focus on other things. That trajectory ends with the human as a supervisor of machines doing human work. I don’t want to supervise. I want to make things. The tools exist so I can keep making things faster and at greater depth, without handing over the part that matters.
The people building AI right now are mostly focused on capability. Can it do more? Can it do it better? Can it do it without the human? I’m focused on a different question: can I build the tools so the human stays in the loop at the right layer, doing the work only a human can do, while the machine handles everything else?
That’s not a bet against AI. It’s a bet on what the human is actually for.