What I Built This Year
I spent the last year building a way of working with AI that solves problems I kept running into and nobody seemed to be addressing. I needed a real project to develop and test the methodology, so I built this site as a working laboratory. The governance system runs on its own output — the site you’re reading is both the testbed and the proof.
The system governs how AI assists my creative and engineering work. It extracts evaluative frameworks from real practitioners’ bodies of work and codifies them as testable diagnostic lenses. It derives voice governance from how I actually talk in conversations, not from published writing. It runs these diagnostics in parallel, surfaces where they agree and where they disagree, and leaves the decisions to me. Every specific claim traces back to verified source material — the system doesn’t invent details about the work.
I looked for prior art. I couldn’t find anyone else doing this specific combination: persona extraction into testable lenses, voice governance from conversational patterns, and a coordinator architecture that runs these together and surfaces the tensions between them. Individual pieces have analogues. The combination doesn’t.
I built this because I needed it. I’m writing it down because I looked for it and it wasn’t there.
These four posts explain how each piece works and why I built it.
Persona extraction is not “act as”
Everyone using AI for creative evaluation has tried some version of “act as Vignelli” or “evaluate this like Dieter Rams would.” The result is caricature. The AI produces a surface impression based on what’s most commonly written about that person, not the evaluative instinct underneath their visible decisions.
I built something different. I studied the practitioner’s body of work, extracted the framework (what questions does this person consistently ask? what do they never tolerate?), and codified it as testable criteria that produce clear verdicts against real work. Each lens validates against work the original practitioner produced or praised. That’s the difference between “act as” and an actual diagnostic tool.
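The shape of such a lens can be sketched in a few lines. This is a minimal illustration, not the actual implementation: the practitioner name, questions, and checks below are invented stand-ins, and real criteria would be far richer than string predicates.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    question: str                  # a question the practitioner consistently asks
    check: Callable[[str], bool]   # a testable predicate against the work

@dataclass
class Lens:
    practitioner: str
    criteria: list[Criterion]

    def evaluate(self, work: str) -> dict[str, bool]:
        # A clear verdict per criterion, not a vague impression.
        return {c.question: c.check(work) for c in self.criteria}

# Toy lens with two mechanical checks (purely illustrative).
grid_lens = Lens(
    practitioner="example-structural",
    criteria=[
        Criterion("Is there one dominant typeface?",
                  lambda w: w.count("font:") <= 1),
        Criterion("Does every claim name its source?",
                  lambda w: "source:" in w),
    ],
)

verdicts = grid_lens.evaluate("font: Helvetica\nsource: interview transcript")
print(verdicts)
```

The point of the structure is that each verdict is checkable against work the practitioner actually produced or praised, which is what separates a lens from an “act as” prompt.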
Read the full post: Persona Extraction
AI copy sounds the same because the training data is performing
Every AI tool writes in the same voice because it learned from published writing, and published writing is a performance. The way someone writes a blog post or a LinkedIn update is not how they actually think and talk. Their real voice is in their conversations, their working notes, their unguarded explanations.
I built a pipeline that starts with 1,643 ChatGPT conversations, Claude Code session transcripts, and exports from every AI tool I’ve used over three years. From that raw material, a voice-sampling skill extracts how I actually talk: my sentence rhythm, my opening moves, my vocabulary, my humor. That becomes a governance constraint on every piece of copy the system produces. Then a 12-item verification checklist catches violations before anything publishes.
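The verification step at the end of that pipeline can be sketched as a table of named rules, each flagging a violation before anything publishes. The three rules below are invented examples, not the actual 12-item checklist:

```python
# Illustrative stand-ins for checklist items; each maps a name to a
# predicate that returns True when the copy passes.
CHECKLIST = {
    "no_hedging_filler": lambda text: "it is worth noting" not in text.lower(),
    "short_opener": lambda text: len(text.split(".")[0].split()) <= 25,
    "no_em_dash": lambda text: "\u2014" not in text,
}

def verify(text: str) -> list[str]:
    """Return the names of every checklist item the copy violates."""
    return [name for name, passes in CHECKLIST.items() if not passes(text)]

violations = verify("It is worth noting \u2014 this sentence performs rather than talks.")
print(violations)
```

A checklist like this is cheap to run on every draft, which is what makes it a governance constraint rather than an occasional review.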
Read the full post: Voice Governance
The architecture that ties it together
The individual skills are useful on their own. The architecture is what makes the combination work. Coordinators dispatch independent diagnostics in parallel — structural lenses evaluating craft, narrative lenses evaluating identity, voice diagnostics evaluating whether the writing sounds like me. A convergence engine reads where they agree and where they disagree. The agreements are signals to act on. The disagreements are decisions I make.
The result is a critique room I can reconstruct on demand. Multiple perspectives, specific verdicts, the tensions surfaced and the choices left to the maker.
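The coordinator pattern itself is simple to sketch: dispatch independent lenses in parallel, then split what all of them flag (signals to act on) from what only some flag (decisions left to the maker). The lens functions below are toy stand-ins, assumed for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy diagnostics; real lenses would run full evaluations.
def structural_lens(work): return {"weak-opening"} if work.startswith("So,") else set()
def narrative_lens(work):  return {"weak-opening", "no-stakes"} if "because" not in work else set()
def voice_lens(work):      return {"weak-opening"} if work.startswith("So,") else set()

def coordinate(work, lenses):
    # Dispatch every diagnostic in parallel.
    with ThreadPoolExecutor() as pool:
        flagged = list(pool.map(lambda lens: lens(work), lenses))
    # Convergence: unanimous issues vs. contested ones.
    all_issues = set().union(*flagged)
    agreements = {i for i in all_issues if all(i in f for f in flagged)}
    disagreements = all_issues - agreements  # left to the maker
    return agreements, disagreements

agree, disagree = coordinate("So, here is the piece.",
                             [structural_lens, narrative_lens, voice_lens])
print(agree, disagree)
```

The design choice worth noting is that disagreements are returned, not resolved: the convergence step surfaces the tension and stops.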
Read the full post: The Integrated System
Where it started
Before any of this existed, I was losing my thinking across sessions. 1,643 ChatGPT conversations in two years, and every time the context window closed, the reasoning and connections between ideas evaporated. A survival reflex (typing “give me a savepoint” before sessions ended) became a protocol, the protocol led to better tools, and the better tools made everything else possible.
Read the full post: I Needed a Better Tool
The methodology behind all of this is documented on the Formwork Protocol page on this site. The protocol describes the approach — persona extraction, multi-plate evaluation, convergence. The implementation is mine. The protocol transfers to anyone. The implementation is specific to what my work needs.
I built this because I needed it. I’m writing it down because I looked for it and it wasn’t there.