Why I Codified Everything and Nearly Lost the Feel
Too much structure, and the body disappears. The overcorrection story.
The voice protocol has a rule: zero negation-affirmation. “Not X. Y.” is the number one AI tell. Every LLM defaults to it. The rule is correct. The pattern is instantly recognizable as machine-generated.
The problem is that I talk that way. “That wasn’t a design problem. It was a governance problem.” Natural speech. My pattern, not the model’s. But the protocol caught it every time and flagged it as an AI tell. The system I built to protect my voice was erasing my voice.
This is the overcorrection story.
I codified everything. Voice protocol with hard rules: zero em dashes, zero banned words, locked sentence patterns, a register calibrated against how I actually talk in conversation. Evaluation lenses extracted from specific practitioners. A verification checklist with thirteen items. A no-hallucination policy that prevents the system from inventing any detail. Every piece of governance was correct at the rule level.
Together they produced copy that was perfectly governed and sounded like nobody. The discipline was visible. The person was missing. Each rule individually protected fidelity. The stack of rules together compressed the natural variation that makes writing sound human.
I noticed it when I read three essays in sequence and they all had the same temperature. No hot spots, no rough edges, no sentences where the thought arrived somewhere different from where it started. Every paragraph resolved cleanly. Every transition bridged. The voice was compliant and flat. That uniform smoothness is itself an AI tell, and I’d built the infrastructure that produced it.
The fix was not looser rules. The rules are right. Em dashes are a real AI tell. Epigrammatic closers are a real problem. Banned words are banned for real reasons. Removing the rules would reintroduce the failures they catch.
The fix was human override. Governance that informs while the human decides. The negation-affirmation rule still flags the pattern. But I look at each flag and decide: is this my pattern or the model’s? If it’s mine, it stays. If the model generated it, it gets rewritten. The rule went from enforcement to advisory. The human is the final authority.
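The enforcement-to-advisory shift can be sketched in code. This is a hypothetical illustration, not the actual protocol: the `Flag` structure, the `check` and `review` functions, and the regex for negation-affirmation are all my own invention, standing in for whatever the real system does. The point the sketch makes is structural: the rule only ever surfaces findings, and the keep-or-rewrite decision lives with the human.

```python
import re
from dataclasses import dataclass

# Hypothetical sketch of advisory governance: the rule flags, the human decides.
# Names and the detection regex are illustrative, not the author's actual protocol.

@dataclass
class Flag:
    rule: str        # which rule fired
    excerpt: str     # the text that triggered it
    advisory: bool   # True: a recommendation, never an auto-rewrite

# Crude stand-in for a negation-affirmation detector ("wasn't X. It was Y.")
NEGATION_AFFIRMATION = re.compile(
    r"\b(?:not|wasn't|isn't)\b[^.!?]*[.!?]\s+It (?:was|is)",
    re.IGNORECASE,
)

def check(text: str) -> list[Flag]:
    """Surface every match as an advisory flag; make no edits to the text."""
    return [
        Flag(rule="negation-affirmation", excerpt=m.group(0), advisory=True)
        for m in NEGATION_AFFIRMATION.finditer(text)
    ]

def review(text: str, keep: set[str]) -> list[Flag]:
    """Human override: flags the author claims as their own pattern are
    dismissed; whatever remains is the model's and needs rewriting."""
    return [f for f in check(text) if f.excerpt not in keep]
```

The design choice is in what `check` does not do: it returns findings instead of rewriting, so the same rule that would have erased "That wasn't a design problem. It was a governance problem." now just presents it for a decision.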
This changed how I think about all the governance I build. The Formwork Protocol was always designed this way: where personas disagree, the maker chooses. The voice protocol had gotten ahead of that principle. It was enforcing without presenting the choice. The system was doing the deciding instead of surfacing the decision.
I think about the classroom analogy. A behavior management system that’s too rigid catches the student who was protecting a friend alongside the student who was genuinely disrupting. Both get flagged. Both get the same consequence. The system can’t tell the difference because the system doesn’t read context. The teacher reads context. The system informs. The teacher decides.
The governance I trust now follows one principle: make the standard visible, catch deviation, present the findings, let the human decide. The protocol still has zero em dashes. It still catches negation-affirmation. But the human override is structural, not an exception. Every flag is a recommendation, not a verdict. I nearly lost the feel by building governance that didn’t leave room for the person it was supposed to protect.