Computer scientists think about model architecture. Business strategists think about risk. Design practitioners think about the person using the system and the quality of what it produces.

That is governance.

I’ve been decomposing complex systems for twenty-five years. Breaking a recruiting platform into modular components. Breaking a brand identity into plates that register independently. Breaking a novel into beat specs across three simultaneous dimensions. Breaking classroom instruction into scaffolded steps that match a student’s processing profile. The material changes. The decomposition is the same operation: take the whole, identify the layers, spec each one independently, make sure they cohere when they stack.

I’ve been building evaluation frameworks for just as long. At SVA, the critique room was the first formwork: multiple perspectives, specific criteria, convergence where they agree, decision points where they disagree. The Formwork Protocol codified that room into a system. Extract the evaluative instinct from practitioners whose judgment you trust. Codify it as testable criteria. Run the criteria against the work. The maker resolves the tensions between lenses. The accumulated resolutions are the work.
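To make the shape of that concrete, here is a minimal sketch of a formwork-style pass, assuming a toy `Work` shape and two invented lenses. None of these names come from the actual protocol.

```typescript
// Hypothetical sketch of a formwork-style critique pass. The Work shape,
// lens names, and criteria are all invented for illustration.

interface Work {
  headline: string;
  body: string;
}

interface Verdict {
  lens: string;
  criterion: string;
  pass: boolean;
}

// A lens codifies one trusted practitioner's evaluative instinct
// as criteria that can be run mechanically against the work.
type Lens = (work: Work) => Verdict[];

const voiceLens: Lens = (w) => [
  {
    lens: "voice",
    criterion: "no throat-clearing opener",
    pass: !/^(Basically|Essentially|In essence)\b/.test(w.body),
  },
];

const structureLens: Lens = (w) => [
  {
    lens: "structure",
    criterion: "headline under 12 words",
    pass: w.headline.split(/\s+/).length <= 12,
  },
];

// Run every lens independently, then split the verdicts: passes are
// convergence, failures are decision points. The tool never resolves
// a tension between lenses; the maker does.
function critique(work: Work, lenses: Lens[]) {
  const verdicts = lenses.flatMap((lens) => lens(work));
  return {
    convergence: verdicts.filter((v) => v.pass),
    decisionPoints: verdicts.filter((v) => !v.pass),
  };
}
```

The important design choice sits in the last comment: the code surfaces disagreement, it never settles it. That is the critique room, mechanized only up to the point where judgment begins.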

I’ve been maintaining fidelity at scale. Twelve years on the Encore platform. Watching a contributor rename a CSS class, then finding the convention unrecognizable six months later. Building the CONVENTIONS.md, the CLAUDE.md, the voice protocol, the Savepoint Protocol. Each one exists because fidelity at scale is a structural problem, not a willpower problem.
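That failure mode is checkable by machine. A hedged sketch of a CI guard against convention drift, assuming selectors carry a project prefix; the `enc-` prefix and the file path are invented, not Encore’s actual convention.

```typescript
// Hypothetical CI check: flag CSS class selectors that drift from a
// naming convention. The enc- prefix and the file path are illustrative.
import { readFileSync } from "node:fs";

const CONVENTION = /^\.enc-[a-z]+(-[a-z]+)*$/; // e.g. .enc-card-header

function driftingSelectors(cssPath: string): string[] {
  const css = readFileSync(cssPath, "utf8");
  // Rough selector extraction; good enough for a drift alarm.
  const selectors = css.match(/\.[A-Za-z][\w-]*/g) ?? [];
  return [...new Set(selectors)].filter((s) => !CONVENTION.test(s));
}

// Run on every commit: a nonzero count fails the build, so the renamed
// class is caught the day it happens, not six months later.
const drift = driftingSelectors("styles/main.css");
if (drift.length > 0) {
  console.error("Convention drift:", drift.join(", "));
  process.exit(1);
}
```

The check is the structure. Nobody has to remember the convention, because forgetting it breaks the build.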

And I’ve been governing quality through constraint. A locked palette that rejects what doesn’t belong. A type system that limits choices so each choice carries more weight. A voice protocol that catches AI writing patterns before they ship. Constraint as creative infrastructure, not creative limitation.
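In code, the locked palette is a closed type. A small sketch with invented color names and values:

```typescript
// A locked palette expressed as a closed union type. The names and hex
// values are invented; the point is that the compiler rejects anything
// outside the set, the way the palette rejects what doesn't belong.
const PALETTE = {
  ink: "#1a1a1a",
  bone: "#f4f1ea",
  signal: "#d94f2b",
} as const;

type PaletteColor = keyof typeof PALETTE; // "ink" | "bone" | "signal"

function fill(color: PaletteColor): string {
  return PALETTE[color];
}

fill("signal");      // fine
// fill("#ff00ff");  // compile error: not in the palette
```

Three colors instead of sixteen million, and each one now means something. That is the constraint carrying weight.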

Now apply all of that to AI. The IEP (Individualized Education Program) I wrote in a special education classroom is the same document as a CLAUDE.md. Both read the system’s processing realities and structure the environment so the system can succeed at the task. The Savepoint Protocol preserves cognitive turning points the same way a well-written IEP preserves the student’s progress through a learning objective. The anti-slop infrastructure (voice protocol, verification checklist, no-hallucination policy) governs quality the same way a brand system governs visual identity: by making the standard explicit and catching deviation before it ships.
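One layer of that infrastructure fits in a few lines. A sketch of a voice-protocol check, with a placeholder pattern list standing in for the real one:

```typescript
// Hypothetical voice-protocol check: scan a draft for AI writing tells
// before it ships. The pattern list is a placeholder, not the actual protocol.
const TELLS: { label: string; pattern: RegExp }[] = [
  { label: "em-dash", pattern: /\u2014/ },
  { label: "stock AI vocabulary", pattern: /\b(delve|tapestry|testament to)\b/i },
  { label: "hedge stack", pattern: /\bit(?:'s| is) (?:important|worth) (?:to note|noting)\b/i },
];

function voiceViolations(draft: string): string[] {
  return TELLS.filter(({ pattern }) => pattern.test(draft)).map(({ label }) => label);
}

// Deviation gets caught at review time, not after publication.
const issues = voiceViolations("It is worth noting that we delve into governance.");
if (issues.length > 0) console.warn("Voice protocol flags:", issues);
```

The brand-system parallel holds at the code level too: the standard is written down, the deviation is mechanical to detect, and the catch happens before anything ships.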

The AI governance conversation is dominated by people who think about the technology (what the model can do) and people who think about the risk (what the model might do). Almost nobody in the room is asking the question that design practitioners ask first: what does the person on the other end actually need from this, and is the system designed to deliver it?

That question produced the entire methodology. Savepoint came from asking what the model needs to find its way back in. Formwork came from asking what the model needs to evaluate one dimension without contaminating another. The skill architecture came from asking what happens when you give a system twelve goals at once versus one at a time. Every piece of governance infrastructure I’ve built started from the same place every design project starts: reading what the system receives, not what the system can do.

Design practitioners have been doing this work for decades. The material is new. The operation is familiar. The people who should be leading AI governance are the people who’ve been governing quality, maintaining fidelity, and designing for the receiver their entire careers.