AI output drifts toward the generic without constraint. The default register is competent, clean, and belongs to no one. Every AI-assisted portfolio sounds the same, every blog post follows the same cadence, every project description opens the same way. The model produces what it was trained on: a statistical average of published writing. That average is the drift vector.

The Formwork Protocol prevents this drift the same way it prevents drift in any creative evaluation: by testing output against specific, codified criteria extracted from real practitioners. The lenses are the governance layer. Design history provides the rigor.


Each lens is an evaluation framework extracted from a practitioner’s body of work. Extracting Vignelli means studying what he consistently demanded: restraint, systematic limitation, a small number of typefaces used with absolute discipline. The extraction asks: what questions does this person always ask? What do they never tolerate? What would they see first in a piece of work? Those questions become testable criteria. “Does every element earn its place?” is Rams. “Is the maker visible in the craft?” is Draplin. “Does the typography serve the content?” is Bierut. They sound similar but test differently.

The extraction protocol is four steps. Study the output: books, talks, projects, everything available. Extract the framework: the invisible discipline underneath the visible decisions. Codify as testable criteria: specific, evaluable checks that produce clear verdicts. Validate against known work: run the criteria against work the practitioner produced or praised to confirm the lens catches what it should.
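The codified output of that protocol can be sketched as a small data structure: a lens is a practitioner plus a list of testable criteria, each one a question paired with a check that returns a clear verdict. This is a minimal sketch, not the site's actual code; the class names and the crude `few_emphasis_markers` proxy are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Criterion:
    question: str                 # the question the practitioner always asks
    check: Callable[[str], bool]  # testable: produces a clear verdict

@dataclass
class Lens:
    practitioner: str
    criteria: list[Criterion] = field(default_factory=list)

    def evaluate(self, text: str) -> dict[str, bool]:
        # Run every criterion against the text; question -> verdict.
        return {c.question: c.check(text) for c in self.criteria}

# A deliberately crude proxy for a restraint criterion, just to show
# the shape: real checks would be far richer.
def few_emphasis_markers(text: str) -> bool:
    return text.count("!") + text.count("**") <= 1

vignelli = Lens("Vignelli", [
    Criterion("Does every typographic choice show discipline?", few_emphasis_markers),
])
```

Step four of the protocol, validation, is then just running `evaluate` against work the practitioner produced or praised and confirming the verdicts come back as expected.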


When AI generates output, the lenses run against it. The Vignelli-derived restraint lens tests whether the typography is disciplined or merely present. The Victore-derived identity lens tests whether the maker is visible in the craft or hidden behind a professional template. The Bierut lens tests whether the form is doing intellectual work or just occupying space. The Millman lens tests whether there’s a real human in the text or a performed version of one.

Where lenses agree, the output is solid on that dimension. Where they disagree, there's a decision to make. Say the restraint lens scores strong and the identity lens scores weak: the page is disciplined but the person is missing. That tension is the governance in action. It surfaces the exact point where the AI's default register (competent but generic) conflicts with the maker's identity (specific and opinionated). The maker resolves it.
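Surfacing those tensions is mechanical once each lens reports verdicts on shared dimensions: collect every dimension, and flag the ones where lenses disagree. A hypothetical sketch, with invented lens and dimension names:

```python
def surface_tensions(verdicts: dict[str, dict[str, bool]]) -> list[str]:
    """verdicts maps lens name -> {dimension: pass/fail}.
    Returns the dimensions on which lenses disagree."""
    dimensions: set[str] = set()
    for checks in verdicts.values():
        dimensions.update(checks)
    tensions = []
    for dim in sorted(dimensions):
        results = {name: checks[dim]
                   for name, checks in verdicts.items() if dim in checks}
        if len(set(results.values())) > 1:  # lenses disagree here
            tensions.append(dim)
    return tensions

verdicts = {
    "restraint (Vignelli)": {"disciplined": True},
    "identity (Victore)":   {"disciplined": True, "maker_visible": False},
    "humanity (Millman)":   {"maker_visible": True},
}
# "disciplined" is unanimous; "maker_visible" splits -- the page is
# disciplined but the person is missing, so the maker decides.
```

The function doesn't resolve anything; it only points at the dimension where the maker has to.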

Without the lenses, AI governance is a set of abstract instructions: “write in my voice,” “be more specific,” “sound authentic.” Those instructions don’t produce consistent results because they’re not testable. The lenses produce testable results because each one asks a specific question derived from a studied practitioner’s actual standards. “Does this pass Vignelli’s restraint check?” has a clear answer. “Is this authentic enough?” does not.
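The contrast can be made concrete: an abstract instruction is just a string with no evaluation path, while a lens criterion is a function that returns a verdict. The check below is an illustrative stand-in, not an actual Vignelli criterion:

```python
# An abstract instruction: nothing can evaluate this.
abstract_instruction = "sound authentic"

# A lens criterion: a mechanical check with a clear answer.
# (A deliberately crude stand-in for a real restraint criterion.)
def passes_restraint_check(text: str, max_exclamations: int = 0) -> bool:
    return text.count("!") <= max_exclamations

verdict = passes_restraint_check("Less, but better.")
```

"Does this pass the restraint check?" returns `True` or `False`; "is this authentic enough?" returns nothing at all.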


The lenses are where design history becomes operational in AI governance. The practitioners whose work I studied at SVA and in the years since (Victore, Vignelli, Bierut, Rand, Rams, Lubalin, Müller-Brockmann) become the evaluation infrastructure that keeps AI output from drifting into the average. Their standards, codified and automated, run against every page this site publishes.

Design history isn’t decoration for the methodology. It’s the load-bearing structure. Remove the lenses and you have AI output with no evaluative rigor. The governance collapses to “does this sound OK to me right now?” which is how drift starts.