LensArray
Multi-Perspective Evaluation and Generation
In the critique room at SVA, your work went up on the wall and five people evaluated it simultaneously. Niemann saw the marks. Victore saw the guts. Bierut (through his published work) would have seen the problem-solving. They didn’t agree. That was the point. Where they converged, strong signal. Where they diverged, a decision for the maker. The critique room worked because no single perspective ran the verdict.
LensArray reconstructs that function for AI-assisted evaluation. “Is this good?” is a dozen questions disguised as one, among them: structural quality, narrative coherence, voice fidelity, visual hierarchy. Give that compound question to a model and the criteria blur together. You get a blended average. LensArray separates them.
Evaluation
Each layer of concern is staffed with lenses extracted from real practitioners. They run independently. The convergence map shows where they agree (signals to act on) and where they disagree (decisions the maker resolves).
One lens alone produces imitation: you’re just channeling someone. Multiple lenses running on the same work create disagreements, and those disagreements force decisions no single influence would have surfaced.
What a run looks like
Take a page on this site. The Lubalin lens evaluates it for typographic communication: is the type doing conceptual work, carrying meaning beyond the words, or is it just setting text in a typeface? It checks whether the typography participates in the idea or just labels it. The Vignelli lens evaluates the same page for structural restraint: is the hierarchy clean, is there unnecessary ornament, does every element carry weight?

On a given page, Lubalin might flag a section as typographically safe (missed opportunity for the type to carry meaning) while Vignelli approves the same section (clean, restrained, no excess). That disagreement is the decision. I read both verdicts and choose: does this section need the type to do more work, or is the restraint doing the right work?

The coordinator maps these convergences and conflicts across every lens in the array. Full agreement across lenses rarely needs attention. Disagreement is where I actually have to make a call.
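The shape of that run can be sketched in a few lines. This is a minimal illustration, not LensArray’s actual API: the `Verdict` type, the `converge` function, and the criterion names are all hypothetical, standing in for whatever the real coordinator does.

```python
# A minimal sketch of one evaluation run. Verdict, converge, and the
# criterion names below are illustrative assumptions, not LensArray's API.
from dataclasses import dataclass

@dataclass
class Verdict:
    lens: str          # which practitioner's framework produced this
    criterion: str     # the specific check that fired
    approves: bool     # did the section pass this lens?
    note: str          # the lens's reasoning, kept for the maker

def converge(verdicts: list[Verdict]) -> dict[str, list[Verdict]]:
    """Split a section's verdicts into agreement (signal) or conflict (a decision)."""
    approvals = [v for v in verdicts if v.approves]
    flags = [v for v in verdicts if not v.approves]
    if approvals and flags:
        return {"conflict": verdicts}   # the maker resolves this
    return {"agreement": verdicts}      # strong signal either way

# The Lubalin/Vignelli disagreement from the walkthrough above:
section_verdicts = [
    Verdict("lubalin", "type-carries-meaning", False,
            "typographically safe; the type only labels the idea"),
    Verdict("vignelli", "structural-restraint", True,
            "clean hierarchy, no excess ornament"),
]

result = converge(section_verdicts)
# result holds a "conflict" entry: two lenses, one section, opposite verdicts.
```

The key property is that each lens runs to completion before `converge` sees anything; no lens’s verdict can soften another’s.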
Generation
The same lenses that evaluate can constrain generation. Directions from evaluation become inputs to generative skills. The criteria are not applied after the output. They shape it during production.
Every generative output stops before writing to disk. The maker sees what was produced and approves or rejects it. The system handles analysis and generation. The maker handles creative direction.
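That gate reduces to one small invariant: nothing reaches disk without a decision from the maker. A sketch, assuming a hypothetical `approve_and_write` helper (not LensArray’s real interface):

```python
# A sketch of the stop-before-write gate. approve_and_write is a
# hypothetical name; the real system's interface is not shown here.
from pathlib import Path
from typing import Callable

def approve_and_write(draft: str, dest: Path,
                      approve: Callable[[str], bool]) -> bool:
    """Show the draft to the maker; write to disk only on approval."""
    if approve(draft):        # the maker's call, never the system's
        dest.write_text(draft)
        return True
    return False              # rejected drafts never touch disk
```

In interactive use, `approve` would be something like `lambda d: input(f"{d}\nWrite? [y/n] ") == "y"`; the point is that the write is structurally impossible without it returning true.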
Lens Extraction
The protocol for building a lens:
- Study the practitioner’s output. Read their books, look at their work, listen to their talks.
- Extract the framework. What do they consistently ask? What do they never tolerate?
- Codify as testable criteria. Turn extracted questions into specific, evaluable checks.
- Validate against their known work. The lens should confirm what the practitioner would confirm.
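The last two steps of the protocol are the ones that turn study into something executable: criteria become named, testable checks, and the assembled lens is run against work the practitioner is known to have stood behind. A sketch with hypothetical names (the `Lens` type, `validate`, and the sample criterion are all invented for illustration):

```python
# A sketch of steps 3 and 4: codify criteria as callable checks,
# then validate the lens against known-approved work. All names
# and the sample criterion are illustrative, not extracted rules.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Lens:
    practitioner: str
    # Each criterion is a named, evaluable check over a piece of work.
    criteria: dict[str, Callable[[str], bool]] = field(default_factory=dict)

    def evaluate(self, work: str) -> dict[str, bool]:
        return {name: check(work) for name, check in self.criteria.items()}

def validate(lens: Lens, known_good: list[str]) -> bool:
    """Step 4: the lens should confirm what the practitioner would confirm."""
    return all(all(lens.evaluate(w).values()) for w in known_good)

# An invented stand-in criterion, not a real Vignelli rule:
vignelli = Lens("vignelli", {
    "no-exclamation-ornament": lambda work: "!" not in work,
})
```

If `validate` fails on work the practitioner demonstrably approved, the extraction is wrong, not the work; the criteria go back to step 2.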
A good creative director does the same thing when building a team: studies how the best people on the floor actually think, documents it, and makes it available when they’re not in the room.
Joinery teaches lens extraction in applied practice — on your own real project, not a fictional example.