Fidelity vs. Quality
The thing that looks good but doesn’t match what it was supposed to be.
The script fonts for Altrueism were beautiful. Flowing, warm, expressive. The designer who chose them was good at their job. The fonts were well set, well spaced, well paired. By any standard design evaluation, the typography was high-quality work.
It was also wrong.
The client’s register was quiet. Understated. The whole point of Altrueism was a kind of careful generosity, not performance. The script fonts performed an emotion the client didn’t have. They announced warmth instead of holding it. A stranger landing on that site would meet a personality that didn’t exist.
This is the gap I keep coming back to. There are two questions you can ask about any piece of work. The first is: is this good? The second is: is this what it was supposed to be?
Those sound like the same question. They are not.
Quality measures the work against craft standards. Is the typography well set? Is the code clean? Does the layout hold at different breakpoints? These are real questions and they matter. But they all point inward, at the work itself. They ask whether the thing is well made.
Fidelity measures the work against intent. It asks whether the thing that got built is the thing that was meant. Whether what survived the process of making still carries what started it. The gap between what was meant and what arrived.
You can have high quality and low fidelity at the same time. The script fonts proved it. Beautiful work, wrong direction. Every craft metric said yes. The only metric that mattered said no.
I watched this happen at a larger scale with Encore over twelve years. The original architecture was clear. The product had a specific shape, a specific logic to how its pieces connected. Then other developers came in. They were good. They made reasonable decisions. Each change, taken individually, was defensible. Well-coded, well-tested, shipped clean.
But each decision was made locally, without holding the original intent. A feature would start with a clear purpose, pass through three teams and two review cycles, and arrive in production stripped of the thing that made it worth building. Over twelve years, the accumulation of individually good decisions produced a product that had drifted from the one that was designed. High quality throughout. Low fidelity to what it was supposed to be.
The same thing happens in a classroom. A lesson plan can check every pedagogical box. Differentiated instruction, formative assessment, scaffolded activities. All good practice. But if it doesn’t match what this specific student actually needs right now, the quality of the plan is irrelevant. The plan is faithful to a method. It’s not faithful to the kid.
This distinction matters because almost every evaluation system I’ve encountered measures quality. Code review checks whether the code is clean. Design review checks whether the design is polished. Editorial review checks whether the writing is clear. These are useful gates. But none of them catch drift. None of them ask whether the thing being reviewed still matches the thing that was intended.
You can pass every check and still be wrong. You can ship clean code that solves the wrong problem. You can deliver a beautiful brand that belongs to someone else. You can build a curriculum that serves the framework instead of the student.
Fidelity is harder to measure than quality because it requires you to hold the original intent long enough to compare against it. Intent degrades. It gets lost in handoffs, diluted in committee review, forgotten across team changes. By the time someone asks “does this match what we meant,” the answer to “what did we mean” is already blurry.
So I hold it. That’s a large part of what I actually do. I keep the thread between what something was supposed to be and what it’s becoming, and I flag the moment those two start to separate. The work is catching drift before it compounds.