Read ten AI-assisted “About” pages and you’ll notice they sound identical. The same cadence, the same transitions, the same way of building to a point. Different words, same voice. The person disappears and the tool’s default register takes over.

This is not a prompting problem. You can write detailed style guides, paste voice examples, specify tone and vocabulary. The output will be better than the default, but it still won’t sound like the person. It will sound like an AI doing an impression of a style guide.

The reason is structural, and once I understood it, I could build a solution.

Why it happens

Large language models learn to write from published text. Blog posts, articles, marketing copy, documentation, books. Published writing is a performance. The way someone writes a LinkedIn post is not how they think. The way someone writes a case study is not how they explain the same project to a friend. Published text has been edited, polished, shaped for an audience. The rough edges, the false starts, the way someone actually arrives at an idea: all of that is gone before publication.

So when you ask an AI to write “in your voice” and give it examples of your published writing, you’re giving it your performance, not your voice. The AI learns to imitate the performance, and since most people’s published performances converge toward similar conventions (clean transitions, parallel structure, building to a thesis), all AI-assisted writing converges toward the same register.

The real voice is somewhere else. It’s in the conversations. The working messages. The unguarded explanations where someone is thinking out loud rather than presenting a finished thought.

1,643 conversations

Between January 2023 and early 2026, I had 1,643 conversations with ChatGPT. Then I moved to Claude Code and accumulated 43+ session transcripts. Add in Claude.ai conversations, Gemini sessions, and the working notes that piled up across all of them. Indexed together, that’s over 52,000 documents.

Most of that material is me talking. Explaining problems, working through decisions, arguing with myself about naming, reacting to what the tool produced, directing implementation. The way I start sentences, the vocabulary I reach for when I’m not performing, how I describe problems (feeling first or situation first), how I transition between ideas (connectors or hard breaks), what makes me funny and what makes me frustrated.

That conversational material is the actual voice. It’s the raw signal that published writing filters out.

The pipeline

I built three skills that work together.

Voice sampling

The voice-sample skill reads my conversation transcripts and captures observations about how I actually talk. It works as a tuning fork, not a style guide. It tracks sentence rhythm (short bursts or long flowing thoughts?), opening moves (“so basically…”, “the thing is…”, “yeah let’s…”), natural vocabulary, transitions, humor, and qualifying phrases. Each session that does copy work adds observations to a persistent file. Over time it builds a detailed fingerprint.
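The mechanical half of that sampling is easy to picture in code. This is a hypothetical sketch, not the actual skill: the opener phrases come from the article, but the function names, structure, and thresholds are assumptions.

```python
import re
from collections import Counter

# Opening moves named in the article; the list would grow as sessions add
# observations to the persistent voice file.
OPENERS = ("so basically", "the thing is", "yeah let's")

def sample_voice(messages):
    """Tally opening moves and sentence rhythm from the human side of a
    transcript. messages: list of strings."""
    opener_counts = Counter()
    sentence_lengths = []
    for msg in messages:
        lowered = msg.strip().lower()
        for phrase in OPENERS:
            if lowered.startswith(phrase):
                opener_counts[phrase] += 1
        # Crude sentence split; good enough for rhythm statistics.
        for sent in re.split(r"[.!?]+", msg):
            words = sent.split()
            if words:
                sentence_lengths.append(len(words))
    total = len(sentence_lengths)
    short = sum(1 for n in sentence_lengths if n <= 6)
    return {
        "openers": dict(opener_counts),
        "avg_sentence_words": round(sum(sentence_lengths) / total, 1) if total else 0,
        "short_sentence_ratio": round(short / total, 2) if total else 0,
    }
```

Run over thousands of messages, counts like these become the fingerprint: which openers dominate, whether the rhythm runs to short bursts or long chains.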

The key constraint: voice sampling reads conversations, not published pages. The published pages are the output. The conversations are the source. Those are different things, and conflating them is how you get AI voice that sounds polished but empty.

Governance constraint

The voice sample feeds into the copywriting protocol as a hard constraint. When the system writes anything, the protocol specifies: vary sentence length (short declaratives mixed with longer causal chains), use material vocabulary (holds, breaks, drifts, scaffold, fidelity, load, joints, terrain, coherence, lock), open with a real moment rather than a concept, show action and consequence rather than description and explanation.

Those aren’t vague guidelines. They’re enforceable rules. “Zero em dashes” is a rule. “No fortune-cookie closers” (sentences like “The structure is the signal”) is a rule. “Every negation-affirmation pattern (“Not X. Y.”) has to be earned” is a rule, because that’s the single most common AI writing pattern and most people don’t notice it until you point it out. The test: does the negation correct a genuine misunderstanding the reader would actually have? If you’re using it for emphasis or contrast, rewrite it.

The rules came from studying what AI writing consistently gets wrong. Em dash overuse, triple-beat negation (“Not the X. Not the Y. The Z.”), ungrounded metaphors, academic register where spoken English works, staccato short sentences stacked for false drama, self-aware closers that announce what the work “proves.” Each rule exists because I caught the pattern in AI output and codified the prohibition.
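The negation-affirmation pattern is mechanical enough to flag automatically and leave the earned/unearned judgment to a human. A rough heuristic sketch, assuming a simple regex pass over the draft (the actual protocol's implementation is not shown in the article):

```python
import re

# Flags candidate "Not X. Y." constructions: a sentence starting with
# "Not", followed by a short affirmation sentence. The length caps and
# punctuation handling are assumptions tuned for short prose sentences.
NEGATION_PATTERN = re.compile(
    r"\bNot\s+[^.!?]{1,60}[.!?]\s+[A-Z][^.!?]{1,60}[.!?]"
)

def flag_negation_affirmation(text):
    """Return each matched span so an editor can judge whether it is earned."""
    return [m.group(0) for m in NEGATION_PATTERN.finditer(text)]
```

The flagger doesn't decide anything; it just surfaces every instance so the "genuine misunderstanding" test can be applied deliberately instead of missed.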

The 12-item checklist

Before anything publishes, a copy-verify skill runs a 12-item pass/fail check:

  1. Does the opener make a stranger feel the problem, not just know about it?
  2. Does it pass the Grip Test at Grip or Lock level? (A separate diagnostic: can a stranger who hasn’t lived through this problem still feel why it matters?)
  3. Is every specific detail traceable to a verified source?
  4. Zero em dashes?
  5. Zero banned words? (paradigm, leverage, passionate, innovative, synergy, empower, journey, transformative)
  6. No fortune-cookie closer?
  7. Every “Not X. Y.” pattern earned? (Must correct a genuine misunderstanding, not just add emphasis.)
  8. No ungrounded metaphors?
  9. No personification of tools or methods? (“does the thinking”, “holds itself”)
  10. Does the copy feel like a room, not a pitch? Would it feel wrong on someone else’s site?
  11. Does the frontmatter match the body?
  12. Does the index entry need updating?

Items 1 and 2 check whether the writing lands with a reader. Items 3 through 9 catch specific AI writing patterns. Item 10 checks identity coherence. Items 11 and 12 catch structural drift.

The checklist is mechanical. It doesn’t require taste or judgment. It requires checking specific, verifiable conditions. That’s by design: governance works when the checks are concrete enough that you can’t rationalize your way past them.
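The most concrete items reduce to string checks. A minimal sketch of items 4 and 5, assuming the banned-word list from above; the function names and structure are hypothetical, not the actual copy-verify skill:

```python
import re

# Banned words from checklist item 5.
BANNED_WORDS = {"paradigm", "leverage", "passionate", "innovative",
                "synergy", "empower", "journey", "transformative"}

def no_em_dashes(text):
    """Item 4: zero em dashes (U+2014)."""
    return "\u2014" not in text

def no_banned_words(text):
    """Item 5: zero banned words, matched whole-word and case-insensitively."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return not (words & BANNED_WORDS)

def verify(text):
    """Run the mechanical checks; return the names of any that fail."""
    checks = [("zero em dashes", no_em_dashes),
              ("zero banned words", no_banned_words)]
    return [name for name, check in checks if not check(text)]
```

An empty list means the copy passes these items; a nonempty list blocks publication. The pass/fail shape is the point: there is no score to argue with.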

The Grip Test

One skill deserves its own explanation because it addresses the hardest problem in writing about your own work: you already understand it, so you can’t tell whether a stranger would.

I named it after my friend Ben. I showed him an early draft of the Savepoint Protocol page and asked what he thought. He said he could get “a fingernail hold” on it. He could tell it was something, but he couldn’t feel why it mattered. That’s the standard.

The Grip Test has three ratings:

Fingernail: The reader can see it’s something but can’t feel why it matters. They’d leave the page without a clear reason to care.

Grip: The reader feels the problem even if they haven’t lived it. They understand the stakes through the writing alone.

Lock: The reader recognizes their own experience in what they’re reading. The writing connects to something they already know but maybe haven’t articulated.

Every page should land at Grip or Lock for a stranger. If a page is at Fingernail, the opening isn’t working. The test also identifies the “in”: what shared human experience does this connect to? Losing something you can’t get back? Building something nobody asked for? The gap between what you know and what you can prove?

What this solves

The pipeline doesn’t produce my voice automatically. Nothing produces voice automatically. What it does is constrain the space of possible output so that the things AI writing typically gets wrong are caught before they ship. The voice sample provides the tuning fork. The governance rules eliminate the most common failure modes. The checklist catches what slips through.

The result is that every published page on my site has been through this pipeline, and the pages sound like someone specific. Someone with real opinions, real history, and a specific way of getting to the point.

I don’t think voice governance is optional for anyone using AI to write. If you’re not governing voice, you’re publishing in the tool’s default register and calling it yours. The pipeline I built is specific to me, but the approach transfers: find your real voice in your unguarded conversations, extract the patterns, codify the constraints, and verify mechanically before publishing.