Attunement: Seeing What the System Needs
The first thing I ask about any system, whether it’s a student, an AI tool, or an engineering team: what does this system actually need from me? Not what do I need from it. What does it need from me.
That question flips the architecture. You stop building what you want and start building what the system can use.
In the classroom, attunement was the skill underneath every other skill. Before I could decompose a task, I had to understand how this specific student processed information. Before I could scaffold an assignment, I had to understand where this student’s independent capability ended and where they needed support. Before I could evaluate progress, I had to understand what progress looked like for this student, not for a generic fourth grader.
Attunement isn’t empathy. Empathy is feeling what someone else feels. Attunement is seeing what someone else needs. A teacher who empathizes with a struggling student might lower expectations. A teacher who’s attuned to the same student adjusts the input format, changes the scaffolding, modifies the assessment criteria. The bar stays the same. The path to it changes.
With AI tools, attunement means understanding the system’s actual capabilities and limitations before you start giving it work. A context window is a working memory ceiling. A token limit is a processing constraint. A tendency to hallucinate is a reliability boundary. These aren’t bugs. They’re the system’s processing profile. You accommodate them the same way you accommodate a student’s processing profile.
The voice protocol is an attunement tool. It exists because I studied how AI output drifts from human voice and built constraints that prevent the specific failure modes I observed. I didn’t start with a theory about what AI should do. I started by watching what it actually did when left unconstrained. Em dashes everywhere. Negation-affirmation in every paragraph. Epigrammatic closers on every section. The observation came first. The governance came second.
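Observation-first governance like this can be made concrete as lint-style checks. The sketch below is a hypothetical illustration, not the actual voice protocol: the rule names and patterns are assumptions, standing in for constraints derived from watching real output drift.

```python
import re

def check_voice(text: str) -> list[str]:
    """Flag drift patterns in a draft. Rules here are illustrative
    stand-ins for constraints built from observed failure modes."""
    flags = []
    if "\u2014" in text:  # em dashes scattered through the prose
        flags.append("em-dash")
    # Negation-affirmation: "This isn't X. It's Y." constructions.
    if re.search(r"\b(?:isn't|not)\b[^.]*\.\s*(?:It's|It is)\b", text):
        flags.append("negation-affirmation")
    return flags

print(check_voice("This isn't a bug. It's a feature \u2014 really."))
```

The point is the ordering: each rule encodes a failure mode that was observed first, then governed.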
Input inversion is the formal name for this. Instead of asking what output you want from the system, you ask what input the system needs to produce the right output. The student doesn’t need a better test. They need the instructions decomposed. The AI tool doesn’t need a better prompt. It needs the task structured so its processing constraints don’t corrupt the result.
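Input inversion can be sketched in code. This is a minimal illustration under stated assumptions: the four-characters-per-token estimate and the default limit are placeholders, not real model parameters. The shape of the idea is what matters: restructure the input to fit the system's working memory instead of demanding a better output.

```python
def decompose(task_text: str, context_limit_tokens: int = 512) -> list[str]:
    """Split a task into chunks that each fit within a working-memory
    ceiling. Token counts are roughly estimated at 4 chars per token."""
    est_tokens = lambda s: max(1, len(s) // 4)  # crude, illustrative estimate
    chunks, current = [], []
    for sentence in task_text.split(". "):
        current.append(sentence)
        # If adding this sentence overflows the ceiling, close the chunk.
        if est_tokens(". ".join(current)) > context_limit_tokens and len(current) > 1:
            current.pop()
            chunks.append(". ".join(current))
            current = [sentence]
    if current:
        chunks.append(". ".join(current))
    return chunks
```

A short task comes back as a single chunk; a long one is broken at sentence boundaries so no chunk exceeds the ceiling.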
Attunement is the hardest skill to teach because it requires you to stop thinking about yourself. What you want, what you’re building, what your deadline is. All of that drops away. For a moment you’re just watching. What does this system do when I give it this input? What changes when I give it different input? Where does it break? Where does it surprise me?
The answer to those questions is the beginning of every useful structure I’ve ever built.