Everyone’s talking about what AI can do. Generate images. Write copy. Produce code. Compose music. The capability list gets longer every month. The conversation centers on the jukebox: look how many records it has. Look how fast it can play them. Look how good the sound quality is.

I keep thinking about the DJ.

A jukebox has every record. It can play anything, at any moment, at perfect fidelity. What it cannot do is read the room. It doesn’t know that this crowd needs the energy to drop right now. It doesn’t know that the woman at the bar just heard the song her ex proposed to, and the mood in that corner has shifted. It doesn’t know that the set has been building for forty minutes and the room is ready for the peak, not another warm-up track.

The DJ knows. Not because the DJ has better records. Because the DJ reads the room and selects from the crate based on what the room needs right now. That’s a different skill than having access to all the music. That’s attunement.

AI has every record. It can generate text in any style, produce images in any aesthetic, write code in any framework. The capability is real and it keeps growing. But having every record doesn’t tell you which one to play next. That’s a selection problem, and selection requires reading what the system on the other end actually needs.

I watch people use AI the way you’d use a jukebox. Pick a song. Play it. If you don’t like it, pick another one. Iterate by trial and error until the output seems right. But “seems right” to whom? The person prompting doesn’t always know what the audience needs. They know what they want to hear. Those are different things.

The DJ who plays what they want to play instead of what the room needs is making the same mistake as the designer who builds what looks good to them instead of what the audience needs to receive. Attunement is not preference. It’s reading.

As AI capability scales, the selection problem gets harder, not easier. When the tool can do ten things, picking the right one is a manageable decision. When the tool can do ten thousand things, picking the right one requires a level of judgment that the tool itself cannot provide. More options mean more need for someone who can read the context and select.

This is why I keep building governance tools. The voice protocol is a selection tool: out of all the ways the model could write this, which register matches the room? The lens array is a selection tool: out of all the dimensions this work could be evaluated on, which ones matter for this page? SavePoint is a context preservation tool that makes better selection possible: you can’t select well if you can’t remember what you selected last time and why.

The industry narrative says AI will do more and more of the work, and eventually the human steps back. I see the opposite. AI will do more and more of the production, and the human’s role as the reader, the selector, the person who attunes to what the room needs, becomes the most valuable position in the system.

The jukebox gets better every year. The DJ gets more necessary every year. Those two facts don’t compete. They depend on each other.