I watched someone demo an AI tool that could generate a complete brand identity in ninety seconds. Logo, color palette, typography system, brand guidelines, social media templates. The audience was impressed. The output was competent. Every element was defensible in isolation. And the whole thing was completely wrong for the fictional client they’d described in the prompt.

The tool had done everything it was asked to do. The problem was that nobody had asked the right questions.

This is where the discourse breaks down. The conversation about AI is almost entirely about capability. What can it generate. How fast. How cheap. How close to human quality. Every week there’s a new benchmark, a new model, a new demo showing output that would have taken a team of people days to produce. The capability curve is real. It’s accelerating. I use these tools every day and they are genuinely good at producing things.

But producing things was never the hard part.

I keep coming back to the jukebox analogy because it’s the cleanest way I know to describe the gap. A jukebox has every record. Press a button, any song plays. The catalog is functionally infinite. And a room full of people with access to that jukebox will have a terrible night, because having every record and knowing which record this room needs right now are two completely different skills.

A DJ has the same catalog. The difference is attunement. The DJ reads the room. Feels the energy. Notices that the last two tracks built tension and the floor needs a release. Notices that the crowd shifted when that one synth line came in and files that away. Makes a selection based on what’s happening right now, in this room, with these people, at this point in the night. The selection isn’t the best track in the catalog. It’s the right track for this moment.

AI is the jukebox. It’s an extraordinary jukebox, getting better every month. It can play any record you ask for, and increasingly it can play records you didn’t know existed. But it cannot feel what the room needs. It can analyze the room. It can measure the room. It can make probabilistic guesses about the room based on training data from other rooms. And those guesses will be competent, defensible, and frequently wrong in ways that are hard to detect from inside the system.

The brand identity demo was a perfect example. The AI produced a system that followed solid design principles. Good contrast. Readable hierarchy. Consistent application. But the fictional client was a small-batch ceramics studio run by a woman in her sixties who’d been making pottery for forty years. The brand needed to feel like hands and clay and decades of practice. What it got was a clean, contemporary system that could have belonged to any of a hundred different businesses. Competent. Generic. No attunement.

The person who could have fixed that, who could have read the brief and felt the gap between what the tool produced and what the client actually needed, wasn’t in the loop. Or more precisely, the loop didn’t have a place for that person. The demo was about speed. Input a prompt, output a brand. The human role was to press the button and approve the result.

This is the inversion I keep running into. As AI gets more capable, the human skill that matters most is the one AI doesn’t have: knowing what this situation needs right now. Selection. Judgment. Attunement. The ability to stand between a powerful tool and a specific context and make the call that the tool can’t make for itself.

And that skill is getting less attention, not more. The discourse treats human judgment as the bottleneck that AI will eventually route around. The pitch is always: soon you won’t need the expert. Soon the tool will be good enough. Soon the capability gap will close.

The capability gap will close. I believe that. The attunement gap won’t. Because attunement isn’t a capability problem. It’s a presence problem. You have to be in the room. You have to care about the specific person or project or situation in front of you. You have to have accumulated enough experience with similar situations to recognize the subtle signals that distinguish this one from the general case. None of that scales. None of that trains into a model.

What I’m interested in is how to build systems that make room for that skill instead of routing around it. Protocols that position the human where their judgment matters most and let the AI handle the parts where capability is the constraint. The AI generates options. The human reads the room and selects. The AI executes at scale. The human catches the drift before it compounds.

The tools improve every day. And every improvement makes the human who can read the room more necessary, not less. I don’t think most people see that yet, because the demos are so impressive that it’s easy to forget the demo isn’t the deployment. The demo shows capability. The deployment requires judgment.