When you type a prompt, you’re already editing. You feel the friction of the keyboard or the smallness of the phone screen. You compress a thought to fit the effort of the input. By the time you hit enter, you’ve lost something. The raw version. The version where you changed direction mid-sentence, contradicted yourself, stumbled into the thing you actually meant.

Talk instead.

Dictate into your phone on the drive home. Brainstorm out loud into a voice note at 2 AM when the idea won’t let go. Open a conversation with an AI and just speak. It doesn’t need to be clean. Say “no wait, that’s not what I mean” and keep going.

Why the mess matters

Every tool I’ve built for working with AI depends on having raw, unstructured human thinking to process. Voice sampling needs unpolished speech to figure out how someone actually talks, because published writing is already a performance. Knowledge traversal needs a body of unfiltered ideation to trace through, because the first time an idea shows up in your conversation history, you probably called it something other than the name it eventually settled into.

Most prompt advice starts with organizing your thoughts: write clear instructions, define output formats, pre-structure everything before you hit enter. That puts the editing burden on you before you’ve even finished thinking, which is exactly backwards.

I went the other way for three years. Thousands of sessions of thinking out loud: brainstorming, arguing with myself, changing direction mid-sentence. That unstructured corpus turned out to be the most valuable thing I’ve built. The site you’re reading was compiled from it.

The behavior accommodates the human

Talking instead of typing removes the friction between thinking and input. You don’t compress, don’t edit, don’t organize before the thought is finished. The keyboard forces you to compose. Your voice lets you dump.

That dumping is the first half of what I call bidirectional accommodation. The human side. Get the thinking out with as little resistance as possible. The other half (structuring that raw material so the model can process it) is a separate problem, handled by separate tools. But it depends entirely on what goes in. If what goes in is already filtered, already polished, the tools downstream have less to work with.

Starting today

Open your phone’s voice memo app or start a conversation with any AI tool. Talk about what you’re building, what’s frustrating you, what you figured out yesterday. Don’t organize it. Just talk.

Brainstorm into a voice note at midnight. Answer your own questions out loud and let yourself change direction halfway through. One recording is enough to start. A month of them gives the tools a real body of material to process.

When you talk instead of type, you stop editing before you’ve finished thinking. That unedited version carries your actual voice, your actual sentence structure, the way you actually move between ideas. The tools downstream need that material more than they need clean prompts.

The full concept is input inversion. The framework that produced it is accommodation design.