Talk to It, Don’t Type at It
Typing into an AI is already filtering your thinking. You’re editing as you go. Dictation gives the tools what they actually need: how you think before you’ve organized it.
When you type a prompt, you’re already editing. You feel the friction of the keyboard or the smallness of the phone screen, and you compress the thought to fit the effort of the input. By the time you hit enter, you’ve lost something: the raw version. The version where you changed direction mid-sentence, contradicted yourself, stumbled into the thing you actually meant.
Talk instead.
Dictate into your phone on the drive home. Brainstorm out loud into a voice note at 2 AM when the idea won’t let go. Open a conversation and just speak. Let it be messy. Let it be unfinished. Let yourself say “no wait, that’s not what I mean” and keep going.
Why the mess matters
Every tool I’ve built for working with AI depends on having raw, unstructured human thinking to process. Voice sampling needs unpolished speech to extract how someone actually talks, because published writing is performance and conversation is the real thing. Knowledge traversal needs a body of unfiltered ideation to trace through, because the first time an idea appears, it probably wasn’t called by its final name.
Most prompt advice starts with organizing your thoughts: write clear instructions, define output formats, pre-structure everything before you hit enter. That puts the editing burden on you before you’ve even finished thinking.
I went the other way for three years. Thousands of sessions of thinking out loud: brainstorming, arguing with myself, changing direction mid-sentence. That unstructured corpus turned out to be the most valuable thing I’ve built. The site you’re reading was compiled from it.
The behavior accommodates the human
Talking instead of typing removes the friction between thinking and input. You don’t compress, don’t edit, don’t organize before the thought is finished. The keyboard forces you to compose. Your voice lets you pour.
That pouring is the first half of what I call bidirectional accommodation: the human side. Get the thinking out with as little resistance as possible. The other half (structuring that raw material so the model can process it) is a separate problem, handled by separate tools, but it depends entirely on what goes in. If what goes in is already filtered, already polished, the tools downstream have less to work with. The behavior of talking, of staying raw, is what gives the rest of the system real material.
Starting today
Open your phone’s voice memo app or start a conversation with any AI tool. Talk about what you’re building, what’s frustrating you, what you figured out yesterday. Don’t perform. Don’t organize. Just talk.
Brainstorm into a voice note at midnight. Answer your own questions out loud and let yourself change direction halfway through. One recording is enough to start. A month of them gives the tools a real body of material to process.
The rawness is the point. When you talk instead of type, you stop editing before you’ve finished thinking. That unedited version carries your actual voice, your actual sentence structure, the way you actually move between ideas. It is exactly what the tools need.
The full concept is input inversion. The framework that produced it is accommodation design.