The Three-Terminal Problem
Last Tuesday I had seven parallel AI sessions running at once. Three were building HTML pages for different chapters of a book. One was deploying SaaS infrastructure. One was running a design analysis on a client dashboard. The last two were scaffolding components for a landing page. I was switching between them, reviewing outputs, approving file writes, occasionally telling one to wait while another finished something it depended on.
A year ago I was typing questions into a chat window in a browser tab.
Nobody plans for this. You get there one step at a time, each step driven by a specific limitation you hit in the previous one. And somewhere around step three, you realize the limitation isn’t the AI. It’s the surface area between you and the AI. The sessions can’t see each other. The context you built in one conversation evaporates when you open the next. You become the router, the clipboard, the only entity in the room that knows what all the other entities are doing.
This is a field report from that problem. It’s also, indirectly, a response to Steve Yegge, who’s been writing about this frontier from the observation side. I found his writing in late January, three weeks into building something I couldn’t quite explain to anyone yet. More on that later.
The Progression
If you’ve used AI for work at all, you started somewhere familiar. A chat window. You type a question, get an answer, maybe ask a follow-up. When you need help with a different project, you open a new conversation. Two or three chats going at once, tops.
It works until you notice how much time you spend re-explaining things. Every conversation is its own universe. The chat helping you with your website has no idea about the design decisions you discussed in the other chat. You copy-paste between them. You re-describe your project. You answer the same setup questions over and over.
The limitation isn’t intelligence. The AI is plenty smart. The limitation is amnesia.
Desktop apps improve the organization. Now you have dozens of conversations sorted by project, you can find old threads, the interface is faster. But the amnesia stays. Each conversation starts from zero. You spend the first five minutes of every session re-loading context before you get to the actual work.
Then something shifts. Tools like Claude Code give the AI direct access to your project files, your directory structure, your config. Instead of you pasting code into the chat, the AI reads it itself. Instead of you manually applying changes, the AI writes files with your approval. A task that used to take four round trips now takes one.
There’s a second shift that’s easy to miss. When the AI can read your whole project, you can write a file that describes your architecture, your conventions, your design tokens, your voice rules. Every session reads it automatically on startup. The amnesia problem mostly goes away, replaced by a document that encodes your project’s institutional knowledge.
Most people who try this stay here. One session, occasionally starting a new one when the old one gets too long. It’s a real upgrade. For a lot of workflows, it’s enough.
Two Sessions
At some point you realize you can run two sessions at once. Same project, same context document.
Frontend in one, backend in the other. Or code in one, tests in the other. They don’t need to coordinate much. You glance at each periodically, review the output, course-correct if something went sideways.
Two is manageable because your attention can handle the switching cost. It feels like a normal multitasking pattern, similar to having two documents open side by side.
I ran two sessions for about a week before I needed a third.
Three Changes Everything
Two sessions is multitasking. Three is orchestration. The difference isn’t arithmetic.
With two, you can hold both threads in your head. With three, you can’t. You lose track of what Session C is doing while you’re reviewing Session A’s output. When you switch back, you need to re-read its recent activity to remember where it is. If Session B produced something that C needs, you have to notice that dependency yourself and intervene manually.
Three sessions without infrastructure is chaos. The failure mode is predictable: one overwrites something another just wrote, or two duplicate effort because neither knows what the other already built, or you forget to check one for twenty minutes and it goes in a direction you would have caught immediately.
So I built the infrastructure.
The Duct Tape
Shared context. Every session reads the same instruction file on startup. Mine is 296 lines. Directory structure, routing tables, design tokens, voice rules, persona definitions, memory protocol. When I open Session 4, it already knows everything Sessions 1 through 3 know about the project.
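For concreteness, here's the shape such a file might take. The section names and values below are invented for illustration; the real file is project-specific.

```markdown
# Project instructions (read by every session on startup)

## Directory structure
- /site/chapters/   one HTML page per book chapter
- /sessions/        checkpoint logs, one per work stream

## Design tokens
- Ink #1a1a1a, accent #c8472b, body type 18px/1.6

## Voice rules
- Short sentences. No filler.

## Memory protocol
- Write a checkpoint to /sessions/ before any long build.
```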
Session logs written to disk continuously. Each session writes progress notes to a shared directory as it works. Markdown files, written as checkpoints, not metrics. If a session crashes, or if I need to spin up a replacement, the new one reads the log and picks up where the old one stopped. Over six days of heavy building, I accumulated 41 session files. 41 moments where the state of a work stream got saved to disk instead of evaporating.
Issue tracking across sessions. I built a lightweight CLI issue tracker for exactly this. A JSONL file and a handful of commands: create, list, close, sync. Any session can create an issue. Any other session can pick it up. Dependencies are tracked, so I can mark that Task 7 depends on Task 3, and the session working on 7 will know to wait.
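A tracker like that fits in a page of code. Here's a minimal sketch of the idea in Python; the file name, field names, and function names are my assumptions, not the actual tool's interface. The key design choice is append-only writes: sessions add JSON lines rather than rewriting the file, so concurrent sessions never clobber each other.

```python
import json
from pathlib import Path

ISSUES = Path("issues.jsonl")  # hypothetical file name

def create(issue_id, title, depends_on=None):
    """Append a new issue as one JSON line."""
    record = {"id": issue_id, "title": title,
              "status": "open", "depends_on": depends_on or []}
    with ISSUES.open("a") as f:
        f.write(json.dumps(record) + "\n")

def load():
    """Read the full log; the last record per id wins."""
    issues = {}
    if ISSUES.exists():
        for line in ISSUES.read_text().splitlines():
            rec = json.loads(line)
            issues[rec["id"]] = rec
    return issues

def close(issue_id):
    """Close by appending an updated record, not by rewriting the file."""
    rec = dict(load()[issue_id], status="closed")
    with ISSUES.open("a") as f:
        f.write(json.dumps(rec) + "\n")

def ready(issue_id):
    """An issue is workable once all its dependencies are closed."""
    issues = load()
    return all(issues[d]["status"] == "closed"
               for d in issues[issue_id]["depends_on"])
```

So "Task 7 depends on Task 3" becomes `create(7, "...", depends_on=[3])`, and the session working on 7 can call `ready(7)` before starting.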
Reusable skills. Instruction sets stored as files that any session can invoke. /commit follows the same format everywhere. /llm-scrub checks text for AI writing patterns. /analyze runs the same design analysis everywhere. The sessions don’t just share context. They share muscle memory.
Parallel dispatch. From a single session, I can launch multiple agents that run independently. Seven chapters of a book, built simultaneously by seven parallel agents. Four dashboard pages scaffolded at once. One orchestrator, many workers.
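The dispatch pattern itself is simple. A sketch using Python's standard thread pool, where `build_chapter` is a stand-in for invoking one agent on one markdown source, not the actual dispatch mechanism:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def build_chapter(source_md):
    # Placeholder for one agent's work: in practice this would
    # invoke the AI tool on a single markdown source file.
    return source_md.replace(".md", ".html")

chapters = [f"chapter-{n}.md" for n in range(1, 8)]

# One orchestrator, many workers: dispatch all seven at once and
# collect results as each finishes, in whatever order they complete.
with ThreadPoolExecutor(max_workers=7) as pool:
    futures = {pool.submit(build_chapter, c): c for c in chapters}
    for fut in as_completed(futures):
        print(f"{futures[fut]} -> {fut.result()}")
```

The orchestrator's job is the loop at the bottom: it doesn't build anything itself, it just tracks completion.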
All of it works. I shipped real products this way. 85 commits in 5 days across multiple repos. A client dashboard and SaaS infrastructure built simultaneously in adjacent sessions. Seven book chapters in 40 minutes instead of two hours.
But I want to be honest about what this is. It’s duct tape. Sophisticated duct tape. Every piece of it exists because the sessions are blind to each other, and I’m the only one who can see the whole picture. The shared context file, the session logs, the issue tracker, the skills: they’re all workarounds for a fundamental problem. The AI can’t see its own peers. So I have to be the bridge.
A Typical Day
Tuesday, March 4th. Reconstructed from the session logs.
9 AM. Three sessions open. Session A is building the Forge API, a design intelligence service that runs automated analysis on websites. Session B is building the web app frontend. Session C is running a Perception-First Design analysis on a client’s dashboard, generating a report with specific fixes.
Session A finishes the API endpoint. I review the output, approve two file writes, reject one that used the wrong error handling pattern. Session B needs the API contract, so I paste the route signature into B. Three seconds of manual coordination.
11 AM. Session C finishes the analysis report. I make two corrections and tell C to regenerate. While it reruns, I open Session D and point it at book chapters. I dispatch seven parallel agents from D, one per chapter, each building an HTML page from a markdown source. D tracks their progress.
1 PM. The seven chapter pages are done. I review each one, flag two for minor fixes. Sessions A and B are still going. I close C (the report shipped) and open a new session for client dashboard deployment. Ten commits over the next two hours while the other sessions keep building.
The pattern: start streams, review outputs, coordinate dependencies by hand when they come up. The human role is quality control and sequencing. The sessions do the building.
The Signal
I found Steve Yegge’s writing in late January, about three weeks into building something I couldn’t quite explain to anyone. He was describing the same ceiling. What happens when AI capability outpaces what a single session can handle. The bottleneck shifting from “can the AI do this?” to “can I orchestrate enough of them?”
He was mapping the terrain from the observation side. I was already in it from the construction side, standing in a half-built prototype, covered in sawdust, trying to wire spatial context injection into a graph traversal engine at 2 AM.
Reading someone articulate the exact problem you’re neck-deep in solving does a specific thing. Not “I’m right.” More like: the problem is real, and it’s not just my ADHD brain chasing another project. Someone with Steve’s vantage point sees it too. The surplus capacity is real. The orchestration gap is real. And it’s bigger than any one person’s workflow.
That was the signal I needed to stay on track.
Solving It from the Other End
Everything I described in the Duct Tape section is patching the problem from the terminal side. Shared files, session logs, issue trackers: they help sessions coordinate, but the sessions are still blind. You’re still the only one who sees the whole picture.
I built Cognograph to solve it from the other end.
Instead of teaching sessions about each other through shared files on disk, Cognograph puts everything on a spatial canvas. Nodes are your project pieces: conversations, documents, code artifacts, tasks, agent orchestrators. Edges are the connections between them. When you invoke AI on any node, it traverses the graph and injects the relevant context automatically. Your layout is your prompt engineering. The canvas IS the orchestration layer.
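The traversal idea can be sketched in a few lines. This is a toy model, not Cognograph's actual data structures: nodes hold content, edges hold connections, and invoking AI on a node collects everything within reach, nearest first.

```python
from collections import deque

# Toy canvas. Node names and contents are illustrative only.
nodes = {
    "task:api": "Build the Forge API endpoint.",
    "doc:contract": "Route signature: POST /analyze -> report JSON.",
    "conv:design": "Error responses use the shared envelope format.",
}
edges = {
    "task:api": ["doc:contract", "conv:design"],
    "doc:contract": [],
    "conv:design": [],
}

def inject_context(start, max_depth=2):
    """Walk the graph outward from the invoked node, collecting the
    content of everything it is connected to, nearest first."""
    seen, queue, context = {start}, deque([(start, 0)]), []
    while queue:
        node, depth = queue.popleft()
        context.append(nodes[node])
        if depth < max_depth:
            for nb in edges.get(node, []):
                if nb not in seen:
                    seen.add(nb)
                    queue.append((nb, depth + 1))
    return "\n".join(context)
```

Calling `inject_context("task:api")` returns the task plus both connected pieces; that assembled string is what rides along with the AI invocation, which is the sense in which your layout is your prompt engineering.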
The three-terminal problem disappears because the terminals aren’t isolated anymore. They’re nodes on a canvas with your whole project on it, and every AI interaction already knows what it’s connected to.
I have ASD and severe combined-type ADHD. Files in folders don’t work for me. Linear chat doesn’t work for me. I need to see everything spatially, with connections I can trace with my eyes. Cognograph started as a tool I needed. It turned into a product because the problem isn’t just mine. Anyone running more than two parallel AI sessions hits the same wall. I just hit it earlier because my brain demands spatial context that most tools don’t provide.
Four provisional patents filed with the USPTO: spatial trigger mechanisms, context injection through graph traversal, depth-based semantic zoom, conditional node activation. It isn't a wrapper on ChatGPT. It's a different model for how humans and AI share context.
596 commits. Nine weeks from first prototype to full SaaS infrastructure. The build log is public.
The Tool I Use Every Day
One more thing came out of this.
In the Typical Day section, Session C was running a design analysis. That’s Forge: 15 years of design audits turned into a scoring tool. Five cognitive layers from Perception-First Design. Scored analysis, specific prescriptions, round-over-round convergence tracking. I use it on every project I touch.
It’s live and free to try. Same tool, same scoring, same methodology. If the question “will this design work, and will it make sense to real people?” matters to you, that’s what Forge answers.
Who This Is For
If you’re using AI through a chat window and it’s working, none of this matters. Plenty of people get real value from a single conversation. The progression I described isn’t a ladder where higher is better. It’s a response to specific constraints that not everyone hits.
You hit them when your projects get big enough that context loss becomes your primary bottleneck. When you’re spending more time re-explaining your codebase than doing actual work. When you have three things that could happen in parallel, and forcing them to be sequential feels like a waste of hours you don’t have.
There are two paths through it. The infrastructure path: shared context files, session logs, issue trackers, reusable skills. It works. I shipped products this way. The spatial path: put the whole project on a canvas and let the AI see the connections itself. That’s Cognograph.
I got here because I’m a solo operator building multiple products with clients who need things yesterday. The multi-session workflow isn’t a flex. It’s a consequence of being one person with the workload of a small team.
Steve, if you’re reading this: thanks for the signal. The problem you keep describing is real. I’m building through it.