Forced Parallelization vs Human Cognition

The Mismatch

Humans don’t think in parallel; we context-switch poorly. But AI makes parallel work trivial: 10 agents, 10 tasks, all running simultaneously. The pressure becomes “why aren’t you running more in parallel?” even though managing 10 concurrent threads is cognitively expensive in a way “writing more code” never was.

Source Context

  • AI Daily Brief: “How I Built My 10-Agent OpenClaw Team”

    • 10 agents working simultaneously
    • Agents communicate, delegate, review each other’s work
    • Maintained shared context
    • Operated like a real team
  • How I AI: “Building Custom Dev Tools and Model-vs-Model Reviews” (CJ Hess)

    • Flowy: visual planning tool to guide Claude Code
    • Model-vs-model QA: different models reviewing each other’s output
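The 10-agent team pattern described above can be sketched in miniature. This is a hypothetical illustration of the shape of the thing, not OpenClaw's actual API: the names (`run_agent`, `shared_context`) are mine.

```python
import asyncio

# Hypothetical sketch of the pattern described above: N agents run
# concurrently against a shared context. The names (run_agent,
# shared_context) are illustrative, not OpenClaw's actual API.

async def run_agent(name: str, shared_context: dict) -> str:
    # Stand-in for real agent work; a real agent would call a model,
    # read its teammates' entries, and delegate or review here.
    await asyncio.sleep(0)  # yield the event loop, simulating concurrency
    shared_context[name] = f"{name} done"
    return shared_context[name]

async def run_team(n: int) -> dict:
    shared_context: dict = {}
    # All agents start at once; the human sees only the merged result.
    await asyncio.gather(
        *(run_agent(f"agent-{i}", shared_context) for i in range(n))
    )
    return shared_context

results = asyncio.run(run_team(10))
print(len(results))  # 10 concurrent work streams to keep in your head
```

The point of the sketch is the last line: the machine happily fans out to 10 streams, and every one of them becomes state the human operator has to track.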

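The model-vs-model QA idea can also be sketched. The model names and the round-robin pairing below are my assumptions for illustration, not CJ Hess's actual setup; the only property being demonstrated is that no model reviews its own output.

```python
# Illustrative sketch of model-vs-model QA: each model's output gets
# reviewed by a *different* model. Names and pairing scheme are
# placeholders, not the setup described in the episode.

def assign_reviewers(models: list[str]) -> dict[str, str]:
    """Pair each author model with the next model in the list,
    so no model ever reviews its own output."""
    return {m: models[(i + 1) % len(models)] for i, m in enumerate(models)}

pairs = assign_reviewers(["model-a", "model-b", "model-c"])
assert all(author != reviewer for author, reviewer in pairs.items())
print(pairs)  # {'model-a': 'model-b', 'model-b': 'model-c', 'model-c': 'model-a'}
```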
The Core Problem

We’re being pushed to parallelize our work, but supervising parallel streams is a genuinely new mode of thinking. Humans are serial processors with a small working memory; the tooling now assumes we’re schedulers.

Open Questions

On parallelization:

  • Is the skill becoming “orchestration” rather than “implementation”? (You don’t write the code, you conduct the agents writing the code.)
  • What’s the cognitive load ceiling? Can a human effectively manage 10 agents, or is there a hard limit before it collapses into chaos?
  • How do you maintain mental models of 10 concurrent work streams when human working memory tops out around 7 items?
  • Does effective parallel work require fundamentally different cognitive strategies than serial work?
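One way to make the “cognitive load ceiling” question concrete (my sketch, not a claim from either episode): cap how many streams are active at once with a semaphore sized to something like working-memory capacity, so extra tasks queue instead of competing for attention.

```python
import asyncio

MAX_ACTIVE = 7  # roughly the working-memory limit mentioned above

async def supervised(task_id: int, gate: asyncio.Semaphore,
                     active: list, peaks: list) -> None:
    async with gate:                 # wait for a free "attention slot"
        active.append(task_id)
        peaks.append(len(active))    # record how many streams are live
        await asyncio.sleep(0)       # stand-in for real agent work
        active.remove(task_id)

async def main(n_tasks: int) -> int:
    gate = asyncio.Semaphore(MAX_ACTIVE)
    active: list = []
    peaks: list = []
    await asyncio.gather(
        *(supervised(i, gate, active, peaks) for i in range(n_tasks))
    )
    return max(peaks)

peak = asyncio.run(main(20))         # 20 tasks queued...
print(peak <= MAX_ACTIVE)            # ...but never more than 7 in flight
```

Whether a hard cap like this is the right answer is exactly the open question; the sketch just shows that “how many agents can I run” and “how many agents am I attending to” can be decoupled.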

On human limits:

  • Humans used to be the bottleneck, so serial work was fine. Now humans aren’t the bottleneck for execution, but are we still the bottleneck for judgment?
  • You can run 10 agents, but can you actually steer 10 agents? Or do you just have 10 things happening that you’re not really piloting?
  • What’s the difference between “managing 10 parallel tasks” and “losing track of 10 things at once”?
  • If orchestration is the new skill, what does good orchestration actually look like?
