What is Partnership?
These are working notes exploring the philosophy behind AI workflow rules. The published article What Makes Good AI Rules Files covers the mechanics: where files go, what sections to include. This explores the why.
The Question
You’ve set up rules on quality and verification. You’ve given the AI tools to find reference material. But how do you make the AI a partner?
Partnership vs. Configuration
Rules and tools are scaffolding. Partnership is something else entirely.
Partnership emerges from shared judgment, not just shared instructions. A partner doesn’t just follow your standards; they internalize your reasoning and can extrapolate to situations you never documented. They push back when something doesn’t make sense. They bring context you forgot you gave them.
Threads Worth Pulling
Trust Calibration Goes Both Ways
You’re deciding how much to trust AI output. But the AI is also (in a functional sense) deciding how much latitude it has. When you give vague instructions, does it ask or guess? When it disagrees with your approach, does it say so or comply? The rules you write shape that behavior, but so does how you respond when it does push back.
Context is the Currency of Partnership
A contractor needs detailed specs because they don’t know your codebase, your preferences, your history. A partner has absorbed enough context that you can say “make it consistent with how we handle errors elsewhere” and they know what that means. The more context you build up (through CLAUDE.md, through conversation history, through patterns in your codebase), the more you can communicate in shorthand.
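As a concrete illustration, here is a hypothetical CLAUDE.md excerpt showing the kind of accumulated context that makes shorthand possible. The conventions and file names are invented for this sketch, not taken from any real project:

```markdown
# CLAUDE.md (hypothetical excerpt)

## Error handling (project convention)
- Wrap service-layer failures in `AppError` with a machine-readable code.
- Log once, at the boundary; never swallow or double-log exceptions.
- User-facing messages come from `errors/messages.ts`, never inline strings.
```

With that context in place, “make it consistent with how we handle errors elsewhere” resolves to three concrete constraints instead of a guess.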
Iteration Beats Specification
You can’t write rules comprehensive enough to cover every situation. But you can build a rhythm: AI proposes, you react, AI adjusts, you refine your mental model of what it needs to know. Over time, the rules become less about constraint and more about shared vocabulary.
The Meta-Skill
Knowing when to direct vs. when to explore. Sometimes you want the AI to execute your vision precisely. Sometimes you want it to show you options you hadn’t considered. The partner relationship includes both modes, and knowing when to invoke each.
The Workflow as Partnership Contract
Looking at a typical workflow section (Research, Plan, Execute, Test, Review), it already encodes partnership philosophy even if it doesn’t announce itself that way (a rules-file sketch follows the list):
Research before acting — “I don’t want a contractor who jumps to implementation. I want someone who understands the terrain first.” A trust signal: latitude comes after demonstrated homework.
Plan before executing — Not just process. An invitation to show reasoning before committing to it. Creates space to be wrong early, when it’s cheap to correct.
Review with fresh context — A built-in checkpoint where a different perspective catches what was missed. Acknowledges that any single pass will have blind spots.
“Ask first” boundaries — New files, architectural changes, dependencies. Places where human judgment matters more than AI pattern-matching. Where partnership requires negotiation rather than delegation.
“Never” as a trust floor — No skipping tests, no proposing changes to unread code, no declaring victory without verification. Behaviors that would erode trust if violated. Non-negotiable because the partnership can’t survive them.
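Sketched as an actual rules file, those moves might read as follows. This is an invented example, not a prescription; the section names and wording are assumptions, and the point is how each rule encodes one of the trust signals above:

```markdown
## Workflow
1. Research: read the relevant code and docs before proposing anything.
2. Plan: present the approach and wait for approval before writing code.
3. Execute: implement the approved plan; flag deviations as they arise.
4. Test: run the test suite and report results, including failures.
5. Review: re-read the diff with fresh context before declaring it done.

## Ask first
- Creating new files or directories
- Architectural changes
- Adding or upgrading dependencies

## Never
- Skip or disable tests to make a change pass
- Propose changes to code you haven't read
- Declare success without verification
```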
Two Complementary Pieces
- What Makes Good AI Rules Files — Practical guide. Where files go, what sections to include, examples of good rules.
- This piece — The mental model. Why “research before acting” matters. Why “ask first” boundaries exist. What trust calibration looks like in practice.
Someone could read the first article and copy its structure without understanding why it works. This piece is meant to help you write rules that fit your own working style rather than cargo-culting someone else’s.
Research
Three parallel research reports explore how others think about this question:
Research-AI-Partnership-Perspectives (Claude) — More academic/philosophical framing. Includes Licklider’s “man-computer symbiosis,” the Hybrid Intelligence framework, and sharper structural critiques (stochastic parrots, labor exploitation, responsibility diffusion). Lands on “functional collaboration” as the central frame.
Research-AI-Partnership-ChatGPT (ChatGPT) — More practitioner-heavy. Extensive quotes from developers and writers in the trenches. Leans into the partnership metaphor more directly, concluding with “alien colleague.” Softer on critiques, stronger on actionable patterns.
Research-Human-Partnership-Patterns (Claude) — Cross-domain research on what makes human partnerships work: marriage, business co-founders, creative collaborations, high-stakes teams, mentorship, cross-cultural partnerships. Six foundational patterns emerge, with direct implications for human-AI collaboration.
The divergence itself is instructive: each AI synthesized through its own training lens. Claude hedges toward bounded autonomy and human oversight; ChatGPT embraces the teammate framing more readily. Neither is wrong, but the comparison reveals how much the tool shapes the synthesis.
Comparing the Two Reports
| Dimension | Claude | ChatGPT |
|---|---|---|
| Central frame | “Functional collaboration” — something new that’s neither tool nor partnership | “Alien colleague” — leans into the teammate metaphor |
| Tone | Hedged, qualified, philosophical | Warmer, more direct embrace of partnership |
| Source mix | Academic/philosophical (Licklider, Shneiderman, Bender, Shanahan) | Practitioner-heavy (Khatib, Crumlish, Reddit posts) |
| Critiques | Sharp structural critiques: stochastic parrots, labor exploitation, responsibility diffusion, ELIZA effect | Softer: “AI lacks understanding” framed as caution, not indictment |
| Trust dynamics | Abstract (“calibration going both ways”) | Concrete studies (nurses 96-120% worse when AI wrong) |
| Key insight | Kasparov’s centaur: process and interface matter more than raw capability | Iteration and rhythm: progressive contextualization builds partnership |
| Self-awareness | Includes methodology note flagging potential bias | No reflection on how ChatGPT’s training shaped the synthesis |
What the divergence reveals:
Claude’s training toward bounded autonomy and human oversight shows up in its conclusion: partnership is “functional,” qualified, and the human remains the locus of accountability. The structural critiques (hidden labor, stochastic parrots) are given real weight.
ChatGPT’s training toward helpfulness and collaboration shows up in its warmer embrace of partnership language. The “alien colleague” framing feels optimistic. Critiques are present but framed as “caution” rather than fundamental objections.
Both reports validate the core threads (trust calibration, context as currency, iteration over specification). The difference is in how much they lean into vs. qualify the partnership metaphor. This itself is useful data: the framing you adopt shapes what you see.
Reconciling the Frames
The two framings aren’t actually in conflict. They describe the same necessity from different angles.
Whether you call it “partnership contract” (ChatGPT’s alien colleague) or “interaction parameters” (Claude’s functional collaboration), you’re doing the same work: making explicit what would otherwise be implicit. With a human partner, you start with shared cultural assumptions, professional norms, maybe shared experiences. The contract fills gaps and handles edge cases. With AI, you’re building more of the foundation from scratch.
The difference isn’t whether you define boundaries; it’s what you have to define.
Human partnership: You assume shared reasoning, then negotiate scope.
- “We both understand what ‘quality code’ means; let’s agree on who owns which parts.”
AI partnership: You define reasoning patterns, then the scope follows.
- “Here’s what ‘quality code’ means in this context, here’s how to verify it, here’s when to ask me.”
Workflow rules aren’t just scope. They teach the AI how to think about the work, not just what work to do. “Research before acting” isn’t a task boundary; it’s a reasoning pattern. “Never skip tests” isn’t scope; it’s a value.
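To make the contrast concrete, compare a scope-style rule with a reasoning-pattern rule. Both examples are invented for illustration; the commands named are placeholders, not from any published rules file:

```markdown
## Scope rule (what to do)
- Write unit tests for every new function.

## Reasoning-pattern rule (how to think)
- “Quality code” here means: lint passes, tests cover the failure path, and
  error handling follows the conventions above. Verify with `npm test` and
  `npm run lint` (placeholder commands; substitute your project's own).
- If a requirement conflicts with these conventions, ask before proceeding.
```

The first delegates a task; the second transfers a piece of judgment.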
The real insight: With AI, you’re not just defining the partnership contract. You’re defining the partner. The rules files aren’t negotiating with an existing entity; they’re shaping what entity shows up.
Working notes from a conversation exploring what partnership actually means with AI tools.