Pair programming with Claude Code: what works and what does not

Dev.to / 3/18/2026

💬 Opinion · Tools & Practical Usage

Key Points

  • Talking through the approach before writing code improves problem spotting and helps the AI agent align with the developer's plan.
  • Swapping who is 'driving' and who reviews, rather than sticking to a fixed role, better simulates real pair programming dynamics.
  • Explaining the rationale behind changes leads to higher‑quality code because the agent uses the 'why' to guide validation and decisions.
  • Real-time back-and-forth and reading the room are missing with Claude Code, so you need checkpoint moments and explicit uncertainty to keep the process focused.
  • The most productive pattern follows a rhythm: state goal and approach, the agent asks questions, you decide with feedback, the agent implements, and you review; the questioning step is the critical part.

Real pair programming has two people who can both see the code and push back on each other in real time. Working with Claude Code isn't quite that, but it's closer than working alone.

Here's what I've learned after weeks of treating it more like a pairing session than a "write this for me" session.

What works like real pairing

Talking through the approach before writing code. "I'm thinking of handling this with a middleware function that validates the token before the route handler runs. What problems do you see with that?"

The agent won't always see the right problems. But explaining the approach out loud — even to something that just responds — catches more issues than not explaining it.
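To make the middleware idea concrete, here's a minimal sketch of that approach in Python. The names (`validate_token`, `require_auth`, the request shape) are illustrative assumptions, not tied to any specific framework; a real app would verify a signed token rather than a prefix.

```python
# Hypothetical sketch of the approach described above: a middleware-style
# decorator that validates the token before the route handler runs.

def validate_token(token):
    """Stand-in check; a real implementation would verify a signed token."""
    return isinstance(token, str) and token.startswith("valid-")

def require_auth(handler):
    """Wrap a route handler so it only runs for authenticated requests."""
    def wrapped(request):
        token = request.get("headers", {}).get("Authorization")
        if not validate_token(token):
            return {"status": 401, "body": "unauthorized"}
        return handler(request)
    return wrapped

@require_auth
def get_profile(request):
    return {"status": 200, "body": "profile data"}
```

One problem a reviewer (human or agent) might spot in this sketch: the middleware runs per-handler via the decorator, so a route that forgets the decorator silently skips validation.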

Swapping who's "driving." Sometimes I write code and ask the agent to review it. Sometimes the agent writes code and I review it. The direction matters less than the review happening.

Explaining why, not just what. "Add input validation here because we're getting user input that goes to the database" produces better code than "add input validation here." The agent uses the "why" to make better decisions about the validation logic.
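A sketch of what that "why" buys you: because the stated reason is "user input that goes to the database," the validation constrains type, length, and character set, and the query is parameterized. The field name, limits, and table schema here are illustrative assumptions.

```python
import sqlite3

def validate_username(raw):
    """Validate because this value flows into a database query."""
    if not isinstance(raw, str):
        raise ValueError("username must be a string")
    name = raw.strip()
    if not (3 <= len(name) <= 30):
        raise ValueError("username must be 3-30 characters")
    if not name.isalnum():
        raise ValueError("username must be alphanumeric")
    return name

def save_username(conn, raw):
    name = validate_username(raw)
    # Parameterized query: the driver escapes the value, never string-formatting.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    return name
```

Given only "add input validation here," the agent might check for an empty string and stop; the "why" is what motivates the length cap, the character whitelist, and the parameterized insert.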

What doesn't work like real pairing

Real-time back-and-forth. A human pair asks "wait, why are we doing it that way?" mid-implementation. The agent doesn't have that instinct. If you don't build in checkpoint moments, it'll implement whatever it understood without pausing to question it.

Reading the room. A human pair can tell when you're not sure about something by how you say it. The agent can't. If you're uncertain, you have to state it explicitly.

Shared context. A human pair has been in the same conversations, read the same codebase, worked through the same problems. The agent knows what you've told it in this session. Long-running shared context is your responsibility to maintain, not the agent's.

The pattern that gets closest to real pairing

I've found that the most productive pairing-like sessions follow this rhythm:

  1. I explain the goal and the approach I'm considering
  2. Agent raises questions or concerns
  3. I decide, incorporating the feedback
  4. Agent implements what I decided
  5. I review the output with fresh eyes

Step 2 is the critical one. If I skip it ("just implement what I described"), I lose the benefit of the pairing structure. The questions and concerns — even when they miss the mark — make me think more carefully.

From six weeks of running Claude Code on builtbyzac.com.