Scaling Test-Time Compute for Agentic Coding

arXiv cs.LG / April 22, 2026


Key Points

  • The paper targets test-time scaling for agentic coding, where long-horizon attempts create extended action/observation trajectories that are hard to directly compare or reuse.
  • It proposes converting each rollout into a compact, structured summary that keeps key hypotheses, progress, and failure modes while discarding low-signal trace details.
  • It introduces Recursive Tournament Voting (RTV) for parallel scaling by repeatedly narrowing a population of rollout summaries via small-group comparisons.
  • It adapts Parallel-Distill-Refine (PDR) for sequential scaling by conditioning new rollouts on distilled summaries from earlier attempts.
  • Experiments show consistent gains for frontier coding agents on SWE-Bench Verified and Terminal-Bench v2.0, including Claude-4.5-Opus improving from 70.9% to 77.6% and from 46.9% to 59.1%, respectively.

Abstract

Test-time scaling has become a powerful way to improve large language models. However, existing methods are best suited to short, bounded outputs that can be directly compared, ranked or refined. Long-horizon coding agents violate this premise: each attempt produces an extended trajectory of actions, observations, errors, and partial progress taken by the agent. In this setting, the main challenge is no longer generating more attempts, but representing prior experience in a form that can be effectively selected from and reused. We propose a test-time scaling framework for agentic coding based on compact representations of rollout trajectories. Our framework converts each rollout into a structured summary that preserves its salient hypotheses, progress, and failure modes while discarding low-signal trace details. This representation enables two complementary forms of inference-time scaling. For parallel scaling, we introduce Recursive Tournament Voting (RTV), which recursively narrows a population of rollout summaries through small-group comparisons. For sequential scaling, we adapt Parallel-Distill-Refine (PDR) to the agentic setting by conditioning new rollouts on summaries distilled from prior attempts. Our method consistently improves the performance of frontier coding agents across SWE-Bench Verified and Terminal-Bench v2.0. For example, by using our method Claude-4.5-Opus improves from 70.9% to 77.6% on SWE-Bench Verified (mini-SWE-agent) and 46.9% to 59.1% on Terminal-Bench v2.0 (Terminus 1). Our results suggest that test-time scaling for long-horizon agents is fundamentally a problem of representation, selection, and reuse.
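The parallel-scaling step described above can be sketched as a simple recursion: a pool of rollout summaries is narrowed round by round through small-group comparisons until one winner remains. The sketch below is a minimal illustration, not the paper's implementation; the `judge` callable (e.g. an LLM prompted to pick the strongest summary in a group) and the default group size are hypothetical stand-ins.

```python
"""Minimal sketch of Recursive Tournament Voting (RTV).

Assumptions (not from the paper): the judge is any callable that, given a
small group of summaries, returns the index of the one it prefers; the
group size of 3 is illustrative.
"""

from typing import Callable, Sequence


def recursive_tournament_voting(
    summaries: Sequence[str],
    judge: Callable[[Sequence[str]], int],
    group_size: int = 3,
) -> str:
    """Narrow a population of rollout summaries to a single winner.

    Each round partitions the pool into groups of at most `group_size`,
    keeps one winner per group, and recurses until one summary remains.
    """
    pool = list(summaries)
    while len(pool) > 1:
        next_round = []
        for i in range(0, len(pool), group_size):
            group = pool[i:i + group_size]
            # A lone leftover advances to the next round automatically.
            winner = group[0] if len(group) == 1 else group[judge(group)]
            next_round.append(winner)
        pool = next_round
    return pool[0]
```

With N initial rollouts and group size k, the judge is called O(N / (k - 1)) times in total, so selection cost grows linearly in the number of parallel attempts rather than quadratically as in all-pairs comparison.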