Agent psychometrics: Task-level performance prediction in agentic coding benchmarks

arXiv cs.AI / 4/2/2026


Key Points

  • The paper argues that as LLM coding moves toward agentic, multi-step tool-using interactions, aggregate benchmark pass rates are no longer sufficient to explain which specific tasks will be hard for agents.
  • It proposes a task-level performance prediction framework by extending Item Response Theory (IRT) with features derived from issue statements, repository context, candidate solutions, and test cases.
  • The method decomposes overall agent ability into two components—LLM ability and scaffold ability—allowing more granular modeling of why an agent succeeds or fails.
  • By parameterizing this way, the framework can aggregate results across heterogeneous leaderboards and predict performance on unseen benchmarks and unseen LLM–scaffold pairings.
  • The authors claim practical value for benchmark designers, enabling difficulty calibration of new tasks with less reliance on computationally expensive agent evaluations.
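The ability decomposition described above can be illustrated with a minimal IRT-style sketch. This is a hypothetical parameterization, not the paper's actual model: it assumes agent ability is the sum of an LLM ability and a scaffold ability, and that task features (e.g. descriptors of the issue statement or test suite) shift the task's base difficulty linearly. All names and parameter values below are illustrative.

```python
import numpy as np

def predict_success(theta_llm, theta_scaffold, task_difficulty,
                    task_features, feature_weights):
    """Probability that an (LLM, scaffold) pair solves a given task.

    Hypothetical extended-IRT form: overall agent ability is decomposed
    into LLM ability + scaffold ability, and the task's difficulty is
    adjusted by a linear function of task-derived features.
    """
    ability = theta_llm + theta_scaffold
    difficulty = task_difficulty + np.dot(feature_weights, task_features)
    # Standard logistic (Rasch-style) response function.
    return 1.0 / (1.0 + np.exp(-(ability - difficulty)))

# Illustrative only: a strong LLM in a slightly weak scaffold, on a
# task whose features (say, issue length and test count) add difficulty.
p = predict_success(theta_llm=1.2, theta_scaffold=-0.3,
                    task_difficulty=0.5,
                    task_features=np.array([0.4, 0.1]),
                    feature_weights=np.array([0.5, 1.0]))
```

Because ability and difficulty are separate additive terms, fitting such a model on pooled leaderboard data would, in principle, let one score an unseen LLM–scaffold pairing (sum two previously fitted abilities) or an unseen task (compute its feature-adjusted difficulty) without new agent rollouts, which is the cross-benchmark transfer the paper targets.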

Abstract

As the focus in LLM-based coding shifts from static single-step code generation to multi-step agentic interaction with tools and environments, understanding which tasks will challenge agents and why becomes increasingly difficult. This is compounded by current practice: agent performance is typically measured by aggregate pass rates on benchmarks, but single-number metrics obscure the diversity of tasks within a benchmark. We present a framework for predicting success or failure on individual tasks tailored to the agentic coding regime. Our approach augments Item Response Theory (IRT) with rich features extracted from tasks, including issue statements, repository contexts, solutions, and test cases, and introduces a novel decomposition of agent ability into LLM and scaffold ability components. This parameterization enables us to aggregate evaluation data across heterogeneous leaderboards and accurately predict task-level performance for unseen benchmarks, as well as unseen LLM-scaffold combinations. Our methods have practical utility for benchmark designers, who can better calibrate the difficulty of their new tasks without running computationally expensive agent evaluations.