Agent psychometrics: Task-level performance prediction in agentic coding benchmarks
arXiv cs.AI / 4/2/2026
Key Points
- The paper argues that as LLM coding moves toward agentic, multi-step tool-using interactions, aggregate benchmark pass rates are no longer sufficient to explain which specific tasks will be hard for agents.
- It proposes a task-level performance prediction framework by extending Item Response Theory (IRT) with features derived from issue statements, repository context, candidate solutions, and test cases.
- The method decomposes overall agent ability into two components, LLM ability and scaffold ability, allowing more granular modeling of why a given agent succeeds or fails on a given task.
- By parameterizing this way, the framework can aggregate results across heterogeneous leaderboards and predict performance on unseen benchmarks and unseen LLM–scaffold pairings.
- The authors claim practical value for benchmark designers, enabling difficulty calibration of new tasks with less reliance on computationally expensive agent evaluations.
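To make the decomposition concrete, here is a minimal sketch of an IRT-style (Rasch-like) success model in which agent ability is the sum of an LLM component and a scaffold component, and task difficulty is a linear function of task features. All function names, feature names, and numeric values are illustrative assumptions, not taken from the paper.

```python
import math

def predict_success(llm_ability: float,
                    scaffold_ability: float,
                    difficulty: float) -> float:
    """Rasch-style success probability for one (agent, task) pair.

    Ability enters as the sum of an LLM term and a scaffold term,
    mirroring the paper's decomposition (sketch only).
    """
    logit = llm_ability + scaffold_ability - difficulty
    return 1.0 / (1.0 + math.exp(-logit))

def task_difficulty(features: dict[str, float],
                    weights: dict[str, float]) -> float:
    """Illustrative linear difficulty model over task features
    (e.g. issue length, number of files a fix must touch)."""
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical weights; in the paper these would be fit from
# leaderboard outcomes, not hand-chosen.
weights = {"issue_length": 0.002, "files_changed": 0.3}

hard = task_difficulty({"issue_length": 900, "files_changed": 6}, weights)
easy = task_difficulty({"issue_length": 150, "files_changed": 1}, weights)

p_hard = predict_success(llm_ability=1.2, scaffold_ability=0.5, difficulty=hard)
p_easy = predict_success(llm_ability=1.2, scaffold_ability=0.5, difficulty=easy)
```

Because abilities and difficulties live on a shared scale, the same fitted parameters can score unseen LLM–scaffold pairings or estimate the difficulty of a new task from its features alone, which is the aggregation-across-leaderboards property the paper claims.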