What Makes a Good Terminal-Agent Benchmark Task: A Guideline for Adversarial, Difficult, and Legible Evaluation Design

arXiv cs.AI / 5/1/2026


Key Points

  • Terminal-agent benchmarks are increasingly used to measure LLMs’ coding and system-administration abilities, but task authorship is often rushed without rigorous adversarial checking of the verification logic.
  • The paper argues that benchmark tasks should be written to challenge agents (adversarial, difficult, and legible), not like prompts that aim to make the agent succeed.
  • It catalogs common benchmark failure modes such as instruction-following loopholes, overly rigid specifications, clerical burden, “oracle” solutions requiring hidden knowledge, incorrect test targets, and environments susceptible to reward hacking; see the verification sketch after this list.
  • The authors present empirical evidence that more than 15% of tasks in popular terminal-agent benchmarks are reward-hackable, and they suggest that meaningful difficulty is largely conceptual rather than dependent on the environment.
  • The guideline is intended for benchmark maintainers and contributors, as well as researchers who rely on benchmark scores as evidence, to improve evaluation integrity and interpretability.
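
To make the reward-hacking failure mode concrete, here is a minimal sketch, not taken from the paper, of two ways a task's verification logic might check an agent's work. The task, file names, and functions (sort_numbers.py, output.txt, naive_verifier, behavioral_verifier) are invented for illustration; the assumed task is "write sort_numbers.py, which reads a JSON list of integers on stdin and prints the sorted list as JSON".

```python
# Hypothetical sketch (not from the paper): two ways to verify a benchmark task.
import json
import random
import subprocess


def naive_verifier() -> bool:
    # Reward-hackable: compares a file against a fixed expected string, so an
    # agent can pass by writing that string directly instead of solving the task.
    expected = "[1, 2, 3, 4, 5]"
    with open("output.txt") as f:
        return f.read().strip() == expected


def behavioral_verifier() -> bool:
    # Harder to game: generates fresh inputs at check time and verifies the
    # program's behavior rather than a pre-known artifact.
    for _ in range(20):
        nums = [random.randint(-1000, 1000) for _ in range(random.randint(1, 50))]
        proc = subprocess.run(
            ["python3", "sort_numbers.py"],
            input=json.dumps(nums),
            capture_output=True,
            text=True,
            timeout=10,
        )
        if proc.returncode != 0 or json.loads(proc.stdout) != sorted(nums):
            return False
    return True


if __name__ == "__main__":
    print("PASS" if behavioral_verifier() else "FAIL")
```

The difference is where the ground truth lives: the naive check leaves the full answer sitting in the environment for the agent to copy or overwrite, while the behavioral check derives it at verification time from inputs the agent never sees in advance.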

Abstract

Terminal-agent benchmarks have become a primary signal for measuring the coding and system-administration capabilities of large language models. As the market for evaluation environments grows, so does the pressure to ship tasks quickly, often without thorough adversarial review of the verification logic. This paper is a guideline for writing good benchmark tasks, drawn from over a year of contributing to and reviewing tasks for Terminal Bench. Most people write benchmark tasks the way they write prompts. They shouldn't. A prompt is designed to help the agent succeed; a benchmark is designed to find out if it can. We argue that good tasks are adversarial, difficult, and legible, and that a large class of common failure modes -- AI-generated instructions, over-prescriptive specifications, clerical difficulty, oracle solutions that assume hidden knowledge, tests that validate the wrong things, and reward-hackable environments -- are predictable consequences of treating task authoring as prompt authoring. We catalog these failure modes, argue that real difficulty is conceptual rather than environmental, and discuss recent empirical evidence that over 15% of tasks in popular terminal-agent benchmarks are reward-hackable. We hope this serves as a useful reference for benchmark maintainers, task contributors, and researchers using benchmark scores as evidence.
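
As an illustration of the "over-prescriptive specifications" and "tests that validate the wrong things" failure modes named in the abstract, here is a small hypothetical pytest-style contrast, not drawn from the paper. The assumed task and all names (utils.py, dedupe, both test functions) are invented: "implement dedupe(items) in utils.py that removes duplicates while preserving first-occurrence order".

```python
# Hypothetical illustration (not from the paper) of testing the wrong things.

# Over-prescriptive: pins incidental implementation details, so a correct
# solution written differently fails, while a copied-but-wrong one can pass.
def test_dedupe_overprescriptive():
    source = open("utils.py").read()
    assert "seen = set()" in source                 # demands one particular idiom
    assert '"""Remove duplicates."""' in source     # demands an exact docstring


# Behavior-focused: checks the observable contract on varied inputs.
def test_dedupe_behavior():
    from utils import dedupe
    assert dedupe([3, 1, 3, 2, 1]) == [3, 1, 2]
    assert dedupe([]) == []
    assert dedupe(["a", "a", "b"]) == ["a", "b"]
```

The second test admits any implementation that satisfies the stated contract, which is what a legible, adversarial task should reward; the first one grades prose inside the source file rather than behavior.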