Benchmarks for Trajectory Safety Evaluation and Diagnosis in OpenClaw and Codex: ATBench-Claw and ATBench-CodeX

arXiv cs.AI / April 17, 2026


Key Points

  • ATBench is presented as a benchmark for evaluating and diagnosing agent safety at the trajectory level, designed to remain realistic across diverse execution settings.
  • The paper introduces two domain-customized extensions, ATBench-Claw for OpenClaw and ATBench-CodeX for OpenAI Codex/Codex-runtime, extending ATBench to new tool-and-workflow ecosystems.
  • The core adaptation method is to analyze each execution setting and customize a three-dimensional Safety Taxonomy across risk sources, failure modes, and real-world harms, then use it to generate the benchmark specification.
  • ATBench-Claw focuses on OpenClaw-sensitive execution chains involving tools, skills, sessions, and external actions, while ATBench-CodeX targets trajectories involving repositories, shells, patches, dependencies, approvals, and runtime policy boundaries.
  • The authors argue that this extensibility is important because agent frameworks may stay architecturally stable even as concrete execution environments and product capabilities change rapidly.

Abstract

As agent systems move into increasingly diverse execution settings, trajectory-level safety evaluation and diagnosis require benchmarks that evolve with them. ATBench is a diverse and realistic agent trajectory benchmark for safety evaluation and diagnosis. This report presents ATBench-Claw and ATBench-CodeX, two domain-customized extensions that carry ATBench into the OpenClaw and OpenAI Codex / Codex-runtime settings. The key adaptation mechanism is to analyze each new setting, customize the three-dimensional Safety Taxonomy over risk source, failure mode, and real-world harm, and then use that customized taxonomy to define the benchmark specification consumed by the shared ATBench construction pipeline. This extensibility matters because agent frameworks remain relatively stable at the architectural level even as their concrete execution settings, tool ecosystems, and product capabilities evolve quickly. Concretely, ATBench-Claw targets OpenClaw-sensitive execution chains over tools, skills, sessions, and external actions, while ATBench-CodeX targets trajectories in the OpenAI Codex / Codex-runtime setting over repositories, shells, patches, dependencies, approvals, and runtime policy boundaries. Our emphasis therefore falls on taxonomy customization, domain-specific risk coverage, and benchmark design under a shared ATBench generation framework.
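The adaptation mechanism described above — customize the three-dimensional Safety Taxonomy for a new execution setting, then derive the benchmark specification consumed by the shared construction pipeline — can be sketched roughly as follows. This is an illustrative sketch only: the class and function names (`TaxonomyEntry`, `build_benchmark_spec`) and the example axis values are assumptions for exposition, not the authors' actual pipeline API.

```python
# Hypothetical sketch of taxonomy customization -> benchmark specification.
# All names and axis values are illustrative assumptions, not from the paper.
from dataclasses import dataclass
from itertools import product


@dataclass(frozen=True)
class TaxonomyEntry:
    """One cell of the three-dimensional Safety Taxonomy."""
    risk_source: str      # where the risk enters, e.g. a shell or patch action
    failure_mode: str     # how the agent goes wrong, e.g. approval bypass
    real_world_harm: str  # the concrete consequence, e.g. secret exfiltration


def build_benchmark_spec(risk_sources, failure_modes, harms):
    """Cross the three customized axes into candidate trajectory-level cases."""
    return [TaxonomyEntry(r, f, h)
            for r, f, h in product(risk_sources, failure_modes, harms)]


# A toy customization for a Codex-runtime-like setting:
codex_spec = build_benchmark_spec(
    risk_sources=["shell", "patch", "dependency-install"],
    failure_modes=["approval-bypass", "policy-boundary-violation"],
    harms=["repo-corruption", "secret-exfiltration"],
)
print(len(codex_spec))  # 3 * 2 * 2 = 12 candidate case specs
```

In practice a real pipeline would prune implausible cells and attach concrete tool-call trajectories to each surviving entry; the point of the sketch is only that the customized taxonomy, not the pipeline code, is what changes per setting.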