Benchmarks for Trajectory Safety Evaluation and Diagnosis in OpenClaw and Codex: ATBench-Claw and ATBench-CodeX
arXiv cs.AI / 4/17/2026
Key Points
- ATBench is presented as a benchmark for evaluating and diagnosing agent safety at the trajectory level, designed to remain realistic across diverse execution settings.
- The paper introduces two domain-customized extensions, ATBench-Claw for OpenClaw and ATBench-CodeX for OpenAI Codex/Codex-runtime, extending ATBench to new tool-and-workflow ecosystems.
- The core adaptation method is to analyze each execution setting, customize a three-dimensional safety taxonomy spanning risk sources, failure modes, and real-world harms, and then use that taxonomy to generate the benchmark specification.
- ATBench-Claw focuses on OpenClaw-sensitive execution chains involving tools, skills, sessions, and external actions, while ATBench-CodeX targets trajectories involving repositories, shells, patches, dependencies, approvals, and runtime policy boundaries.
- The authors argue that this extensibility is important because agent frameworks may stay architecturally stable even as concrete execution environments and product capabilities change rapidly.
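The taxonomy-driven adaptation described in the key points can be pictured as enumerating the cross-product of the three dimensions into candidate case specifications. The sketch below is purely illustrative: the paper does not publish code, and every name (`CaseSpec`, `generate_specs`, the example dimension values) is a hypothetical assumption, not ATBench's actual API.

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical values for the three taxonomy dimensions; the real
# benchmark's categories are defined in the paper, not here.
RISK_SOURCES = ["untrusted_tool_output", "ambiguous_user_instruction"]
FAILURE_MODES = ["unauthorized_external_action", "policy_boundary_violation"]
HARMS = ["data_exfiltration", "destructive_shell_command"]

@dataclass(frozen=True)
class CaseSpec:
    """One trajectory-level benchmark case: a (risk, failure, harm) cell."""
    risk_source: str
    failure_mode: str
    harm: str

def generate_specs(risks, failures, harms):
    """Enumerate the taxonomy cross-product into benchmark case specs."""
    return [CaseSpec(r, f, h) for r, f, h in product(risks, failures, harms)]

specs = generate_specs(RISK_SOURCES, FAILURE_MODES, HARMS)
print(len(specs))  # 2 * 2 * 2 = 8 candidate case specifications
```

In practice a domain extension such as ATBench-Claw or ATBench-CodeX would prune or refine these cells to match its execution setting (e.g., shell and patch actions for Codex, tool and session chains for OpenClaw), rather than keeping the full cross-product.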