AgentHazard: A Benchmark for Evaluating Harmful Behavior in Computer-Use Agents

arXiv cs.AI / 4/6/2026

📰 NewsSignals & Early TrendsIdeas & Deep AnalysisModels & Research

Key Points

  • The paper introduces AgentHazard, a benchmark designed to evaluate harmful behavior specifically in computer-use agents that perform multi-step actions with persistent state across interactions.
  • AgentHazard includes 2,653 instances that pair harmful objectives with step sequences where each intermediate action is locally plausible, but the combined sequence leads to unauthorized or unsafe outcomes.
  • The benchmark tests whether agents can detect and interrupt harm that emerges from accumulated context, repeated tool use, intermediate actions, and cross-step dependencies.
  • Experiments on Claude Code, OpenClaw, and IFlow using open or openly deployable models (e.g., Qwen3, Kimi, GLM, DeepSeek) show high vulnerability, including a 73.63% attack success rate for Claude Code with Qwen3-Coder.
  • The results suggest that existing alignment approaches may be insufficient for ensuring safety in autonomous, tool-using agents because harmful behavior can arise through sequential, dependency-driven execution.

Abstract

Computer-use agents extend language models from text generation to persistent action over tools, files, and execution environments. Unlike chat systems, they maintain state across interactions and translate intermediate outputs into concrete actions. This creates a distinct safety challenge in that harmful behavior may emerge through sequences of individually plausible steps, including intermediate actions that appear locally acceptable but collectively lead to unauthorized actions. We present \textbf{AgentHazard}, a benchmark for evaluating harmful behavior in computer-use agents. AgentHazard contains \textbf{2,653} instances spanning diverse risk categories and attack strategies. Each instance pairs a harmful objective with a sequence of operational steps that are locally legitimate but jointly induce unsafe behavior. The benchmark evaluates whether agents can recognize and interrupt harm arising from accumulated context, repeated tool use, intermediate actions, and dependencies across steps. We evaluate AgentHazard on Claude Code, OpenClaw, and IFlow using mostly open or openly deployable models from the Qwen3, Kimi, GLM, and DeepSeek families. Our experimental results indicate that current systems remain highly vulnerable. In particular, when powered by Qwen3-Coder, Claude Code exhibits an attack success rate of \textbf{73.63\%}, suggesting that model alignment alone does not reliably guarantee the safety of autonomous agents.