ClawArena: Benchmarking AI Agents in Evolving Information Environments

arXiv cs.LG / 4/7/2026


Key Points

  • ClawArena is introduced as a new benchmark for AI agents that must maintain correct beliefs as their information environment evolves and heterogeneous sources contradict one another.
  • The benchmark scenarios include hidden ground truth and expose agents to noisy, partial, and sometimes conflicting traces across multi-channel sessions, workspace files, and staged updates.
  • Evaluation targets three coupled abilities: multi-source conflict reasoning, dynamic belief revision, and implicit personalization, organized into a 14-category question taxonomy.
  • It uses two answer formats—multi-choice set selection and shell-based executable checks—to assess both reasoning quality and workspace grounding.
  • Initial experiments across five agent frameworks and five language models find that both model capability and framework design materially affect performance, and that belief-revision difficulty depends on the update design strategy rather than on the mere presence of updates.
  • The release provides 64 scenarios across 8 professional domains, plus code on GitHub.
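The multi-choice set-selection format mentioned above can be scored by comparing the agent's selected option set against the gold set. A minimal sketch follows; the exact-match rule and the function name are assumptions for illustration, not details from the paper, which may use a different scoring scheme (e.g. partial credit).

```python
def score_set_selection(predicted: set[str], gold: set[str]) -> float:
    """Score a set-selection answer against the gold option set.

    Exact-set matching is assumed: the agent must select every correct
    option and no incorrect ones to receive credit.
    """
    return 1.0 if predicted == gold else 0.0


# Example: the agent selects options A and C; gold is {A, C}.
score_set_selection({"A", "C"}, {"A", "C"})  # full credit
score_set_selection({"A"}, {"A", "C"})       # no credit under exact match
```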

Abstract

AI agents deployed as persistent assistants must maintain correct beliefs as their information environment evolves. In practice, evidence is scattered across heterogeneous sources that often contradict one another, new information can invalidate earlier conclusions, and user preferences surface through corrections rather than explicit instructions. Existing benchmarks largely assume static, single-authority settings and do not evaluate whether agents can keep up with this complexity. We introduce ClawArena, a benchmark for evaluating AI agents in evolving information environments. Each scenario maintains a complete hidden ground truth while exposing the agent only to noisy, partial, and sometimes contradictory traces across multi-channel sessions, workspace files, and staged updates. Evaluation is organized around three coupled challenges: multi-source conflict reasoning, dynamic belief revision, and implicit personalization, whose interactions yield a 14-category question taxonomy. Two question formats, multi-choice (set-selection) and shell-based executable checks, test both reasoning and workspace grounding. The current release contains 64 scenarios across 8 professional domains, totaling 1,879 evaluation rounds and 365 dynamic updates. Experiments on five agent frameworks and five language models show that both model capability (15.4% range) and framework design (9.2%) substantially affect performance, that self-evolving skill frameworks can partially close model-capability gaps, and that belief revision difficulty is determined by update design strategy rather than the mere presence of updates. Code is available at https://github.com/aiming-lab/ClawArena.