AI Navigate

ZEBRAARENA: A Diagnostic Simulation Environment for Studying Reasoning-Action Coupling in Tool-Augmented LLMs

arXiv cs.AI / 3/20/2026


Key Points

  • ZebraArena is a procedurally generated diagnostic environment for studying reasoning-action coupling in tool-augmented LLMs, with controllable difficulty and a knowledge-minimal design to limit memorization gains.
  • Tasks in ZebraArena require information available only via targeted tool use, creating an interpretable interface between external information acquisition and deductive reasoning.
  • The environment supports deterministic evaluation with unique solutions and a theoretically optimal query count for measuring efficient tool use; experiments show frontier models such as GPT-5 and Gemini 2.5 Pro achieving only about 60% accuracy on hard instances.
  • The study highlights gaps between theoretical optimality and practical tool usage, noting that GPT-5 uses 70-270% more tool calls than the theoretical optimum, stressing the need for further research into reasoning-with-action in LLMs.
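The efficiency gap in the last point can be made concrete as an excess-call percentage: observed tool calls compared against the environment's theoretical optimum. A minimal sketch (the function name and the example numbers are illustrative, chosen to match the reported 70-270% range, not taken from the paper's data):

```python
def excess_call_pct(observed_calls: int, optimal_calls: int) -> float:
    """Percentage of tool calls beyond the theoretical optimum."""
    if optimal_calls <= 0:
        raise ValueError("optimal_calls must be positive")
    return 100.0 * (observed_calls - optimal_calls) / optimal_calls

# e.g., 17 calls where 10 would suffice -> 70.0 (% excess)
print(excess_call_pct(17, 10))
# 37 calls where 10 would suffice -> 270.0 (% excess)
print(excess_call_pct(37, 10))
```

A model at the theoretical optimum scores 0% on this metric, so the reported 70-270% figure means GPT-5 issues roughly 1.7x to 3.7x the minimum number of queries.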

Abstract

Tool-augmented large language models (LLMs) must tightly couple multi-step reasoning with external actions, yet existing benchmarks often confound this interplay with complex environment dynamics, memorized knowledge, or dataset contamination. In this paper, we introduce ZebraArena, a procedurally generated diagnostic environment for studying reasoning-action coupling in tool-augmented LLMs, with controllable difficulty and a knowledge-minimal design that limits gains from memorization or dataset contamination. Each task in ZebraArena requires critical information that is available only through targeted tool use, yielding an interpretable interface between external information acquisition and deductive reasoning. This design provides deterministic evaluation via unique solutions and a theoretically optimal query count for measuring efficient tool use. We show that ZebraArena requires a combination of in-depth reasoning and accurate external tool calling, which remains a challenge: frontier reasoning models such as GPT-5 and Gemini 2.5 Pro achieve only 60% accuracy on the hard instances. We also observe a persistent gap between theoretical optimality and practical tool usage; for example, GPT-5 uses 70-270% more tool calls than the theoretical optimum. We highlight the key findings of our evaluation and hope ZebraArena stimulates further research on the interplay between internal reasoning and external action.