Benchmarking LLM Tool-Use in the Wild

arXiv cs.AI / 4/10/2026


Key Points

  • The paper argues that real-world LLM tool-use is “wild” and that benchmark results can be misleading because user interactions are messy, flexible, and multi-turn.
  • It identifies three recurring challenges from observed user behavior: efficiently orchestrating complex compositional tool calls, inferring implicit intent spread across dialogue turns, and dynamically handling instruction transitions that mix task work with clarification and casual conversation.
  • It introduces WildToolBench, a tool-use benchmark designed around real user behavior patterns rather than artificially constrained task setups.
  • In evaluations of 57 LLMs, the study finds no model exceeds 15% accuracy, suggesting a large robustness gap in current agentic tool-use capabilities.
  • The authors conclude that improving tool-use should focus more on the interaction between LLMs, users, and tools than on merely increasing task complexity.
  • The work is positioned as an arXiv research/benchmarking contribution aimed at measuring agentic tool-use more faithfully in practice.
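
The first challenge, orchestrating compositional tool calls, can be made concrete with a small sketch. This is not from the paper: the executor, tool names, and the `$name` reference convention are all hypothetical, chosen only to illustrate why a "topology" of calls (where later calls consume earlier results) is harder than a flat list of independent calls.

```python
# Hypothetical sketch (not the paper's method): executing a compositional
# tool-call plan where some arguments reference earlier calls' outputs.

def run_plan(plan, tools):
    """Execute tool calls in order, substituting '$name' argument values
    with the results of earlier named steps in the plan."""
    results = {}
    for step in plan:
        args = {
            k: results[v[1:]] if isinstance(v, str) and v.startswith("$") else v
            for k, v in step["args"].items()
        }
        results[step["name"]] = tools[step["tool"]](**args)
    return results

# Toy stand-ins for real tools/APIs.
tools = {
    "search_flights": lambda city: {"price": 420, "city": city},
    "book": lambda offer: f"booked {offer['city']} at {offer['price']}",
}

# A two-step topology: 'book' depends on the output of 'search_flights'.
plan = [
    {"name": "offer", "tool": "search_flights", "args": {"city": "Oslo"}},
    {"name": "ticket", "tool": "book", "args": {"offer": "$offer"}},
]

print(run_plan(plan, tools)["ticket"])  # → booked Oslo at 420
```

A model that emits a flat, order-agnostic list of calls cannot express the second step here; the benchmark's point is that real user requests routinely require exactly this kind of dependency-aware orchestration, interleaved with clarification turns.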

Abstract

Fulfilling user needs through multi-turn, multi-step Large Language Model (LLM) tool-use is rarely straightforward. Real user interactions are inherently wild: intricate, messy, and flexible. We identify three key challenges arising from user behavior: compositional tasks that demand efficient orchestration of tool-call topologies; implicit intent spread across dialogue turns that requires contextual inference; and instruction transition, which mixes task queries, clarifications, and casual conversation, forcing LLMs to adjust their policies on the fly. Existing benchmarks overlook these behaviors, making the apparent progress of LLMs on tool-use spurious. To address this, we introduce WildToolBench, an LLM tool-use benchmark grounded in real-world user behavior patterns. Comprehensive evaluations of 57 LLMs reveal that no model achieves an accuracy of more than 15%, indicating a substantial gap in the robustness of LLMs' agentic ability. Controlled experiments and in-depth analyses further indicate that the real challenge for LLM tool-use lies not in artificially complex tasks but in the wild nature of user behavior, emphasizing the need to reconsider the interactions among LLMs, users, and tools.