AgentSearchBench: A Benchmark for AI Agent Search in the Wild

arXiv cs.AI / April 27, 2026


Key Points

  • AgentSearchBench is introduced as a large-scale benchmark to evaluate AI agent search “in the wild” using nearly 10,000 real agents from multiple providers.
  • The benchmark frames agent search as retrieval plus reranking, testing both executable task queries and high-level task descriptions rather than assuming well-specified functionality.
  • It scores relevance with execution-grounded performance signals, since agent capabilities are compositional and execution-dependent and therefore hard to judge from descriptions alone.
  • Experiments show a persistent mismatch between description-based semantic similarity and real-world agent performance, indicating that description-only ranking methods are insufficient (see the sketch after this list).
  • Lightweight behavioral signals, such as execution-aware probing, can substantially improve ranking quality, underscoring the value of execution signals in agent discovery.
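
To make the similarity-performance gap concrete, here is a minimal, self-contained sketch that compares a description-similarity ranking against an execution-grounded ranking using Kendall's tau. Every embedding, agent name, and success rate below is invented for illustration; AgentSearchBench's actual data format and metrics may differ.

```python
"""Minimal sketch: quantify the gap between description-based
similarity rankings and execution-grounded performance rankings.
Every embedding, agent name, and success rate here is invented."""

from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def kendall_tau(xs, ys):
    """Naive O(n^2) Kendall rank correlation between two score lists."""
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            concordant += s > 0
            discordant += s < 0
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical description embeddings for three agents and one query.
query_emb = [0.9, 0.1, 0.3]
agent_embs = {
    "agent_a": [0.8, 0.2, 0.1],
    "agent_b": [0.1, 0.9, 0.4],
    "agent_c": [0.7, 0.3, 0.5],
}
# Hypothetical execution-grounded success rates on the same task.
success_rate = {"agent_a": 0.2, "agent_b": 0.7, "agent_c": 0.5}

agents = list(agent_embs)
sim = [cosine(query_emb, agent_embs[a]) for a in agents]
perf = [success_rate[a] for a in agents]

# A low (or negative) tau means description similarity misranks agents
# relative to how they actually perform when executed.
print(f"Kendall tau (similarity vs. performance): {kendall_tau(sim, perf):.2f}")
```

With these toy numbers the two rankings are fully reversed (tau of -1.0): the agent whose description best matches the query is the worst performer when actually run.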

Abstract

The rapid growth of AI agent ecosystems is transforming how complex tasks are delegated and executed, creating a new challenge of identifying suitable agents for a given task. Unlike traditional tools, agent capabilities are often compositional and execution-dependent, making them difficult to assess from textual descriptions alone. However, existing research and benchmarks typically assume well-specified functionalities, controlled candidate pools, or only executable task queries, leaving realistic agent search scenarios insufficiently studied. We introduce AgentSearchBench, a large-scale benchmark for agent search in the wild, built from nearly 10,000 real-world agents across multiple providers. The benchmark formalizes agent search as retrieval and reranking problems under both executable task queries and high-level task descriptions, and evaluates relevance using execution-grounded performance signals. Experiments reveal a consistent gap between semantic similarity and actual agent performance, exposing the limitations of description-based retrieval and reranking methods. We further show that lightweight behavioral signals, including execution-aware probing, can substantially improve ranking quality, highlighting the importance of incorporating execution signals into agent discovery. Our code is available at https://github.com/Bingo-W/AgentSearchBench.
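
The abstract's closing claim, that lightweight behavioral signals such as execution-aware probing can substantially improve ranking, suggests a simple two-stage pipeline: retrieve candidates by description similarity, then rerank them with a cheap execution probe. The sketch below is a hypothetical illustration of that idea; `probe_agent`, the blend weight `alpha`, and the candidate records are assumptions, not the paper's implementation.

```python
"""Minimal sketch of retrieve-then-rerank with an execution probe.
`probe_agent`, `alpha`, and the candidate pool are assumptions for
illustration, not AgentSearchBench's actual interface."""

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    semantic_score: float  # description-based retrieval score in [0, 1]

def probe_agent(name: str) -> float:
    """Hypothetical lightweight probe: run the agent on a couple of
    cheap canary sub-tasks and return a success score in [0, 1]."""
    canary_results = {"agent_a": 0.0, "agent_b": 1.0, "agent_c": 0.5}
    return canary_results.get(name, 0.0)

def rerank(candidates, alpha=0.5):
    """Rerank by blending description similarity with the probe signal."""
    return sorted(
        candidates,
        key=lambda c: alpha * c.semantic_score + (1 - alpha) * probe_agent(c.name),
        reverse=True,
    )

pool = [
    Candidate("agent_a", 0.92),  # looks best on paper...
    Candidate("agent_b", 0.61),  # ...but executes far less reliably
    Candidate("agent_c", 0.70),
]
for rank, c in enumerate(rerank(pool), start=1):
    print(rank, c.name)
```

In practice the probe budget and blend weight would need tuning; the point is only that even a coarse execution signal can reorder candidates that description similarity alone ranks incorrectly.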