GeoBrowse: A Geolocation Benchmark for Agentic Tool Use with Expert-Annotated Reasoning Traces

arXiv cs.CL / 4/7/2026


Key Points

  • GeoBrowse is introduced as a geolocation benchmark designed to evaluate agentic tool use that must combine fragmented visual cues with knowledge-intensive, multi-hop web verification.
  • The benchmark has two difficulty levels: Level 1 focuses on extracting and composing ambiguous visual cues, while Level 2 adds long-tail knowledge requirements and obfuscation of key entities.
  • To enable rigorous evaluation, the authors release an agentic workflow called GATE, including five “think-with-image” tools and four knowledge-intensive tools, plus expert-annotated, stepwise reasoning traces grounded in verifiable evidence.
  • Experiments indicate that GATE outperforms direct inference and existing open-source agents, and that improvements come more from coherent, level-specific tool-use planning than from simply using more tools.
  • The GeoBrowse benchmark and code are released publicly on GitHub to support trajectory-level analysis and more reliable assessment of tool-using agents.
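The finding that gains come from level-specific tool-use planning rather than raw tool count can be illustrated with a minimal sketch. The tool names and the `plan_tools` helper below are hypothetical, not the actual GATE API; the sketch only shows the idea that a Level 1 plan composes visual cues while a Level 2 plan interleaves knowledge-intensive verification.

```python
# Hypothetical sketch of level-aware tool planning in the spirit of GATE.
# Tool names are illustrative; they are NOT the GeoBrowse/GATE interface.

IMAGE_TOOLS = ["zoom", "crop", "ocr", "detect_landmark", "compare_regions"]
KNOWLEDGE_TOOLS = ["web_search", "map_lookup", "wiki_query", "geocode"]

def plan_tools(level: int) -> list[str]:
    """Return an ordered tool plan for a GeoBrowse-style query.

    Level 1: extract and compose fragmented visual cues only.
    Level 2: additionally verify long-tail, obfuscated entities on the web.
    """
    if level == 1:
        return IMAGE_TOOLS[:3]
    if level == 2:
        # Interleave visual extraction with multi-hop web verification.
        return IMAGE_TOOLS[:3] + KNOWLEDGE_TOOLS[:2]
    raise ValueError("GeoBrowse defines difficulty levels 1 and 2")
```

The point of the sketch is that the Level 2 plan is not merely "Level 1 plus more calls": it selects a different mix of tools matched to the query's knowledge requirements.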

Abstract

Deep research agents integrate fragmented evidence through multi-step tool use. BrowseComp offers a text-only testbed for such agents, but existing multimodal benchmarks rarely require both weak visual-cue composition and BrowseComp-style multi-hop verification. Geolocation is a natural testbed because answers depend on combining multiple ambiguous visual cues and validating them with open-web evidence. We therefore introduce GeoBrowse, a geolocation benchmark that combines visual reasoning with knowledge-intensive multi-hop queries. Level 1 tests extracting and composing fragmented visual cues, and Level 2 increases query difficulty by injecting long-tail knowledge and obfuscating key entities. To support evaluation, we provide an agentic workflow, GATE, with five think-with-image tools and four knowledge-intensive tools, and release expert-annotated stepwise traces grounded in verifiable evidence for trajectory-level analysis. Experiments show that GATE outperforms direct inference and open-source agents, indicating that no-tool, search-only, or image-only setups are insufficient. Gains come from coherent, level-specific tool-use plans rather than more tool calls, as such plans more reliably reach annotated key evidence steps and make fewer errors when integrating evidence into the final decision. The GeoBrowse benchmark and code are available at https://github.com/ornamentt/GeoBrowse