Where Do Vision-Language Models Fail? World Scale Analysis for Image Geolocalization

arXiv cs.CV / 4/20/2026


Key Points

  • The paper evaluates multiple state-of-the-art vision-language models (VLMs) for country-level image geolocalization using only ground-view images in a zero-shot, prompt-based setup.
  • Unlike prior approaches that rely on image matching, GPS metadata, or task-specific training, the study tests purely prompt-driven semantic and geographic inference.
  • Experiments across three geographically diverse datasets show large performance differences between models, indicating uneven robustness and generalization.
  • The findings suggest VLMs can support coarse geolocalization via semantic reasoning, but they struggle to capture fine-grained geographic cues needed for more precise localization.
  • The work is positioned as the first focused comparison of modern VLMs for country-level geolocalization, laying groundwork for future research on multimodal geographic understanding.

Abstract

Image geolocalization has traditionally been addressed through retrieval-based place recognition or geometry-based visual localization pipelines. Recent advances in Vision-Language Models (VLMs) have demonstrated strong zero-shot reasoning capabilities across multimodal tasks, yet their performance in geographic inference remains underexplored. In this work, we present a systematic evaluation of multiple state-of-the-art VLMs for country-level image geolocalization using ground-view imagery only. Instead of relying on image matching, GPS metadata, or task-specific training, we evaluate prompt-based country prediction in a zero-shot setting. The selected models are tested on three geographically diverse datasets to assess their robustness and generalization ability. Our results reveal substantial variation across models, highlighting the potential of semantic reasoning for coarse geolocalization and the limitations of current VLMs in capturing fine-grained geographic cues. This study provides the first focused comparison of modern VLMs for country-level geolocalization and establishes a foundation for future research at the intersection of multimodal reasoning and geographic understanding.
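The evaluation protocol described above reduces to a simple comparison: for each ground-view image, a VLM's prompted country prediction is checked against the ground-truth label, and accuracy is reported overall and per country. The sketch below illustrates that scoring step only; the function and variable names are illustrative and not taken from the paper.

```python
from collections import Counter

def normalize_country(name: str) -> str:
    """Lowercase and strip whitespace so 'France' and ' france ' match."""
    return name.strip().lower()

def country_level_accuracy(predictions, ground_truth):
    """Fraction of images whose predicted country matches the label."""
    assert len(predictions) == len(ground_truth)
    hits = sum(
        normalize_country(p) == normalize_country(g)
        for p, g in zip(predictions, ground_truth)
    )
    return hits / len(ground_truth)

def per_country_accuracy(predictions, ground_truth):
    """Accuracy broken down by ground-truth country, to expose uneven robustness."""
    correct, total = Counter(), Counter()
    for p, g in zip(predictions, ground_truth):
        g_norm = normalize_country(g)
        total[g_norm] += 1
        if normalize_country(p) == g_norm:
            correct[g_norm] += 1
    return {country: correct[country] / n for country, n in total.items()}

# Toy example: one wrong prediction out of four images.
preds  = ["France", "japan", "Brazil", "Germany"]
labels = ["France", "Japan", "Argentina", "Germany"]
print(country_level_accuracy(preds, labels))  # 0.75
```

In practice the `preds` list would come from prompting each VLM with the image and a question such as "Which country was this photo taken in?", repeated across the three evaluation datasets; the per-country breakdown is what surfaces the fine-grained weaknesses the study reports.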