Unifying UAV Cross-View Geo-Localization via 3D Geometric Perception

arXiv cs.CV / 4/3/2026


Key Points

  • The paper addresses UAV cross-view geo-localization in GNSS-denied settings by tackling the geometric mismatch between oblique UAV imagery and orthogonal satellite maps rather than treating perspective distortion as mere appearance noise.
  • It introduces an end-to-end geometry-aware framework that reconstructs local 3D scene structure from multi-view UAV sequences using a Visual Geometry Grounded Transformer (VGGT), then renders a virtual bird’s-eye view (BEV) to orthorectify UAV perspective for alignment with satellite imagery.
  • The BEV representation acts as a geometric intermediary to unify coarse place retrieval with fine-grained pose estimation, improving 3-DoF pose regression accuracy.
  • To scale to multiple location hypotheses efficiently, the method adds a Satellite-wise Attention Block that isolates interactions between each satellite candidate and the reconstructed UAV scene while keeping computational cost linear.
  • The authors release a recalibrated University-1652 dataset with precise coordinate annotations and spatial overlap analysis, and report significant performance gains (robust meter-level localization) on University-1652 and SUES-200 versus existing baselines.
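
The Satellite-wise Attention Block described in the fourth point can be pictured with a toy sketch: each satellite candidate cross-attends to the shared reconstructed UAV scene tokens on its own, so candidates never interact with one another and cost grows linearly in their number. This is an illustrative guess at the mechanism, not the authors' implementation; the function name `satellite_wise_attention` and the token shapes are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v):
    # Scaled dot-product attention: (Tq, d) queries over (Tk, d) keys/values.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def satellite_wise_attention(sat_tokens, uav_tokens):
    # sat_tokens: (N, Ts, d) -- one token set per satellite candidate
    # uav_tokens: (Tu, d)    -- shared tokens of the reconstructed UAV scene (e.g. its BEV)
    # Each candidate attends ONLY to the UAV scene, never to the other
    # candidates, so there is no inter-candidate interference and the
    # total cost is linear in the number of candidates N.
    return np.stack([cross_attention(s, uav_tokens, uav_tokens) for s in sat_tokens])

rng = np.random.default_rng(0)
sat = rng.normal(size=(5, 16, 32))   # 5 candidate satellite tiles
uav = rng.normal(size=(64, 32))      # shared UAV scene tokens
out = satellite_wise_attention(sat, uav)
print(out.shape)  # (5, 16, 32)
```

Because candidates are processed independently, reordering them only reorders the outputs, which is exactly the "no inter-candidate interference" property the summary highlights.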

Abstract

Cross-view geo-localization for Unmanned Aerial Vehicles (UAVs) operating in GNSS-denied environments remains challenging due to the severe geometric discrepancy between oblique UAV imagery and orthogonal satellite maps. Most existing methods address this problem through a decoupled pipeline of place retrieval and pose estimation, implicitly treating perspective distortion as appearance noise rather than an explicit geometric transformation. In this work, we propose a geometry-aware UAV geo-localization framework that explicitly models the 3D scene geometry to unify coarse place recognition and fine-grained pose estimation within a single inference pipeline. Our approach reconstructs a local 3D scene from multi-view UAV image sequences using a Visual Geometry Grounded Transformer (VGGT), and renders a virtual Bird's-Eye View (BEV) representation that orthorectifies the UAV perspective to align with satellite imagery. This BEV serves as a geometric intermediary that enables robust cross-view retrieval and provides spatial priors for accurate 3 Degrees of Freedom (3-DoF) pose regression. To efficiently handle multiple location hypotheses, we introduce a Satellite-wise Attention Block that isolates the interaction between each satellite candidate and the reconstructed UAV scene, preventing inter-candidate interference while maintaining linear computational complexity. In addition, we release a recalibrated version of the University-1652 dataset with precise coordinate annotations and spatial overlap analysis, enabling rigorous evaluation of end-to-end localization accuracy. Extensive experiments on the refined University-1652 benchmark and SUES-200 demonstrate that our method significantly outperforms state-of-the-art baselines, achieving robust meter-level localization accuracy and improved generalization in complex urban environments.
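The virtual BEV rendering step in the abstract can be thought of as an orthographic top-down projection of the reconstructed 3D scene, which removes the oblique perspective so the result is directly comparable to a satellite tile. The sketch below is a minimal illustration of that idea, not the paper's renderer; `render_bev`, the point-cloud layout, and the top-surface (z-buffer) rule are all assumptions.

```python
import numpy as np

def render_bev(points, colors, extent=50.0, resolution=128):
    """Orthographic top-down splat of a reconstructed point cloud.

    points: (N, 3) local coordinates, x/y on the ground plane, z up
    colors: (N, 3) per-point RGB in [0, 1]
    Returns a (resolution, resolution, 3) BEV image where each ground
    cell keeps its highest point (a simple z-buffer), mimicking what an
    orthographic satellite view of the scene would see.
    """
    bev = np.zeros((resolution, resolution, 3))
    zbuf = np.full((resolution, resolution), -np.inf)
    # Map x, y in [-extent, extent] metres to integer pixel indices.
    ij = ((points[:, :2] + extent) / (2 * extent) * resolution).astype(int)
    valid = (ij >= 0).all(axis=1) & (ij < resolution).all(axis=1)
    for (i, j), z, c in zip(ij[valid], points[valid, 2], colors[valid]):
        if z > zbuf[j, i]:        # keep only the top-most surface per cell
            zbuf[j, i] = z
            bev[j, i] = c
    return bev

# Two points over the same ground cell: the higher (e.g. a rooftop) wins.
pts = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 5.0]])
cols = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
bev = render_bev(pts, cols)
```

In the framework's terms, such a rendering would be the geometric intermediary: retrieval and 3-DoF pose regression both operate on an image that shares the satellite map's orthographic geometry.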