AI Navigate

VIRD: View-Invariant Representation through Dual-Axis Transformation for Cross-View Pose Estimation

arXiv cs.CV / 3/16/2026

Opinion · Models & Research

Key Points

  • VIRD introduces a cross-view pose estimation approach that learns a view-invariant representation to bridge the gap between ground and satellite imagery.
  • It constructs horizontal correspondences by applying a polar transform to the satellite view and uses context-enhanced positional attention to reduce vertical misalignment.
  • A view-reconstruction loss further enforces invariance by encouraging the model to reconstruct both the cross-view and original images.
  • On KITTI and VIGOR, VIRD reduces median position and orientation errors by 50.7% and 76.5% (KITTI) and by 18.0% and 46.8% (VIGOR), respectively, without requiring orientation priors.
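The polar transform mentioned above is a standard trick in cross-view localization: the overhead satellite image is resampled around the assumed camera position so that azimuth runs along the horizontal axis, roughly matching the horizontal layout of a ground-view panorama. Below is a minimal numpy sketch of that resampling; the function name, output size, and nearest-neighbour sampling are illustrative choices, not details from the paper.

```python
import numpy as np

def polar_transform(sat, h_out=128, w_out=512):
    """Resample a square overhead image into polar coordinates.

    Output rows correspond to radial distance from the image centre
    (assumed camera location, far at the top) and columns to azimuth,
    so the horizontal axis roughly aligns with a ground panorama.
    Nearest-neighbour sampling keeps the sketch short; a real pipeline
    would typically use bilinear interpolation.
    """
    s = sat.shape[0]                       # assume square input (s, s, c)
    i = np.arange(h_out).reshape(-1, 1)    # radial index (top row = far)
    j = np.arange(w_out).reshape(1, -1)    # azimuth index
    r = (s / 2.0) * (h_out - i) / h_out    # radius shrinks toward bottom row
    theta = 2.0 * np.pi * j / w_out        # azimuth angle in [0, 2*pi)
    x = s / 2.0 + r * np.sin(theta)        # source column
    y = s / 2.0 - r * np.cos(theta)        # source row (theta = 0 points "up")
    x = np.clip(np.round(x).astype(int), 0, s - 1)
    y = np.clip(np.round(y).astype(int), 0, s - 1)
    return sat[y, x]                       # shape (h_out, w_out, c)
```

After this transform, horizontal shifts in the polar image correspond to rotations of the camera, which is what makes horizontal correspondences with the ground view tractable.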

Abstract

Accurate global localization is crucial for autonomous driving and robotics, but GNSS-based approaches often degrade due to occlusion and multipath effects. As an emerging alternative, cross-view pose estimation predicts the 3-DoF camera pose corresponding to a ground-view image with respect to a geo-referenced satellite image. However, existing methods struggle to bridge the significant viewpoint gap between the ground and satellite views mainly due to limited spatial correspondences. We propose a novel cross-view pose estimation method that constructs view-invariant representations through dual-axis transformation (VIRD). VIRD first applies a polar transformation to the satellite view to establish horizontal correspondence, then uses context-enhanced positional attention on the ground and polar-transformed satellite features to resolve vertical misalignment, explicitly mitigating the viewpoint gap. A view-reconstruction loss is introduced to strengthen the view invariance further, encouraging the derived representations to reconstruct the original and cross-view images. Experiments on the KITTI and VIGOR datasets demonstrate that VIRD outperforms the state-of-the-art methods without orientation priors, reducing median position and orientation errors by 50.7% and 76.5% on KITTI, and 18.0% and 46.8% on VIGOR, respectively.
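The view-reconstruction loss described in the abstract asks the learned representations to reconstruct both the original image and the image from the other view. A minimal numpy sketch of such an objective is below; the function names, decoders, and use of an L1 pixel error are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two arrays."""
    return np.abs(a - b).mean()

def view_reconstruction_loss(z_ground, z_sat, ground_img, sat_img,
                             dec_ground, dec_sat):
    """Sum of same-view and cross-view reconstruction errors.

    `dec_ground` / `dec_sat` are hypothetical decoders mapping a
    representation back to ground-view / satellite-view pixels.
    Requiring each representation to reconstruct *both* views pushes
    z_ground and z_sat toward a shared, view-invariant encoding.
    """
    return (
        l1(dec_ground(z_ground), ground_img)   # reconstruct own view
        + l1(dec_sat(z_sat), sat_img)          # reconstruct own view
        + l1(dec_ground(z_sat), ground_img)    # reconstruct the other view
        + l1(dec_sat(z_ground), sat_img)       # reconstruct the other view
    )
```

The cross-view terms are the ones that enforce invariance: if either representation still carries view-specific structure, it cannot reconstruct the opposite view and the loss stays high.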