URoPE: Universal Relative Position Embedding across Geometric Spaces

arXiv cs.CV / April 22, 2026


Key Points

  • The paper introduces URoPE, a universal relative position embedding that extends Rotary Position Embedding (RoPE) beyond fixed geometric spaces (1D sequences and regular 2D/3D grids) to cross-view and cross-dimensional settings.
  • URoPE works by sampling 3D points along each key/value image patch's camera ray at predefined depth anchors, projecting those points into the query image plane, and then applying standard 2D RoPE to the resulting pixel coordinates (see the sketch after this list).
  • URoPE is designed to be parameter-free and intrinsics-aware, while being invariant to the choice of global coordinate systems.
  • The method is compatible with existing RoPE-optimized attention kernels and is evaluated as a plug-in positional encoding across transformer-based tasks including novel view synthesis, 3D object detection, object tracking, and depth estimation.
  • Experiments reportedly show consistent performance gains across 2D-2D, 2D-3D, and temporal scenarios, highlighting URoPE’s generality for geometric reasoning.
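
The projection step lends itself to a short sketch. Below is a minimal NumPy illustration of the coordinate construction described above, assuming pinhole intrinsics and a known relative pose between the two views; the function and argument names (e.g. `depth_anchors`) are illustrative and not taken from the paper's code.

```python
# Minimal sketch of the URoPE coordinate construction (assumed, not the
# authors' implementation): back-project a key/value patch to its camera
# ray, sample it at fixed depth anchors, and project into the query view.
import numpy as np

def project_kv_patch_to_query(uv_kv, K_kv, K_q, R, t, depth_anchors):
    """Project ray samples of a key/value patch into the query image plane.

    uv_kv:         (2,) pixel coordinates of the key/value patch center
    K_kv, K_q:     (3, 3) camera intrinsics of the two views
    R, t:          relative rotation (3, 3) and translation (3,) taking
                   key/value-camera coordinates into the query camera frame
    depth_anchors: (D,) predefined depths at which to sample the ray
    returns:       (D, 2) projected pixel coordinates in the query view
    """
    # Back-project the patch center to a ray with unit depth (z = 1).
    ray = np.linalg.inv(K_kv) @ np.array([uv_kv[0], uv_kv[1], 1.0])
    # Sample 3D points along the ray at each depth anchor.
    pts_kv = depth_anchors[:, None] * ray[None, :]   # (D, 3)
    # Transform into the query camera frame and project with its intrinsics.
    pts_q = pts_kv @ R.T + t                         # (D, 3)
    uvw = pts_q @ K_q.T                              # (D, 3)
    return uvw[:, :2] / uvw[:, 2:3]                  # (D, 2)
```

Note that only the relative pose (R, t) and the two intrinsics enter the computation, which is consistent with the stated properties: the encoding is intrinsics-aware and does not depend on any global coordinate system.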

Abstract

Relative position embedding has become a standard mechanism for encoding positional information in Transformers. However, existing formulations are typically limited to a fixed geometric space, namely 1D sequences or regular 2D/3D grids, which restricts their applicability to many computer vision tasks that require geometric reasoning across camera views or between 2D and 3D spaces. To address this limitation, we propose URoPE, a universal extension of Rotary Position Embedding (RoPE) to cross-view or cross-dimensional geometric spaces. For each key/value image patch, URoPE samples 3D points along the corresponding camera ray at predefined depth anchors and projects them into the query image plane. Standard 2D RoPE can then be applied using the projected pixel coordinates. URoPE is a parameter-free and intrinsics-aware relative position embedding that is invariant to the choice of global coordinate systems, while remaining fully compatible with existing RoPE-optimized attention kernels. We evaluate URoPE as a plug-in positional encoding for transformer architectures across a diverse set of tasks, including novel view synthesis, 3D object detection, object tracking, and depth estimation, covering 2D-2D, 2D-3D, and temporal scenarios. Experiments show that URoPE consistently improves the performance of transformer-based models across all tasks, demonstrating its effectiveness and generality for geometric reasoning. Our project website is: https://urope-pe.github.io/.
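
As a companion to the projection sketch above, the following is a minimal illustration of the "standard 2D RoPE on projected pixel coordinates" step, assuming the common axial layout in which one half of the channels encodes the u coordinate and the other half encodes v; the frequency base 10000 and the function names are assumptions, not the paper's code.

```python
# Minimal sketch of axial 2D RoPE applied at given pixel coordinates
# (assumed formulation; not the authors' implementation).
import numpy as np

def rope_1d(x, pos, base=10000.0):
    """Rotate consecutive channel pairs of x by angles pos * theta_i."""
    d = x.shape[-1]                                  # d must be even
    theta = base ** (-np.arange(0, d, 2) / d)        # (d/2,) frequencies
    ang = pos * theta
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

def rope_2d(x, uv):
    """Axial 2D RoPE: first half of channels rotates by u, second by v."""
    d = x.shape[-1]                                  # d divisible by 4
    return np.concatenate([rope_1d(x[: d // 2], uv[0]),
                           rope_1d(x[d // 2 :], uv[1])])
```

Applying `rope_2d` to queries at their own pixel coordinates and to keys at the projected coordinates makes each attention logit depend only on the 2D offset between the two positions, which is the standard relative-position property of RoPE and explains why the scheme remains compatible with existing RoPE-optimized attention kernels.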