StereoVGGT: A Training-Free Visual Geometry Transformer for Stereo Vision

arXiv cs.CV / 4/1/2026


Key Points

  • The paper argues that many current stereo-vision backbones, built from monocular depth estimation (MDE) models or visual foundation models, are pretrained without explicit camera-pose supervision, which limits stereo geometry performance.
  • It studies VGGT, a visual-geometry grounded transformer pretrained with 3D priors including camera poses, but finds that direct use on stereo tasks degrades geometric details during feature extraction.
  • To address this, the authors propose StereoVGGT, which keeps VGGT frozen and applies a training-free feature adjustment pipeline to reduce geometric degradation and better exploit embedded camera-calibration knowledge.
  • A stereo matching network built on StereoVGGT reportedly ranked first among published methods on the KITTI benchmark, suggesting the approach is an effective stereo backbone.

Abstract

Driven by the advancement of 3D devices, stereo vision tasks, including stereo matching and stereo conversion, have emerged as a critical research frontier. Contemporary stereo vision backbones typically rely on either monocular depth estimation (MDE) models or visual foundation models (VFMs). Crucially, these models are predominantly pretrained without explicit supervision of camera poses. Given that such geometric knowledge is indispensable for stereo vision, the absence of explicit spatial constraints constitutes a significant performance bottleneck for existing architectures. Recognizing that the Visual Geometry Grounded Transformer (VGGT) is a foundation model pretrained on extensive 3D priors, including camera poses, we investigate its potential as a robust backbone for stereo vision tasks. Nevertheless, empirical results indicate that its direct application to stereo vision yields suboptimal performance. We observe that VGGT suffers from pronounced degradation of geometric details during feature extraction. This characteristic conflicts with the requirements of binocular stereo vision, constraining its efficacy for related tasks. To bridge this gap, we propose StereoVGGT, a feature backbone specifically tailored for stereo vision. By keeping VGGT frozen and introducing a training-free feature adjustment pipeline, we mitigate geometric degradation and harness the latent camera-calibration knowledge embedded within the model. A StereoVGGT-based stereo matching network achieved first rank among all published methods on the KITTI benchmark, validating StereoVGGT as a highly effective backbone for stereo vision.
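
The summary names the mechanism (a frozen VGGT plus a training-free feature adjustment) but not its internals. Below is a minimal PyTorch sketch of that pattern: the `StereoFeatureBackbone` wrapper is a hypothetical name, the backbone is any stand-in feature extractor (the real VGGT API differs), and the channel re-normalization is purely illustrative, since the paper's actual adjustment pipeline is not described here.

```python
import torch
import torch.nn.functional as F

class StereoFeatureBackbone(torch.nn.Module):
    """Hypothetical sketch of a frozen backbone with a training-free
    feature adjustment step, in the spirit of StereoVGGT."""

    def __init__(self, backbone: torch.nn.Module):
        super().__init__()
        self.backbone = backbone
        # Keep the pretrained geometry-grounded model frozen: no
        # fine-tuning, matching the training-free premise.
        for p in self.backbone.parameters():
            p.requires_grad_(False)
        self.backbone.eval()

    @torch.no_grad()
    def forward(self, left: torch.Tensor, right: torch.Tensor):
        # Extract per-view feature maps with the frozen backbone.
        f_left = self.backbone(left)    # (B, C, H', W')
        f_right = self.backbone(right)  # (B, C, H', W')
        # Illustrative, parameter-free adjustment: unit-normalize the
        # channel dimension so no single feature direction dominates.
        # The paper's real pipeline is unspecified in this summary.
        f_left = F.normalize(f_left, dim=1)
        f_right = F.normalize(f_right, dim=1)
        return f_left, f_right
```

The adjusted left/right feature maps would then feed a downstream stereo matching head (e.g., a cost-volume network); because the backbone stays frozen and the adjustment has no learnable parameters, the backbone itself requires no additional training.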