SMFormer: Empowering Self-supervised Stereo Matching via Foundation Models and Data Augmentation

arXiv cs.CV / 4/14/2026


Key Points

  • The paper introduces SMFormer, a self-supervised stereo matching framework that addresses failures of the photometric consistency assumption under real-world disturbances.
  • SMFormer integrates a Vision Foundation Model (VFM) with a Feature Pyramid Network (FPN) to obtain more discriminative, disturbance-robust feature representations.
  • It proposes a data augmentation strategy that enforces feature consistency under illumination variations and regularizes disparity output consistency between strongly augmented and standard samples.
  • Experiments on multiple benchmarks show SMFormer reaches state-of-the-art performance among self-supervised stereo methods and can approach supervised-level results.
  • On the challenging Booster benchmark, SMFormer reportedly outperforms some supervised SOTA approaches such as CFNet.

Abstract

Recent self-supervised stereo matching methods have made significant progress. They typically rely on the photometric consistency assumption, which presumes that corresponding points across views share the same appearance. However, this assumption can be violated by real-world disturbances, producing invalid supervisory signals and leaving a significant accuracy gap relative to supervised methods. To address this issue, we propose SMFormer, a framework that integrates more reliable self-supervision guided by a Vision Foundation Model (VFM) and data augmentation. We first combine the VFM with a Feature Pyramid Network (FPN) to obtain discriminative feature representations that are robust to disturbances across diverse scenarios. We then devise an effective data augmentation mechanism that ensures robustness to various transformations: it explicitly enforces consistency between learned features and those affected by illumination variations, and it regularizes the consistency between disparity predictions from strongly augmented samples and those from standard samples. Experiments on multiple mainstream benchmarks demonstrate that SMFormer achieves state-of-the-art (SOTA) performance among self-supervised methods and even competes on par with supervised ones. Remarkably, on the challenging Booster benchmark, SMFormer even outperforms some SOTA supervised methods, such as CFNet.
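The two consistency objectives described in the abstract can be sketched as loss terms. This is a minimal NumPy illustration, not the paper's actual formulation: the function names, the choice of a mean-L1 penalty, and the optional validity mask are all assumptions made here for clarity.

```python
import numpy as np

def feature_consistency_loss(feat_std, feat_aug):
    """Mean L1 distance between feature maps of a standard sample and its
    illumination-augmented counterpart (hypothetical loss; L1 is an assumption)."""
    return np.abs(feat_std - feat_aug).mean()

def disparity_consistency_loss(disp_std, disp_aug, valid_mask=None):
    """Penalize disagreement between the disparity predicted from a strongly
    augmented sample and the disparity predicted from the standard sample,
    optionally restricted to a validity mask."""
    diff = np.abs(disp_aug - disp_std)
    if valid_mask is not None:
        diff = diff[valid_mask]
    return diff.mean()

# Toy example with random "features" and "disparities"
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16, 16))   # C x H x W feature map
disp = rng.random((16, 16)) * 64.0         # H x W disparity map

print(feature_consistency_loss(feats, feats))          # identical views -> 0.0
print(disparity_consistency_loss(disp, disp + 1.0))    # uniform 1 px offset -> 1.0
```

In a training loop, terms like these would be weighted and added to the photometric loss, so that the augmented branch is pulled toward the standard branch's features and disparities rather than toward the (possibly invalid) photometric signal.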