VFM-Recon: Unlocking Cross-Domain Scene-Level Neural Reconstruction with Scale-Aligned Foundation Priors

arXiv cs.CV / 3/16/2026


Key Points

  • VFMRecon offers a scale-aligned, scene-level neural reconstruction framework that leverages transferable vision foundation model priors to handle cross-domain data from monocular videos.
  • A lightweight scale alignment stage restores multiview scale coherence to address scale ambiguity in volumetric fusion.
  • The approach incorporates pretrained VFM features via lightweight task-specific adapters trained for reconstruction while preserving cross-domain robustness.
  • Evaluations on ScanNet (in-distribution) and out-of-distribution TUM RGB-D and Tanks and Temples demonstrate state-of-the-art performance, with Tanks and Temples achieving an F1 score of 70.1 versus 51.8 for VGGT.
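
The paper does not spell out its scale-alignment procedure here, but the idea of restoring multiview scale coherence from scale-ambiguous per-view predictions can be illustrated with a simple least-squares fit: solve for the scalar that best maps a predicted depth map onto reference depths over valid pixels before fusing it into the volume. The function name and mask-based formulation below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def align_depth_scale(pred_depth, ref_depth, mask):
    """Illustrative scale alignment (not the paper's exact method).

    Finds the scalar s minimizing ||s * pred - ref||^2 over pixels where
    mask is True, then rescales the whole prediction. This restores a
    common scale across views so depths can be fused volumetrically.
    """
    p = pred_depth[mask].astype(np.float64)
    r = ref_depth[mask].astype(np.float64)
    s = (p @ r) / (p @ p)  # closed-form least-squares scale factor
    return s * pred_depth, s

# Toy usage: a prediction off by a constant scale is recovered exactly.
pred = np.array([1.0, 2.0, 4.0])
ref = 2.5 * pred
aligned, s = align_depth_scale(pred, ref, np.ones(3, dtype=bool))
```

A per-frame scalar like this is the minimal correction for the scale ambiguity of monocular predictions; richer variants (e.g. scale-and-shift) follow the same least-squares pattern.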

Abstract

Scene-level neural volumetric reconstruction from monocular videos remains challenging, especially under severe domain shifts. Although recent advances in vision foundation models (VFMs) provide transferable, generalized priors learned from large-scale data, their scale-ambiguous predictions are incompatible with the scale consistency required by volumetric fusion. To address this gap, we present VFMRecon, the first attempt to bridge transferable VFM priors with the scale-consistency requirements of scene-level neural reconstruction. Specifically, we first introduce a lightweight scale alignment stage that restores multiview scale coherence. We then integrate pretrained VFM features into the neural volumetric reconstruction pipeline via lightweight task-specific adapters, which are trained for reconstruction while preserving the cross-domain robustness of the pretrained representations. We train our model on the ScanNet train split and evaluate on both the in-distribution ScanNet test split and the out-of-distribution TUM RGB-D and Tanks and Temples datasets. The results demonstrate that our model achieves state-of-the-art performance across all dataset domains. In particular, on the challenging outdoor Tanks and Temples dataset, our model achieves an F1 score of 70.1 in reconstructed-mesh evaluation, substantially outperforming the closest competitor, VGGT, which attains only 51.8.
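
The "lightweight task-specific adapter" pattern the abstract describes, in which small trainable modules sit on top of frozen pretrained features so the backbone's cross-domain robustness is preserved, is commonly realized as a bottleneck residual block. The sketch below is a minimal numpy illustration under that assumption; the class name, dimensions, and zero-initialized up-projection (which makes the adapter an identity map at the start of training) are ours, not details from the paper.

```python
import numpy as np

class FeatureAdapter:
    """Illustrative bottleneck residual adapter over frozen VFM features.

    Down-projects features to a small bottleneck, applies a ReLU, then
    up-projects and adds a skip connection. The up-projection is
    zero-initialized, so the adapter initially passes the pretrained
    features through unchanged and only gradually specializes them.
    """

    def __init__(self, dim, bottleneck, rng):
        self.w_down = rng.standard_normal((dim, bottleneck)) * 0.02
        self.w_up = np.zeros((bottleneck, dim))  # zero init => identity at start

    def __call__(self, feats):
        h = np.maximum(feats @ self.w_down, 0.0)  # ReLU bottleneck
        return feats + h @ self.w_up              # residual connection

# Toy usage: adapt a batch of 4 frozen 8-dim feature vectors.
rng = np.random.default_rng(0)
adapter = FeatureAdapter(dim=8, bottleneck=2, rng=rng)
features = rng.standard_normal((4, 8))
adapted = adapter(features)
```

Because only `w_down` and `w_up` would be trained for the reconstruction task, the frozen backbone weights, and with them the transferable priors, are left untouched.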