Robust 4D Visual Geometry Transformer with Uncertainty-Aware Priors

arXiv cs.CV / 4/13/2026


Key Points

  • The paper proposes a “Robust 4D Visual Geometry Transformer” that reconstructs dynamic 4D scenes by explicitly disentangling dynamic motion cues from static structure and semantic ambiguity.
  • It introduces uncertainty-aware components including entropy-guided subspace projection, geometry purification via local spatial consistency, and uncertainty-weighted cross-view consistency using heteroscedastic maximum likelihood.
  • By modeling depth confidence as a probabilistic weight during multi-view refinement, the method better handles geometric uncertainty caused by motion.
  • Experiments on dynamic benchmarks report substantial gains over existing state-of-the-art methods, including a 13.43% reduction in Mean Accuracy error and a 10.49% improvement in segmentation F-measure.
  • The approach is designed to keep feed-forward inference efficiency and avoid task-specific fine-tuning or per-scene optimization.
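The entropy-guided weighting in the first mechanism can be illustrated with a small sketch. This is not the paper's exact formulation, only a minimal interpretation of "information-theoretic weighting of multi-head attention": heads with sharper (lower-entropy) attention distributions are assumed to carry more reliable motion cues and are weighted up, while diffuse heads are suppressed.

```python
import numpy as np

def entropy_weighted_heads(attn, eps=1e-12):
    """Aggregate multi-head attention maps with entropy-based weights.

    attn: (H, N) array of per-head attention distributions (rows sum to 1).
    Illustrative sketch only: heads with lower Shannon entropy (sharper,
    more confident distributions) receive higher weight, so the combined
    map suppresses diffuse "semantic noise" heads.
    """
    # Shannon entropy of each head's distribution (in nats)
    ent = -np.sum(attn * np.log(attn + eps), axis=1)   # shape (H,)
    # Softmax over negative entropy: confident heads dominate
    w = np.exp(-ent)
    w /= w.sum()
    # Weighted aggregation across heads
    return w @ attn                                     # shape (N,)

# One sharp head and one near-uniform head over 4 tokens
attn = np.array([[0.97, 0.01, 0.01, 0.01],
                 [0.25, 0.25, 0.25, 0.25]])
agg = entropy_weighted_heads(attn)
```

In this toy case the aggregated distribution stays close to the sharp head's peak rather than being averaged down by the uniform head, which is the intended effect of the entropy weighting.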

Abstract

Reconstructing dynamic 4D scenes is an important yet challenging task. While 3D foundation models like VGGT excel in static settings, they often struggle with dynamic sequences where motion causes significant geometric ambiguity. To address this, we present a framework designed to disentangle dynamic and static components by modeling uncertainty across different stages of the reconstruction process. Our approach introduces three synergistic mechanisms: (1) Entropy-Guided Subspace Projection, which leverages information-theoretic weighting to adaptively aggregate multi-head attention distributions, effectively isolating dynamic motion cues from semantic noise; (2) Local-Consistency Driven Geometry Purification, which enforces spatial continuity via radius-based neighborhood constraints to eliminate structural outliers; and (3) Uncertainty-Aware Cross-View Consistency, which formulates multi-view projection refinement as a heteroscedastic maximum likelihood estimation problem, utilizing depth confidence as a probabilistic weight. Experiments on dynamic benchmarks show that our approach outperforms current state-of-the-art methods, reducing Mean Accuracy error by 13.43% and improving segmentation F-measure by 10.49%. Our framework maintains the efficiency of feed-forward inference and requires no task-specific fine-tuning or per-scene optimization.
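The third mechanism's heteroscedastic maximum-likelihood formulation admits a compact sketch. Assuming a per-point Gaussian reprojection-error model (an assumption; the paper's exact loss may differ), each cross-view residual is weighted by a predicted variance, so low-confidence depths, typically in dynamic regions, contribute less:

```python
import numpy as np

def heteroscedastic_nll(residuals, log_sigma2):
    """Per-point heteroscedastic Gaussian negative log-likelihood.

    residuals:  cross-view reprojection errors r_i
    log_sigma2: predicted log-variance per point; exp(log_sigma2) acts
                as a learned depth-confidence, down-weighting unreliable
                (often dynamic) regions.
    Illustrative form: NLL_i = 0.5 * (r_i^2 / sigma_i^2 + log sigma_i^2),
    dropping the constant 0.5 * log(2*pi) term.
    """
    sigma2 = np.exp(log_sigma2)
    return 0.5 * (residuals ** 2 / sigma2 + log_sigma2)

# Same large residual: assigning it higher variance lowers its penalty,
# while the log-variance term prevents inflating variance everywhere.
r = np.array([2.0, 2.0])
loss = heteroscedastic_nll(r, np.log(np.array([1.0, 4.0])))
```

The log-variance term is what makes this a proper likelihood rather than a plain inverse-variance reweighting: the model cannot drive all variances to infinity without paying the `log sigma^2` penalty.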