
UniSem: Generalizable Semantic 3D Reconstruction from Sparse Unposed Images

arXiv cs.CV / 3/19/2026

📰 News · Models & Research

Key Points

  • UniSem introduces a unified framework for semantic-aware 3D reconstruction from sparse, unposed images, addressing instability and incomplete 3D semantics in prior 3D Gaussian Splatting methods.
  • It adds Error-aware Gaussian Dropout (EGD) to suppress redundant Gaussian primitives based on rendering error cues, yielding more stable geometry and improved depth estimation.
  • It also proposes Mix-training Curriculum (MTC) to blend 2D segmenter-lifted semantics with emergent 3D semantic priors through object-level prototype alignment, boosting semantic coherence.
  • Experiments on ScanNet and Replica show strong depth and open-vocabulary 3D segmentation gains, including a 15.2% reduction in depth error and a 3.7% gain in mAcc with 16 views.
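The Error-aware Gaussian Dropout idea in the second bullet can be sketched very roughly: Gaussians whose rendering-error cues are larger are suppressed with higher probability. The function name, the linear error-to-probability scaling, and the `base_rate` parameter below are all illustrative assumptions, not the paper's actual formulation.

```python
import random

def error_aware_dropout(gaussian_errors, base_rate=0.5, rng=None):
    """Toy sketch of error-guided capacity control (EGD-style):
    each Gaussian primitive gets a dropout probability that grows
    with its rendering-error cue. Returns a keep-mask."""
    rng = rng or random.Random()
    lo, hi = min(gaussian_errors), max(gaussian_errors)
    span = (hi - lo) or 1e-8  # avoid division by zero on uniform errors
    keep = []
    for e in gaussian_errors:
        # Normalize the error cue to [0, 1] and scale the dropout rate.
        drop_prob = base_rate * (e - lo) / span
        keep.append(rng.random() >= drop_prob)
    return keep

# Toy usage: 5 Gaussians; the last has the largest error cue,
# so it is the most likely to be suppressed.
errors = [0.01, 0.02, 0.05, 0.10, 0.90]
keep = error_aware_dropout(errors, rng=random.Random(0))
```

The lowest-error Gaussian always survives under this scheme (its dropout probability is zero), which matches the stated goal of keeping geometrically stable primitives while pruning redundancy-prone ones.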

Abstract

Semantic-aware 3D reconstruction from sparse, unposed images remains challenging for feed-forward 3D Gaussian Splatting (3DGS). Existing methods often predict an over-complete set of Gaussian primitives under sparse-view supervision, leading to unstable geometry and inferior depth quality. Meanwhile, they rely solely on 2D segmenter features for semantic lifting, which provides weak 3D-level supervision with limited generalization, resulting in incomplete 3D semantics in novel scenes. To address these issues, we propose UniSem, a unified framework that jointly improves depth accuracy and semantic generalization via two key components. First, Error-aware Gaussian Dropout (EGD) performs error-guided capacity control by suppressing redundancy-prone Gaussians using rendering error cues, producing meaningful, geometrically stable Gaussian representations for improved depth estimation. Second, we introduce a Mix-training Curriculum (MTC) that progressively blends 2D segmenter-lifted semantics with the model's own emergent 3D semantic priors, implemented with object-level prototype alignment to enhance semantic coherence and completeness. Extensive experiments on ScanNet and Replica show that UniSem achieves superior performance in depth prediction and open-vocabulary 3D segmentation across varying numbers of input views. Notably, with 16-view inputs, UniSem reduces depth Rel by 15.2% and improves open-vocabulary segmentation mAcc by 3.7% over strong baselines.
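The Mix-training Curriculum with object-level prototype alignment can be sketched as follows. This is a minimal illustration under stated assumptions: prototypes are taken as per-object mean features, alignment is a cosine-similarity loss, and the curriculum is a simple linear blend from 2D-lifted to emergent 3D prototypes; none of these specifics (names, schedule, loss form) are confirmed by the paper.

```python
import math

def normalize(v):
    """L2-normalize a feature vector (guarding against zero norm)."""
    n = math.sqrt(sum(x * x for x in v)) or 1e-8
    return [x / n for x in v]

def prototype(features):
    """Per-object semantic prototype: mean over the object's point features."""
    d = len(features[0])
    return [sum(f[i] for f in features) / len(features) for i in range(d)]

def alignment_loss(pred_protos, target_protos):
    """Object-level prototype alignment: mean (1 - cosine similarity)."""
    total = 0.0
    for p, t in zip(pred_protos, target_protos):
        p, t = normalize(p), normalize(t)
        total += 1.0 - sum(a * b for a, b in zip(p, t))
    return total / len(pred_protos)

def curriculum_target(proto_2d, proto_3d, step, total_steps):
    """Mix-training curriculum target: blend the 2D segmenter-lifted
    prototype with the model's emergent 3D prototype, shifting weight
    toward 3D as training proceeds (linear schedule is an assumption)."""
    w = step / total_steps
    return [(1 - w) * a + w * b for a, b in zip(proto_2d, proto_3d)]

# Toy usage: early in training the target is the 2D-lifted prototype,
# late in training it is the emergent 3D one.
p2d, p3d = [1.0, 0.0], [0.0, 1.0]
early = curriculum_target(p2d, p3d, step=0, total_steps=10)
late = curriculum_target(p2d, p3d, step=10, total_steps=10)
```

The design intent this mirrors is the abstract's claim: 2D segmenter features bootstrap semantics, while the model's own 3D priors gradually take over as the supervision target, improving coherence and completeness in novel scenes.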