Sparsity-Aware Voxel Attention and Foreground Modulation for 3D Semantic Scene Completion

arXiv cs.CV / 4/8/2026


Key Points

  • The paper targets monocular Semantic Scene Completion (SSC), where most 3D voxels are empty (>93%) and foreground/long-tail classes are rare, making learning and generalization difficult.
  • It proposes VoxSAMNet, a unified framework that explicitly models voxel sparsity and semantic imbalance, using a Dummy Shortcut for Feature Refinement (DSFR) module that routes empty voxels through a shared dummy node while refining occupied voxels with deformable attention (see the sketch after this list).
  • To improve class-relevant representations and reduce overfitting, it introduces a Foreground Modulation Strategy combining Foreground Dropout (FD) and a Text-Guided Image Filter (TGIF).
  • Experiments on SemanticKITTI and SSCBench-KITTI-360 report state-of-the-art results, with mIoU reaching 18.2% (monocular) and 20.2% (stereo), surpassing prior baselines.
  • The authors argue that explicitly modeling voxel sparsity and semantic imbalance is key for efficient and accurate 3D scene completion, motivating future research in semantics-guided sparse 3D architectures.
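
To make the DSFR routing concrete, here is a minimal PyTorch sketch (not the authors' code, written under stated assumptions): empty voxels collapse onto one shared, learnable dummy embedding and skip refinement, while only occupied voxels are refined by attention among themselves. Standard multi-head attention stands in for the paper's deformable attention, and the occupancy mask is assumed to come from a coarse occupancy prediction.

```python
# Sketch of dummy-node routing for sparse voxel attention (illustrative only).
import torch
import torch.nn as nn


class DummyShortcutAttention(nn.Module):
    """Route empty voxels to a shared dummy node; attend only over occupied ones.

    Plain multi-head attention is used here as a stand-in for the paper's
    deformable attention; `occupancy_mask` is assumed to be a coarse
    occupancy estimate produced elsewhere in the network.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.dummy = nn.Parameter(torch.zeros(1, dim))   # shared learnable dummy node
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, voxel_feats: torch.Tensor, occupancy_mask: torch.Tensor):
        # voxel_feats: (N, C) flattened voxel features for one scene
        # occupancy_mask: (N,) bool, True where a voxel is predicted occupied
        out = voxel_feats.clone()

        # Empty voxels bypass refinement: they all share the dummy embedding.
        out[~occupancy_mask] = self.dummy.to(out.dtype)

        # Occupied voxels are refined with attention among themselves only.
        occ = voxel_feats[occupancy_mask].unsqueeze(0)   # (1, M, C)
        if occ.shape[1] > 0:
            refined, _ = self.attn(occ, occ, occ)
            out[occupancy_mask] = self.norm(occ + refined).squeeze(0)
        return out
```

Restricting attention to the occupied subset is what makes the module sparsity-aware: cost scales with the small number of occupied voxels rather than the full grid.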

Abstract

Monocular Semantic Scene Completion (SSC) aims to reconstruct complete 3D semantic scenes from a single RGB image, offering a cost-effective solution for autonomous driving and robotics. However, the inherently imbalanced nature of voxel distributions, where over 93% of voxels are empty and foreground classes are rare, poses significant challenges. Existing methods often suffer from redundant emphasis on uninformative voxels and poor generalization to long-tailed categories. To address these issues, we propose VoxSAMNet (Voxel Sparsity-Aware Modulation Network), a unified framework that explicitly models voxel sparsity and semantic imbalance. Our approach introduces: (1) a Dummy Shortcut for Feature Refinement (DSFR) module that bypasses empty voxels via a shared dummy node while refining occupied ones with deformable attention; and (2) a Foreground Modulation Strategy combining Foreground Dropout (FD) and Text-Guided Image Filter (TGIF) to alleviate overfitting and enhance class-relevant features. Extensive experiments on the public benchmarks SemanticKITTI and SSCBench-KITTI-360 demonstrate that VoxSAMNet achieves state-of-the-art performance, surpassing prior monocular and stereo baselines with mIoU scores of 18.2% and 20.2%, respectively. Our results highlight the importance of sparsity-aware and semantics-guided design for efficient and accurate 3D scene completion, offering a promising direction for future research.
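
The abstract does not detail Foreground Dropout; one plausible reading is to randomly zero a fraction of foreground voxel features during training so the network cannot over-rely on a handful of rare foreground instances. The sketch below follows that reading; the function name, tensor layout, and drop probability are illustrative assumptions rather than the paper's implementation.

```python
# Hedged sketch of a Foreground Dropout-style augmentation (names, tensor
# layout, and behaviour are assumptions, not the authors' implementation).
import torch


def foreground_dropout(voxel_feats: torch.Tensor,
                       foreground_mask: torch.Tensor,
                       drop_prob: float = 0.2,
                       training: bool = True) -> torch.Tensor:
    """voxel_feats: (B, C, X, Y, Z) features; foreground_mask: (B, X, Y, Z) bool."""
    if not training or drop_prob <= 0.0:
        return voxel_feats
    # Per-voxel Bernoulli drop mask, restricted to foreground locations only.
    rand = torch.rand(foreground_mask.shape, device=foreground_mask.device)
    drop = (rand < drop_prob) & foreground_mask
    keep = (~drop).unsqueeze(1).to(voxel_feats.dtype)   # (B, 1, X, Y, Z)
    return voxel_feats * keep
```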