SegVGGT: Joint 3D Reconstruction and Instance Segmentation from Multi-View Images

arXiv cs.CV / 3/23/2026

📰 News / Models & Research

Key Points

  • SegVGGT introduces a unified end-to-end framework that jointly performs feed-forward 3D reconstruction and instance segmentation directly from multi-view RGB images.
  • It leverages object queries that interact with multi-level geometric features to integrate instance identification into the visual geometry grounded transformer.
  • A Frame-level Attention Distribution Alignment (FADA) strategy guides object queries to attend to instance-relevant frames during training, reducing attention dispersion without increasing inference cost.
  • The approach achieves state-of-the-art performance on ScanNetv2 and ScanNet200 and demonstrates strong generalization on ScanNet++.
  • By enabling RGB-only inputs for joint reconstruction and segmentation, SegVGGT reduces reliance on high-quality point clouds and decoupled processing pipelines.
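The FADA idea described above can be sketched in a few lines: aggregate each object query's attention over image tokens into a per-frame distribution, then penalize divergence from a target distribution concentrated on the frames where the instance is visible. This is a hypothetical illustration of the concept, not the paper's implementation; the loss form (KL divergence) and uniform-over-relevant-frames target are assumptions.

```python
import numpy as np

def frame_attention_alignment_loss(attn, frame_ids, target, eps=1e-8):
    """Hypothetical FADA-style training loss (assumed form, not from the paper).

    attn:      (Q, T) softmax attention of Q object queries over T image tokens
    frame_ids: (T,)   index of the source frame (0..F-1) for each token
    target:    (Q, F) target frame distribution per query, e.g. uniform over
               the frames in which the instance actually appears (rows sum to 1)
    """
    Q, _ = attn.shape
    F = target.shape[1]
    # Collapse token-level attention into frame-level mass: (Q, F)
    per_frame = np.zeros((Q, F))
    for f in range(F):
        per_frame[:, f] = attn[:, frame_ids == f].sum(axis=1)
    # KL(target || per_frame), averaged over queries; zero when the query's
    # attention mass already lands on the instance-relevant frames
    kl = (target * (np.log(target + eps) - np.log(per_frame + eps))).sum(axis=1)
    return kl.mean()
```

Because the loss only supervises the attention maps during training, it adds no computation at inference time, matching the "no extra inference overhead" claim.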

Abstract

3D instance segmentation methods typically rely on high-quality point clouds or posed RGB-D scans, require complex multi-stage processing pipelines, and are highly sensitive to reconstruction noise. While recent feed-forward transformers have revolutionized multi-view 3D reconstruction, they remain decoupled from high-level semantic understanding. In this work, we present SegVGGT, a unified end-to-end framework that simultaneously performs feed-forward 3D reconstruction and instance segmentation directly from multi-view RGB images. By introducing object queries that interact with multi-level geometric features, our method deeply integrates instance identification into the visual geometry grounded transformer. To address the severe attention dispersion problem caused by the massive number of global image tokens, we propose the Frame-level Attention Distribution Alignment (FADA) strategy. FADA explicitly guides object queries to attend to instance-relevant frames during training, providing structured supervision without extra inference overhead. Extensive experiments demonstrate that SegVGGT achieves state-of-the-art performance on ScanNetv2 and ScanNet200, outperforming both recent joint models and RGB-D-based approaches, while exhibiting strong generalization on ScanNet++.
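The query-feature interaction described in the abstract can be illustrated with a minimal cross-attention sketch: learnable object queries attend to geometric feature tokens pooled from several transformer levels, so each query gathers instance evidence across all views. This is an assumed simplification for illustration; the paper's actual layer design, feature levels, and dimensions are not specified here.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def query_feature_cross_attention(queries, feature_levels):
    """Illustrative single-head cross-attention (assumed, not the paper's layer).

    queries:        (Q, D) learnable object query embeddings
    feature_levels: list of (N_l, D) geometric feature tokens, one per level
    Returns (Q, D) queries updated with multi-level geometric context.
    """
    tokens = np.concatenate(feature_levels, axis=0)   # stack all levels: (N, D)
    D = queries.shape[1]
    scores = queries @ tokens.T / np.sqrt(D)          # scaled dot-product: (Q, N)
    attn = softmax(scores, axis=-1)                   # each query's distribution
    return attn @ tokens                              # attention-weighted readout
```

The "attention dispersion" problem the abstract mentions arises here: with many views, `tokens` grows very large, and each query's softmax mass spreads thinly across tokens from irrelevant frames, which is what FADA's frame-level supervision is designed to counteract.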