Training-Free Instance-Aware 3D Scene Reconstruction and Diffusion-Based View Synthesis from Sparse Images

arXiv cs.CV / 3/24/2026


Key Points

  • The paper presents a training-free pipeline for reconstructing and rendering 3D indoor scenes from a sparse set of unposed RGB images, avoiding both per-scene optimization and pose preprocessing required by many radiance-field methods.
  • It combines a robust point-cloud reconstruction step with a warping-based anomaly removal strategy to filter unreliable geometry, improving reconstruction quality under limited input.
  • The method lifts 2D segmentation masks into a consistent, instance-aware 3D representation using a warping-guided 2D-to-3D mechanism, enabling more structured scene understanding.
  • For novel view synthesis, it projects the reconstructed point cloud into new viewpoints and refines results with a 3D-aware diffusion model to enhance realism despite missing geometry.
  • The authors show that object-level editing (e.g., instance removal) can be performed by modifying only the point cloud, producing consistent edited views without retraining and supporting efficient, editable 3D content generation.
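
The warping-based anomaly removal in the second point can be illustrated with a simple geometric consistency check: each reconstructed 3D point is projected into another view, and points whose depth or color disagree with that view's observations are discarded. The sketch below is illustrative only; the function name, thresholds, and array layout are assumptions, not the paper's actual API.

```python
import numpy as np

def filter_points_by_warping(points, colors, K, R, t, ref_image, ref_depth,
                             color_thresh=0.1, depth_thresh=0.05):
    """Keep only points whose warp into a reference view is consistent.

    points: (N, 3) world-frame points; colors: (N, 3) per-point RGB in [0, 1].
    K, R, t: intrinsics and pose of the reference camera.
    ref_image, ref_depth: the reference view's RGB image and depth map.
    (A hypothetical sketch of warping-based outlier filtering, not the
    authors' implementation.)
    """
    # Transform points into the reference camera frame and project them.
    cam = (R @ points.T + t[:, None]).T          # (N, 3) camera-frame coords
    z = cam[:, 2]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    H, W = ref_depth.shape
    inside = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    keep = np.zeros(len(points), dtype=bool)
    idx = np.where(inside)[0]

    # Depth consistency: projected depth must match the reference depth map.
    depth_ok = (np.abs(z[idx] - ref_depth[v[idx], u[idx]])
                < depth_thresh * ref_depth[v[idx], u[idx]])
    # Photometric consistency: point color must match the reference pixel.
    color_ok = (np.linalg.norm(colors[idx] - ref_image[v[idx], u[idx]], axis=1)
                < color_thresh)
    keep[idx] = depth_ok & color_ok
    return points[keep], colors[keep]
```

Points that project outside the reference view, or whose warped depth/color disagrees with it, are treated as unreliable geometry and dropped before rendering.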

Abstract

We introduce a novel, training-free system for reconstructing, understanding, and rendering 3D indoor scenes from a sparse set of unposed RGB images. Unlike traditional radiance field approaches that require dense views and per-scene optimization, our pipeline achieves high-fidelity results without any training or pose preprocessing. The system integrates three key innovations: (1) A robust point cloud reconstruction module that filters unreliable geometry using a warping-based anomaly removal strategy; (2) A warping-guided 2D-to-3D instance lifting mechanism that propagates 2D segmentation masks into a consistent, instance-aware 3D representation; and (3) A novel rendering approach that projects the point cloud into new views and refines the renderings with a 3D-aware diffusion model. Our method leverages the generative power of diffusion to compensate for missing geometry and enhances realism, especially under sparse input conditions. We further demonstrate that object-level scene editing such as instance removal can be naturally supported in our pipeline by modifying only the point cloud, enabling the synthesis of consistent, edited views without retraining. Our results establish a new direction for efficient, editable 3D content generation without relying on scene-specific optimization. Project page: https://jiatongxia.github.io/TID3R/
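
The rendering and editing steps described above (project the point cloud into a new view, then refine with a diffusion model; edit objects by deleting their points) can be sketched as a minimal z-buffer splat plus an instance-mask filter. All names below are hypothetical; the diffusion refinement stage is only indicated, not implemented.

```python
import numpy as np

def remove_instance(points, colors, instance_ids, target_id):
    """Object-level edit: delete every point labeled with one 3D instance.

    instance_ids are per-point labels, e.g. lifted from 2D segmentation
    masks. (Illustrative sketch, not the paper's editing interface.)
    """
    mask = instance_ids != target_id
    return points[mask], colors[mask], instance_ids[mask]

def render_point_cloud(points, colors, K, R, t, H, W):
    """Z-buffer splat of the point cloud into a novel view.

    Unfilled pixels remain black; in the full pipeline a 3D-aware
    diffusion model would inpaint these holes (refinement not shown).
    """
    image = np.zeros((H, W, 3))
    zbuf = np.full((H, W), np.inf)
    cam = (R @ points.T + t[:, None]).T
    z = cam[:, 2]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    ok = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    # Keep the nearest point per pixel (painter's-algorithm-free z-test).
    for i in np.where(ok)[0]:
        if z[i] < zbuf[v[i], u[i]]:
            zbuf[v[i], u[i]] = z[i]
            image[v[i], u[i]] = colors[i]
    return image
```

Because edits act directly on the shared point cloud, every novel view rendered afterward reflects the removal consistently, with no per-scene retraining.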