KitchenTwin: Semantically and Geometrically Grounded 3D Kitchen Digital Twins

arXiv cs.CV / 27 March 2026


Key Points

  • The paper addresses a key limitation in 3D kitchen digital twins: transformer-based global point cloud predictions from monocular video lack metric scale and consistent coordinates, making fusion with locally reconstructed object meshes unreliable.
  • It proposes a scale-aware 3D fusion framework that uses a VLM-guided geometric anchoring mechanism to recover real-world metric scale and resolve coordinate mismatches.
  • A geometry-aware registration pipeline enforces physical plausibility by aligning gravity for vertical estimation, applying Manhattan-world structural constraints, and performing collision-free local refinement.
  • Experiments on real indoor kitchen scenes show improved object alignment and geometric consistency, benefiting downstream tasks like multi-primitive fitting and metric measurement.
  • The authors also release an open-source indoor digital twin dataset featuring metrically scaled scenes and semantically grounded, registered object-centric mesh annotations.
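The paper does not spell out the exact anchoring computation, but the core idea of VLM-guided metric scale recovery can be sketched as rescaling the dimensionless predicted cloud so that an anchor object's predicted extent matches a known real-world size (the function and variable names below are illustrative, not from the paper):

```python
import numpy as np

def recover_metric_scale(points, anchor_extent_pred, anchor_extent_metric):
    """Rescale a dimensionless point cloud so a reference ("anchor") object's
    predicted extent matches its known real-world size.

    points               -- (N, 3) transformer-predicted cloud, arbitrary units
    anchor_extent_pred   -- anchor's extent measured in the predicted cloud
    anchor_extent_metric -- anchor's real-world extent in metres (e.g. a VLM
                            prior such as "a dishwasher is ~0.85 m tall")
    """
    scale = anchor_extent_metric / anchor_extent_pred
    return points * scale, scale

# Toy example: the anchor spans 2.0 predicted units but is known to be 0.85 m.
cloud = np.array([[0.0, 0.0, 0.0],
                  [0.0, 0.0, 2.0],
                  [1.0, 1.0, 1.0]])
scaled, s = recover_metric_scale(cloud, anchor_extent_pred=2.0,
                                 anchor_extent_metric=0.85)
print(s)             # 0.425
print(scaled[1, 2])  # 0.85
```

Once a single global scale factor is recovered, locally reconstructed object meshes and the global cloud live in the same metric frame and can be registered against each other.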

Abstract

Embodied AI training and evaluation require object-centric digital twin environments with accurate metric geometry and semantic grounding. Recent transformer-based feedforward reconstruction methods can efficiently predict global point clouds from sparse monocular videos, yet these geometries suffer from inherent scale ambiguity and inconsistent coordinate conventions. This mismatch prevents the reliable fusion of these dimensionless point cloud predictions with locally reconstructed object meshes. We propose a novel scale-aware 3D fusion framework that registers visually grounded object meshes with transformer-predicted global point clouds to construct metrically consistent digital twins. Our method introduces a Vision-Language Model (VLM)-guided geometric anchor mechanism that resolves this fundamental coordinate mismatch by recovering an accurate real-world metric scale. To fuse the outputs of these networks, we propose a geometry-aware registration pipeline that explicitly enforces physical plausibility through gravity-aligned vertical estimation, Manhattan-world structural constraints, and collision-free local refinement. Experiments on real indoor kitchen environments demonstrate improved cross-network object alignment and geometric consistency for downstream tasks, including multi-primitive fitting and metric measurement. We additionally introduce an open-source indoor digital twin dataset with metrically scaled scenes and semantically grounded, registered object-centric mesh annotations.
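The registration constraints in the abstract (gravity-aligned vertical estimation and Manhattan-world structure) can be illustrated with a minimal sketch: rotate the scene so an estimated "up" direction maps to +z, then snap object headings to the nearest 90° axis. This is an assumption-laden toy version, not the paper's pipeline:

```python
import numpy as np

def rotation_between(a, b):
    """Smallest rotation matrix taking unit vector a onto unit vector b
    (Rodrigues' formula)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.isclose(c, -1.0):  # antiparallel: rotate 180 deg about any orthogonal axis
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

def snap_yaw_to_manhattan(yaw):
    """Snap a heading (radians) to the nearest multiple of 90 degrees."""
    step = np.pi / 2.0
    return round(yaw / step) * step

# Gravity alignment: map an estimated up direction onto +z ...
up_est = np.array([0.1, 0.0, 0.995])
R = rotation_between(up_est, np.array([0.0, 0.0, 1.0]))
print(np.allclose(R @ (up_est / np.linalg.norm(up_est)), [0.0, 0.0, 1.0]))  # True
# ... then enforce Manhattan structure on an object's estimated heading.
print(snap_yaw_to_manhattan(np.deg2rad(87.0)))  # close to pi/2
```

In a full pipeline these hard constraints would be followed by the collision-free local refinement the abstract mentions, so snapped objects do not interpenetrate.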