UniCon3R: Contact-aware 3D Human-Scene Reconstruction from Monocular Video

arXiv cs.CV · April 23, 2026


Key Points

  • UniCon3R introduces a feed-forward framework for real-time 4D human-scene reconstruction from monocular video, producing world-coordinate human motion and scene geometry jointly.
  • The work argues that prior methods’ physically implausible artifacts (e.g., floating bodies or penetrations) stem from not modeling human-environment physical interactions.
  • UniCon3R predicts 3D human-scene contact from human pose and scene geometry, and uses contact not only as an auxiliary signal but as an active corrective cue during pose generation.
  • Experiments on RICH, EMDB, 3DPW, and SLOPER4D show improved physical plausibility and better global human motion estimation versus state-of-the-art baselines, while maintaining real-time online inference.
  • The authors claim contact functions as a powerful internal prior for physically grounded joint reconstruction, suggesting a new paradigm for the task.
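The core mechanism described above, predicting which body points touch the scene and then using that prediction to actively correct the pose, can be illustrated with a minimal sketch. The paper does not publish this code; the function below is a hypothetical simplification (brute-force nearest-neighbor lookup, soft blending toward the surface) of the general idea of contact-as-corrective-cue, with all names and parameters invented for illustration.

```python
import numpy as np

def contact_correct_pose(verts, scene_pts, contact_prob, thresh=0.5):
    """Pull likely-contact body vertices onto the nearest scene surface.

    Hypothetical illustration, not the paper's implementation:
      verts        (V, 3) estimated body vertex positions
      scene_pts    (S, 3) reconstructed scene point cloud
      contact_prob (V,)   predicted per-vertex contact probabilities
    """
    # Nearest scene point for each vertex (brute force for clarity).
    dists = np.linalg.norm(verts[:, None, :] - scene_pts[None, :, :], axis=-1)
    nearest = scene_pts[dists.argmin(axis=1)]
    # Only vertices above the contact threshold are corrected, weighted
    # by confidence; low-probability vertices are left untouched.
    w = np.where(contact_prob > thresh, contact_prob, 0.0)[:, None]
    return (1.0 - w) * verts + w * nearest
```

For example, a foot vertex floating 10 cm above a ground point with contact probability 1.0 is snapped onto the ground, while a vertex with probability below the threshold keeps its original position. The real system would feed the contact signal back into pose generation rather than post-processing vertices, but the corrective role of contact is the same.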

Abstract

We introduce UniCon3R (Unified Contact-aware 3D Reconstruction), a unified feed-forward framework for online human-scene 4D reconstruction from monocular videos. Recent feed-forward methods enable real-time world-coordinate human motion and scene reconstruction, but they often produce physically implausible artifacts such as bodies floating above the ground or penetrating parts of the scene. The key reason is that existing approaches fail to model physical interactions between the human and the environment. A natural next step is to predict human-scene contact as an auxiliary output -- yet we find this alone is not sufficient: contact must actively correct the reconstruction. To address this, we explicitly model interaction by inferring 3D contact from the human pose and scene geometry and use the contact as a corrective cue for generating the final pose. This enables UniCon3R to jointly recover high-fidelity scene geometry and spatially aligned 3D humans within the scene. Experiments on standard human-centric video benchmarks such as RICH, EMDB, 3DPW, and SLOPER4D show that UniCon3R outperforms state-of-the-art baselines on physical plausibility and global human motion estimation while achieving real-time online inference. We experimentally demonstrate that contact serves as a powerful internal prior rather than just an external metric, thus establishing a new paradigm for physically grounded joint human-scene reconstruction. Project page: https://surtantheta.github.io/UniCon3R