InHabit: Leveraging Image Foundation Models for Scalable 3D Human Placement

arXiv cs.CV · April 22, 2026


Key Points

  • The paper introduces InHabit, an automatic, scalable pipeline to populate 3D scenes with humans who meaningfully interact with their environment, addressing the lack of large-scale human-scene interaction data.
  • InHabit transfers knowledge from internet-scale 2D image foundation models to 3D by following a render-generate-lift workflow that uses a vision-language model to propose actions, an image-editing model to insert humans, and an optimization step to produce physically plausible SMPL-X bodies aligned with the scene.
  • Using Habitat-Matterport3D, InHabit generates a large-scale photorealistic dataset with 78K samples across 800 building-scale scenes, including full 3D geometry, SMPL-X bodies, and RGB images.
  • Experiments show that adding InHabit’s synthetic data improves RGB-based 3D human-scene reconstruction and contact estimation, and a user study finds the generated data preferred in 78% of comparisons against the state of the art.
  • Overall, the work demonstrates a practical method for creating richer 3D training data by combining foundation models with geometry-aware optimization rather than relying on simple geometric heuristics.
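The render-generate-lift workflow described above can be sketched as a three-stage pipeline. The sketch below is purely illustrative: every function name and data structure is a hypothetical stand-in (the paper does not publish this API), and the model calls are stubbed with placeholder logic to show only the flow of data between the stages.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    """One generated human-scene interaction sample (placeholder fields)."""
    action: str   # action proposed by the vision-language model
    image: str    # edited RGB render with the inserted human
    body: dict    # fitted SMPL-X parameters aligned with the scene

def propose_action(render: str) -> str:
    # Stage 1 (render): a vision-language model looks at the rendered
    # viewpoint and proposes a contextually meaningful action.
    # Stubbed here with a fixed string; a real system would query a VLM.
    return f"sit on the sofa visible in {render}"

def insert_human(render: str, action: str) -> str:
    # Stage 2 (generate): an image-editing model inserts a person
    # performing the proposed action into the render. Stubbed as tagging.
    return f"{render}+human[{action}]"

def lift_to_smplx(edited_image: str, scene_geometry: str) -> dict:
    # Stage 3 (lift): an optimization procedure fits SMPL-X body
    # parameters so the inserted human is physically plausible with
    # respect to the 3D scene geometry. Stubbed with zeroed pose.
    return {"pose": [0.0] * 63, "contact_surface": scene_geometry}

def inhabit_pipeline(render: str, scene_geometry: str) -> Sample:
    # Chain the three stages: render -> generate -> lift.
    action = propose_action(render)
    edited = insert_human(render, action)
    body = lift_to_smplx(edited, scene_geometry)
    return Sample(action=action, image=edited, body=body)

sample = inhabit_pipeline("living_room.png", "floor_mesh")
print(sample.action)
```

Because each stage only consumes the previous stage's output, the pipeline can be run fully automatically over many rendered viewpoints, which is what makes the approach scale to hundreds of building-size scenes.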

Abstract

Training embodied agents to understand 3D scenes as humans do requires large-scale data of people meaningfully interacting with diverse environments, yet such data is scarce. Real-world motion capture is costly and limited to controlled settings, while existing synthetic datasets rely on simple geometric heuristics that ignore rich scene context. In contrast, 2D foundation models trained on internet-scale data have implicitly acquired commonsense knowledge of human-environment interactions. To transfer this knowledge into 3D, we introduce InHabit, a fully automatic and scalable data generator for populating 3D scenes with interacting humans. InHabit follows a render-generate-lift principle: given a rendered 3D scene, a vision-language model proposes contextually meaningful actions, an image-editing model inserts a human, and an optimization procedure lifts the edited result into physically plausible SMPL-X bodies aligned with the scene geometry. Applied to Habitat-Matterport3D, InHabit produces the first large-scale photorealistic 3D human-scene interaction dataset, containing 78K samples across 800 building-scale scenes with complete 3D geometry, SMPL-X bodies, and RGB images. Augmenting standard training data with our samples improves RGB-based 3D human-scene reconstruction and contact estimation, and in a perceptual user study our data is preferred in 78% of cases over the state of the art.