Physically Grounded 3D Generative Reconstruction under Hand Occlusion using Proprioception and Multi-Contact Touch

arXiv cs.CV / 4/13/2026


Key Points

  • The paper presents a multimodal, physically grounded 3D generative reconstruction method for metric-scale amodal object completion under severe hand occlusion using proprioception and multi-contact tactile signals.
  • It represents the object as a pose-aware, camera-aligned signed distance field (SDF), learns a compact structure latent with a Structure-VAE, and then models distributions in that latent space with a conditional flow-matching diffusion model (see the training sketch after this list).
  • Training uses a vision-only pretraining stage followed by finetuning on occluded manipulation scenes, conditioning on visible RGB evidence, occluder/visibility masks, a hand latent representation, and tactile contact information.
  • To improve physical plausibility, the method introduces physics-based objectives and differentiable decoder guidance that penalize hand–object interpenetration and pull the reconstructed surface toward observed contacts (a toy version of these penalties follows the abstract).
  • Experiments in simulation show substantial gains over vision-only baselines for occluded completion, and transfer is validated by deploying the model on a real humanoid robot with an end-effector different from those used during training.
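
As a concrete illustration of the latent generative step, the sketch below trains a velocity field over Structure-VAE latents with a linear probability path, the standard conditional flow-matching recipe. The architecture, the dimensions (`latent_dim`, `cond_dim`), and the concatenation-based fusion in `build_condition` are illustrative assumptions; the paper does not publish these details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def build_condition(rgb_feat, mask_feat, hand_latent, touch_feat):
    """Fuse the conditioning signals (visible-RGB features, occluder/
    visibility mask features, hand latent, tactile features). Simple
    concatenation is an assumption; the paper's fusion is unspecified."""
    return torch.cat([rgb_feat, mask_feat, hand_latent, touch_feat], dim=-1)

class LatentVelocityField(nn.Module):
    """Generic MLP velocity field v_theta(z_t, t, cond) over the
    Structure-VAE latent space (illustrative architecture)."""
    def __init__(self, latent_dim=256, cond_dim=512, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, z_t, t, cond):
        return self.net(torch.cat([z_t, cond, t], dim=-1))

def flow_matching_loss(model, z1, cond):
    """One conditional flow-matching training step on the linear path
    z_t = (1 - t) * z0 + t * z1, whose target velocity is z1 - z0."""
    z0 = torch.randn_like(z1)                      # Gaussian prior sample
    t = torch.rand(z1.shape[0], 1, device=z1.device)
    z_t = (1 - t) * z0 + t * z1                    # point on the path
    return F.mse_loss(model(z_t, t, cond), z1 - z0)
```

At inference the learned velocity field would be integrated from t = 0 to t = 1 (e.g., with a few Euler steps) starting from Gaussian noise, conditioned on the same multimodal evidence.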

Abstract

We propose a multimodal, physically grounded approach for metric-scale amodal object reconstruction and pose estimation under severe hand occlusion. Unlike prior occlusion-aware 3D generation methods that rely only on vision, we leverage physical interaction signals: proprioception provides the posed hand geometry, and multi-contact touch constrains where the object surface must lie, reducing ambiguity in occluded regions. We represent object structure as a pose-aware, camera-aligned signed distance field (SDF) and learn a compact latent space with a Structure-VAE. In this latent space, we train a conditional flow-matching diffusion model, pretraining on vision-only images and finetuning on occluded manipulation scenes while conditioning on visible RGB evidence, occluder/visibility masks, the hand latent representation, and tactile information. Crucially, we incorporate physics-based objectives and differentiable decoder-guidance during finetuning and inference to reduce hand–object interpenetration and to align the reconstructed surface with contact observations. Because our method produces a metric, physically consistent structure estimate, it integrates naturally into existing two-stage reconstruction pipelines, where a downstream module refines geometry and predicts appearance. Experiments in simulation show that adding proprioception and touch substantially improves completion under occlusion and yields physically plausible reconstructions at correct real-world scale compared to vision-only baselines; we further validate transfer by deploying the model on a real humanoid robot with an end-effector different from those used during training.
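
The two physics terms can be read as SDF penalties evaluated through the differentiable decoder: points on the proprioception-posed hand must not have negative object SDF (no interpenetration), and tactile contact points should lie on the zero level set. Below is a minimal sketch under those assumptions; `decode_sdf(z, pts)`, the unit loss weights, and the gradient `step_size` are hypothetical placeholders, not the paper's actual implementation.

```python
import torch

def physics_guidance_loss(decode_sdf, z, hand_pts, contact_pts):
    """Toy version of the physics objectives: decode_sdf(z, pts) returns
    per-point signed distances of the object implied by latent z
    (negative inside); hand_pts come from the posed hand surface and
    contact_pts from tactile sensing."""
    # Interpenetration: hand points inside the object (sdf < 0) are penalized.
    loss_pen = torch.relu(-decode_sdf(z, hand_pts)).mean()
    # Contact: sensed contact points should sit on the surface (sdf ~ 0).
    loss_contact = decode_sdf(z, contact_pts).abs().mean()
    return loss_pen + loss_contact

def guided_step(z, decode_sdf, hand_pts, contact_pts, step_size=0.1):
    """Nudge the latent down the physics-loss gradient between sampler
    steps, analogous to classifier-style guidance through the decoder."""
    z = z.detach().requires_grad_(True)
    loss = physics_guidance_loss(decode_sdf, z, hand_pts, contact_pts)
    (grad,) = torch.autograd.grad(loss, z)
    return (z - step_size * grad).detach()
```

During finetuning the same penalty could be added to the training objective; at inference, `guided_step` can be interleaved with the flow-matching integration steps so the sampled latent stays consistent with the hand pose and the observed contacts.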