AGILE: Hand-Object Interaction Reconstruction from Video via Agentic Generation

arXiv cs.RO / 4/1/2026


Key Points

  • AGILE is a framework for reconstructing dynamic hand-object interactions from monocular video that shifts the paradigm from conventional "reconstruction-centric" pipelines to "agentic generation".
  • A VLM guides a generative model to synthesize a complete, watertight object mesh with high-fidelity texture that resists fragmentation even under heavy occlusion and is ready for use in simulation.
  • To avoid the brittleness of SfM, an "anchor-and-track" strategy initializes the object pose at a single interaction-onset frame using a foundation model, then tracks and propagates it temporally by exploiting the visual similarity between the generated asset and the video observations.
  • A contact-aware optimization integrates semantic, geometric, contact, and interaction-stability constraints to improve physical plausibility; the authors report better overall geometric accuracy and robustness than existing methods on HO3D, DexYCB, and in-the-wild videos.
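The anchor-and-track strategy in the third point can be sketched as a one-shot initialization followed by bidirectional propagation. This is a minimal illustration, not the paper's implementation: `estimate_pose_onset` and `refine_pose` are hypothetical stand-ins for the foundation-model initializer and the similarity-based tracker.

```python
import numpy as np

def estimate_pose_onset(frame):
    """Hypothetical stand-in for foundation-model pose initialization
    at the interaction-onset frame. Returns a 4x4 object pose."""
    return np.eye(4)

def refine_pose(prev_pose, frame):
    """Hypothetical stand-in for the tracker. In AGILE, tracking leans on
    visual similarity between renders of the generated asset and the
    observed frame; here we simply carry the previous pose forward."""
    return prev_pose.copy()

def anchor_and_track(frames, onset_idx):
    """Anchor the pose at a single onset frame, then propagate it
    forward and backward through the whole video."""
    poses = [None] * len(frames)
    poses[onset_idx] = estimate_pose_onset(frames[onset_idx])
    for t in range(onset_idx + 1, len(frames)):   # forward pass
        poses[t] = refine_pose(poses[t - 1], frames[t])
    for t in range(onset_idx - 1, -1, -1):        # backward pass
        poses[t] = refine_pose(poses[t + 1], frames[t])
    return poses
```

The point of the structure is that only one frame ever needs an absolute initialization, so a multi-view SfM stage is never required.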

Abstract

Reconstructing dynamic hand-object interactions from monocular videos is critical for dexterous manipulation data collection and creating realistic digital twins for robotics and VR. However, current methods face two prohibitive barriers: (1) reliance on neural rendering often yields fragmented, non-simulation-ready geometries under heavy occlusion, and (2) dependence on brittle Structure-from-Motion (SfM) initialization leads to frequent failures on in-the-wild footage. To overcome these limitations, we introduce AGILE, a robust framework that shifts the paradigm from reconstruction to agentic generation for interaction learning. First, we employ an agentic pipeline where a Vision-Language Model (VLM) guides a generative model to synthesize a complete, watertight object mesh with high-fidelity texture, independent of video occlusions. Second, bypassing fragile SfM entirely, we propose a robust anchor-and-track strategy. We initialize the object pose at a single interaction onset frame using a foundation model and propagate it temporally by leveraging the strong visual similarity between our generated asset and video observations. Finally, a contact-aware optimization integrates semantic, geometric, and interaction stability constraints to enforce physical plausibility. Extensive experiments on HO3D, DexYCB, and in-the-wild videos reveal that AGILE outperforms baselines in global geometric accuracy while demonstrating exceptional robustness on challenging sequences where prior art frequently collapses. By prioritizing physical validity, our method produces simulation-ready assets validated via real-to-sim retargeting for robotic applications.
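The contact-aware optimization described above can be illustrated with a toy objective that balances an attraction term (contacting hand points should lie on the object surface) against a penetration penalty (the hand must not sink into the object). Everything here is an assumption for illustration: the weights, the `eps` contact margin, and the use of an unsigned point-to-set distance (real systems typically use a signed distance field) are not the paper's formulation.

```python
import numpy as np

def min_dist_to_set(pts, surface):
    """Minimum Euclidean distance from each point in pts to the
    point set surface (brute force, for illustration only)."""
    d = np.linalg.norm(pts[:, None, :] - surface[None, :, :], axis=-1)
    return d.min(axis=1)

def contact_aware_objective(hand_pts, obj_pts,
                            w_contact=1.0, w_pen=10.0, eps=0.005):
    """Toy contact-aware loss: weights and margin are assumptions."""
    d = min_dist_to_set(hand_pts, obj_pts)
    attract = np.clip(d - eps, 0.0, None).mean()    # pull hand to surface
    # crude penetration proxy: distances below the margin count as overlap
    penetrate = np.clip(eps - d, 0.0, None).mean()
    return w_contact * attract + w_pen * penetrate
```

In a full system this term would be summed with the semantic and interaction-stability terms the abstract mentions and minimized over the hand and object pose parameters.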