AI Navigate

Generalized Hand-Object Pose Estimation with Occlusion Awareness

arXiv cs.CV / March 20, 2026

📰 News · Models & Research

Key Points

  • GenHOI is a generalized hand-object pose estimation framework designed to handle heavy occlusion; it integrates hierarchical semantic prompts with hand priors to improve generalization to unseen objects and interactions.
  • The approach encodes object states, hand configurations, and interaction patterns through textual descriptions to learn abstract, high-level representations of hand-object interactions.
  • It employs a multi-modal masked modeling strategy over RGB images, predicted point clouds, and textual descriptions to enable robust occlusion reasoning, with hand priors serving as stable spatial references.
  • Experiments on DexYCB and HO3Dv2 benchmarks show state-of-the-art performance in hand-object pose estimation, demonstrating strong generalization under challenging occlusion conditions.
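
The hierarchical semantic prompt in the second bullet can be sketched as a simple template over three levels of description. The level names, wording, and composition below are illustrative assumptions; the summary does not give the paper's exact prompt format.

```python
# Hypothetical sketch of a hierarchical semantic prompt: compose textual
# descriptions of the object state, hand configuration, and interaction
# pattern into a single prompt for a text encoder. Level templates are
# assumptions, not the paper's actual design.

def build_hierarchical_prompt(object_state, hand_configuration, interaction_pattern):
    """Compose a three-level textual description of a hand-object interaction."""
    levels = [
        f"object state: {object_state}",
        f"hand configuration: {hand_configuration}",
        f"interaction pattern: {interaction_pattern}",
    ]
    # Join the levels from concrete (object) to abstract (interaction),
    # so a single prompt carries all three semantic levels.
    return "; ".join(levels)

prompt = build_hierarchical_prompt(
    object_state="upright mug, handle facing the camera",
    hand_configuration="power grasp, four fingers wrapped",
    interaction_pattern="lifting by the handle",
)
```

In a full pipeline, such a prompt would be fed to a pretrained text encoder whose embedding supplies the high-level interaction cues that compensate for occluded visual evidence.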

Abstract

Generalized 3D hand-object pose estimation from a single RGB image remains challenging due to the large variations in object appearances and interaction patterns, especially under heavy occlusion. We propose GenHOI, a framework for generalized hand-object pose estimation with occlusion awareness. GenHOI integrates hierarchical semantic knowledge with hand priors to enhance model generalization under challenging occlusion conditions. Specifically, we introduce a hierarchical semantic prompt that encodes object states, hand configurations, and interaction patterns via textual descriptions. This enables the model to learn abstract high-level representations of hand-object interactions for generalization to unseen objects and novel interactions while compensating for missing or ambiguous visual cues. To enable robust occlusion reasoning, we adopt a multi-modal masked modeling strategy over RGB images, predicted point clouds, and textual descriptions. Moreover, we leverage hand priors as stable spatial references to extract implicit interaction constraints. This allows reliable pose inference even under significant variations in object shapes and interaction patterns. Extensive experiments on the challenging DexYCB and HO3Dv2 benchmarks demonstrate that our method achieves state-of-the-art performance in hand-object pose estimation.
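
The multi-modal masked modeling strategy described above can be illustrated with a minimal masking step, assuming each modality (RGB patches, predicted point cloud, text) has already been tokenized into a feature array. The mask ratios, token counts, and zero placeholder below are illustrative assumptions; the paper's actual values and learned mask embedding are not given in this summary.

```python
import numpy as np

def mask_modality(tokens, mask_ratio, rng):
    """Replace a random subset of tokens with a [MASK] placeholder.

    tokens: (num_tokens, dim) array for one modality.
    Returns the masked copy and the indices that were masked, which a
    reconstruction head would then be trained to predict.
    """
    n = tokens.shape[0]
    num_masked = int(n * mask_ratio)
    masked_idx = rng.choice(n, size=num_masked, replace=False)
    out = tokens.copy()
    out[masked_idx] = 0.0  # stand-in for a learned mask embedding
    return out, masked_idx

rng = np.random.default_rng(0)
rgb_tokens = rng.standard_normal((196, 256))  # RGB patch tokens (assumed sizes)
pcd_tokens = rng.standard_normal((512, 256))  # predicted point-cloud tokens
txt_tokens = rng.standard_normal((32, 256))   # text tokens from the prompt

# Mask each modality independently; cross-modal attention then lets the
# unmasked modalities help reconstruct the masked one, which is what
# forces the model to reason across modalities under occlusion.
rgb_masked, rgb_idx = mask_modality(rgb_tokens, 0.5, rng)
pcd_masked, pcd_idx = mask_modality(pcd_tokens, 0.5, rng)
txt_masked, txt_idx = mask_modality(txt_tokens, 0.15, rng)
```

Masking visual tokens while leaving the text prompt largely intact mirrors the occlusion-reasoning goal: the textual description stands in for image regions that are hidden at test time.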