ArtHOI: Taming Foundation Models for Monocular 4D Reconstruction of Hand-Articulated-Object Interactions

arXiv cs.CV / March 30, 2026


Key Points

  • The paper addresses the challenge of reconstructing 4D human hand–articulated object interactions from a single monocular RGB video, where prior methods typically assume rigid objects or rely on pre-scanning/multi-view data.
  • It introduces ArtHOI, an optimization-based framework that combines and refines priors from multiple foundation models, correcting the inaccuracies and physically implausible outputs those priors produce on their own.
  • The approach includes Adaptive Sampling Refinement (ASR) to optimize metric scale and pose so a normalized object mesh can be grounded in world space.
  • It also proposes an MLLM-guided hand-object alignment method that uses contact reasoning as constraints during hand–object mesh composition optimization.
  • The work contributes two datasets (ArtHOI-RGBD and ArtHOI-Wild) and reports experiments showing robustness across varied objects and interaction types.
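The paper does not detail its alignment objective here, but the MLLM-guided contact constraint can be pictured as a loss that pulls hand points flagged as "in contact" onto the object surface while pushing other points off it. The sketch below is illustrative only; `contact_alignment_loss` and its margin are assumptions, not the authors' formulation:

```python
import numpy as np

def contact_alignment_loss(hand_pts, obj_pts, contact_idx, margin=0.005):
    """Toy contact-aware alignment loss (illustrative, not the paper's method).

    hand_pts:    (H, 3) hand surface points
    obj_pts:     (O, 3) object surface points
    contact_idx: indices of hand points an MLLM flagged as "in contact"
    margin:      tolerance band around the object surface, in meters
    """
    # Distance from every hand point to its nearest object point
    d = np.linalg.norm(hand_pts[:, None, :] - obj_pts[None, :, :], axis=-1)
    nearest = d.min(axis=1)  # shape (H,)
    # Attraction: flagged contact points should touch the object surface
    attract = np.mean(np.maximum(nearest[contact_idx] - margin, 0.0))
    # Repulsion: points not flagged as contacts should stay off the surface
    non_contact = np.setdiff1d(np.arange(len(hand_pts)), contact_idx)
    repel = np.mean(np.maximum(margin - nearest[non_contact], 0.0))
    return attract + repel
```

In an optimization loop, a term like this would be minimized jointly with reprojection and pose terms so the composed hand and object meshes satisfy the reasoned contacts.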

Abstract

Existing hand-object interaction (HOI) methods are largely limited to rigid objects, while 4D reconstruction methods for articulated objects generally require pre-scanning the object or even multi-view videos. Reconstructing 4D human-articulated-object interactions from a single monocular RGB video remains an unexplored but significant challenge. Fortunately, recent advances in foundation models present a new opportunity to address this highly ill-posed problem. To this end, we introduce ArtHOI, an optimization-based framework that integrates and refines priors from multiple foundation models. Our key contribution is a suite of novel methodologies designed to resolve the inherent inaccuracies and physical implausibility of these priors. In particular, we introduce an Adaptive Sampling Refinement (ASR) method that optimizes an object's metric scale and pose to ground its normalized mesh in world space. Furthermore, we propose a Multimodal Large Language Model (MLLM)-guided hand-object alignment method that uses contact-reasoning information as constraints in hand-object mesh composition optimization. To facilitate a comprehensive evaluation, we also contribute two new datasets, ArtHOI-RGBD and ArtHOI-Wild. Extensive experiments validate the robustness and effectiveness of ArtHOI across diverse objects and interactions. Project: https://arthoi-reconstruction.github.io.
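For intuition on the grounding step ASR addresses: a mesh reconstructed in normalized coordinates has no metric scale, so one must solve for a scale (and pose) that best agrees with observed metric depth. A minimal least-squares sketch for the scale term alone (function name and setup are hypothetical; the paper's ASR additionally refines pose and adaptively samples reliable points):

```python
import numpy as np

def fit_metric_scale(z_normalized, z_metric):
    """Closed-form least-squares scale s minimizing ||s * z_n - z_m||^2.

    z_normalized: depths rendered from the normalized object mesh
    z_metric:     metric depths, e.g. from a monocular depth foundation model
    (Illustrative only, not the paper's ASR formulation.)
    """
    z_n = np.asarray(z_normalized, dtype=float).ravel()
    z_m = np.asarray(z_metric, dtype=float).ravel()
    # Setting d/ds ||s*z_n - z_m||^2 = 0 gives s = (z_n . z_m) / (z_n . z_n)
    return float(z_n @ z_m) / float(z_n @ z_n)
```

Applying the fitted scale to the normalized mesh places it at a metrically plausible depth; a full system would alternate this with pose refinement against the sampled depth points.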