ArtHOI: Taming Foundation Models for Monocular 4D Reconstruction of Hand-Articulated-Object Interactions
arXiv cs.CV / 3/30/2026
💬 Opinion · Signals & Early Trends · Models & Research
Key Points
- The paper addresses the challenge of reconstructing 4D human hand–articulated object interactions from a single monocular RGB video, where prior methods typically assume rigid objects or rely on pre-scanning/multi-view data.
- It introduces ArtHOI, an optimization-based framework that combines and refines priors from multiple foundation models, correcting their inaccuracies and physically implausible predictions.
- The approach includes Adaptive Sampling Refinement (ASR), which optimizes metric scale and pose so that a normalized object mesh can be grounded in metric world space.
- It also proposes an MLLM-guided hand–object alignment method that uses contact reasoning as constraints when optimizing the composed hand and object meshes.
- The work contributes two datasets (ArtHOI-RGBD and ArtHOI-Wild) and reports experiments showing robustness across varied objects and interaction types.
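The grounding step described above can be illustrated with a minimal sketch. This is not the paper's ASR (which also adaptively samples and refines pose); it shows only the simplest version of the underlying idea, assumed here to be a least-squares fit of a global metric scale and translation that maps a normalized object mesh's points onto world-space points (e.g., lifted from estimated metric depth). The function name and setup are illustrative, not from the paper.

```python
import numpy as np

def fit_scale_translation(obj_pts: np.ndarray, world_pts: np.ndarray):
    """Least-squares fit of a global scale s and translation t so that
    s * obj_pts[i] + t ~= world_pts[i] for corresponding points.

    obj_pts:   (N, 3) points on the normalized object mesh.
    world_pts: (N, 3) corresponding world-space points.
    """
    mu_x = obj_pts.mean(axis=0)
    mu_y = world_pts.mean(axis=0)
    xc = obj_pts - mu_x          # centered object points
    yc = world_pts - mu_y        # centered world points
    # Closed-form optimum with rotation held fixed:
    # s = <xc, yc> / <xc, xc>, t = mu_y - s * mu_x
    s = float((xc * yc).sum() / (xc * xc).sum())
    t = mu_y - s * mu_x
    return s, t

# Toy check: recover a known scale and offset from synthetic points.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
Y = 2.5 * X + np.array([0.1, -0.2, 1.0])
s, t = fit_scale_translation(X, Y)
```

A full pipeline would additionally estimate rotation (e.g., via a Procrustes/Umeyama alignment) and iterate against depth observations; this sketch isolates only the metric-scale grounding.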