MoCapAnything V2: End-to-End Motion Capture for Arbitrary Skeletons

arXiv cs.CV / 5/1/2026

Key Points

  • MoCapAnything V2 proposes the first fully end-to-end motion capture framework for arbitrary skeletons, replacing a factorized Video-to-Pose plus non-differentiable inverse-kinematics (IK) pipeline with jointly learned and jointly optimized stages.
  • The work identifies that pose-to-rotation ambiguity comes from missing coordinate-system information: identical joint positions can imply different rotations depending on rest poses and local axis conventions (a minimal sketch follows this list).
  • To resolve this, the method introduces a reference pose–rotation pair from the target asset to anchor both the rotation mapping and the rotation coordinate system, turning rotation prediction into a well-constrained conditional learning problem.
  • It predicts joint positions directly from video (without mesh intermediates), and both stages share a skeleton-aware Global-Local Graph-guided Multi-Head Attention (GL-GMHA) module for coordinated global and local joint reasoning.
  • Experiments on Truebones Zoo and Objaverse report improved accuracy (rotation error dropping from about 17° to roughly 10°, and to 6.54° on unseen skeletons) and substantially faster inference (around 20× faster than mesh-based pipelines).
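
To make the ambiguity in the second bullet concrete, here is a minimal NumPy sketch of the bone-axis twist problem; the single-bone skeleton, names, and angles are illustrative and not from the paper. Two joint rotations that differ only by a twist about the bone axis place the child joint at exactly the same position, so joint positions alone cannot determine rotations; in the paper's formulation, the reference pose-rotation pair from the target asset supplies the missing coordinate-system information.

```python
# Illustrative sketch (NumPy): identical joint positions, different joint rotations.
import numpy as np

def rot_x(theta):
    """Rotation matrix about the x-axis (theta in radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

# Rest pose: one bone of unit length pointing along +x from its parent joint.
bone_offset = np.array([1.0, 0.0, 0.0])

# Two joint rotations that differ only by a 45-degree twist about the bone axis (+x).
R_a = np.eye(3)
R_b = rot_x(np.deg2rad(45.0))

# Forward kinematics: the child joint position under each rotation.
child_a = R_a @ bone_offset
child_b = R_b @ bone_offset

print(np.allclose(child_a, child_b))  # True: same positions, different rotations
```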

Abstract

Recent methods for arbitrary-skeleton motion capture from monocular video follow a factorized pipeline in which a Video-to-Pose network predicts joint positions and an analytical inverse-kinematics (IK) stage recovers joint rotations. While effective, this design is inherently limited: joint positions do not fully determine rotations, leaving degrees of freedom such as bone-axis twist ambiguous, and the non-differentiable IK stage prevents the system from adapting to noisy predictions or optimizing for the final animation objective. In this work, we present the first fully end-to-end framework in which both Video-to-Pose and Pose-to-Rotation are learnable and jointly optimized. We observe that the ambiguity in the pose-to-rotation mapping arises from missing coordinate-system information: the same joint positions can correspond to different rotations under different rest poses and local axis conventions. To resolve this, we introduce a reference pose-rotation pair from the target asset, which, together with the rest pose, not only anchors the mapping but also defines the underlying rotation coordinate system. This formulation turns rotation prediction into a well-constrained conditional problem and enables effective learning. In addition, our model predicts joint positions directly from video without relying on mesh intermediates, improving both robustness and efficiency. Both stages share a skeleton-aware Global-Local Graph-guided Multi-Head Attention (GL-GMHA) module for joint-level local reasoning and global coordination. Experiments on Truebones Zoo and Objaverse show that our method reduces rotation error from about 17° to about 10°, and to 6.54° on unseen skeletons, while achieving roughly 20× faster inference than mesh-based pipelines. Project page: https://animotionlab.github.io/MoCapAnythingV2/
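
As one illustrative reading of the shared GL-GMHA module described above (not the authors' implementation; the shapes, head split, and masking scheme are assumptions), the PyTorch sketch below shows graph-guided multi-head attention over per-joint tokens in which a subset of "local" heads is masked to the skeleton's adjacency while the remaining "global" heads attend over all joints.

```python
# Hypothetical sketch of graph-guided global/local multi-head attention (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalLocalGraphAttention(nn.Module):
    """Toy graph-guided multi-head attention over per-joint tokens."""

    def __init__(self, dim, num_heads=8, num_local_heads=4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.num_local_heads = num_local_heads  # heads restricted to skeleton neighbors
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, adjacency):
        # x: (batch, num_joints, dim) joint tokens
        # adjacency: (num_joints, num_joints) 0/1 skeleton adjacency matrix
        B, J, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(B, J, self.num_heads, self.head_dim).transpose(1, 2)
                   for t in (q, k, v))                      # (B, heads, J, head_dim)
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5

        # Local heads attend only to each joint's skeleton neighbors (plus itself).
        local = (adjacency + torch.eye(J, device=x.device)).clamp(max=1).bool()
        attn[:, :self.num_local_heads] = attn[:, :self.num_local_heads].masked_fill(
            ~local, float("-inf"))
        # The remaining global heads keep full attention over all joints.

        out = F.softmax(attn, dim=-1) @ v                   # (B, heads, J, head_dim)
        out = out.transpose(1, 2).reshape(B, J, D)
        return self.proj(out)


# Toy usage: 24 joints, a single skeleton edge, 256-dim tokens (all values illustrative).
joints = torch.randn(2, 24, 256)
adj = torch.zeros(24, 24)
adj[0, 1] = adj[1, 0] = 1.0
layer = GlobalLocalGraphAttention(dim=256)
print(layer(joints, adj).shape)  # torch.Size([2, 24, 256])
```

Splitting heads between adjacency-masked and unmasked attention is one simple way to combine joint-level local reasoning with global coordination in a single layer, which is the role the paper assigns to GL-GMHA; the actual module may differ in how the graph guidance is injected.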