MoViD: View-Invariant 3D Human Pose Estimation via Motion-View Disentanglement

arXiv cs.CV / 4/7/2026


Key Points

  • MoViD is a new framework for viewpoint-invariant 3D human pose estimation that separates viewpoint information from motion features to improve generalization to unseen camera angles.
  • It uses a dedicated view estimator (based on key joint relationships) plus an orthogonal projection module to disentangle view and motion representations, strengthened by physics-grounded contrastive alignment across views.
  • For efficiency in real-time edge deployment, MoViD uses a frame-by-frame inference pipeline with a view-aware strategy that adaptively activates flip refinement depending on the estimated viewpoint.
  • Experiments on nine public datasets and newly collected multiview UAV and gait datasets report over 24.2% lower pose error versus state-of-the-art methods, robustness under severe occlusions with 60% less training data, and real-time performance at 15 FPS on NVIDIA edge devices.

Abstract

3D human pose estimation is a key enabling technology for applications such as healthcare monitoring, human-robot collaboration, and immersive gaming, but real-world deployment remains challenged by viewpoint variations. Existing methods struggle to generalize to unseen camera viewpoints, require large amounts of training data, and suffer from high inference latency. We propose MoViD, a viewpoint-invariant 3D human pose estimation framework that disentangles viewpoint information from motion features. The key idea is to extract viewpoint information from intermediate pose features and leverage it to enhance both the robustness and efficiency of pose estimation. MoViD introduces a view estimator that models key joint relationships to predict viewpoint information, and an orthogonal projection module to disentangle motion and view features, further enhanced through physics-grounded contrastive alignment across views. For real-time edge deployment, MoViD employs a frame-by-frame inference pipeline with a view-aware strategy that adaptively activates flip refinement based on the estimated viewpoint. Evaluations on nine public datasets and newly collected multiview UAV and gait analysis datasets show that MoViD reduces pose estimation error by over 24.2% compared to state-of-the-art methods, maintains robust performance under severe occlusions with 60% less training data, and achieves real-time inference at 15 FPS on NVIDIA edge devices.
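The view-aware inference strategy described in the abstract can be sketched as a simple gate: test-time flip refinement (running the model on a mirrored frame and averaging the predictions) is activated only when the estimated viewpoint is oblique, saving a forward pass on frontal frames. Everything below is illustrative, not the paper's exact policy: the 17-joint skeleton, the left/right `FLIP_PAIRS` indices, the angle threshold, and the function names are all assumptions.

```python
import numpy as np

# Hypothetical left/right joint index pairs for a 17-joint skeleton
# (the paper's exact skeleton and pairing are not specified here).
FLIP_PAIRS = [(1, 4), (2, 5), (3, 6), (11, 14), (12, 15), (13, 16)]

def flip_pose(pose_3d):
    """Mirror a (J, 3) pose about the x-axis and swap left/right joints."""
    flipped = pose_3d.copy()
    flipped[:, 0] *= -1.0
    for left, right in FLIP_PAIRS:
        flipped[[left, right]] = flipped[[right, left]]
    return flipped

def view_aware_inference(model, frame, view_angle, angle_thresh=30.0):
    """Run flip refinement only when the estimated viewpoint is oblique.

    Near-frontal frames (|view_angle| below the threshold) skip the extra
    flipped forward pass, roughly halving per-frame compute on those
    frames; the threshold and gating rule here are illustrative.
    """
    pose = model(frame)
    if abs(view_angle) < angle_thresh:
        return pose  # frontal view: a single pass suffices
    # Oblique view: flip the frame horizontally, predict, un-flip, average
    pose_flipped = flip_pose(model(np.flip(frame, axis=-1)))
    return 0.5 * (pose + pose_flipped)
```

This kind of gating is a plausible way to reach the reported real-time frame rates on edge hardware: the expensive second forward pass is paid only where mirroring is likely to help.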
