Articulat3D: Reconstructing Articulated Digital Twins From Monocular Videos with Geometric and Motion Constraints

arXiv cs.CV / 3/13/2026


Key Points

  • Articulat3D introduces a framework to reconstruct articulated digital twins from monocular videos by jointly enforcing 3D geometric and motion constraints.
  • It uses Motion Prior-Driven Initialization, which leverages 3D point tracks to exploit the low-dimensional structure of articulated motion, modeling scene dynamics with a compact set of motion bases to softly decompose the scene into multiple rigidly moving groups.
  • It refines this initialization with Geometric and Motion Constraints Refinement, which enforces physically plausible, temporally coherent articulation through learnable kinematic primitives, each parameterized by a joint axis, a pivot point, and per-frame motion scalars.
  • The approach achieves state-of-the-art performance on synthetic benchmarks and real-world monocular videos, significantly advancing the feasibility of digital twin creation under uncontrolled real-world conditions.
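The kinematic primitive in the second refinement stage (a joint axis, a pivot point, and a per-frame motion scalar) can be sketched for a revolute joint using Rodrigues' rotation formula. This is a minimal NumPy illustration of the parameterization, not the paper's implementation; the function name and array shapes are assumptions.

```python
import numpy as np

def rotate_about_joint(points, axis, pivot, angle):
    """Apply a revolute kinematic primitive: rotate (N, 3) points by a
    scalar angle about a joint defined by a unit axis through a pivot.

    Uses Rodrigues' formula: R = I + sin(a) K + (1 - cos(a)) K^2,
    where K is the cross-product matrix of the (normalized) axis.
    """
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    # Rotate about the pivot: translate to the pivot, rotate, translate back.
    return (points - pivot) @ R.T + pivot
```

In this framing, the joint axis and pivot are shared across frames while only the motion scalar (here, the angle) varies per frame, which is what makes the articulation both physically plausible and temporally coherent.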

Abstract

Building high-fidelity digital twins of articulated objects from visual data remains a central challenge. Existing approaches depend on multi-view captures of the object in discrete, static states, which severely constrains their real-world scalability. In this paper, we introduce Articulat3D, a novel framework that constructs such digital twins from casually captured monocular videos by jointly enforcing explicit 3D geometric and motion constraints. We first propose Motion Prior-Driven Initialization, which leverages 3D point tracks to exploit the low-dimensional structure of articulated motion. By modeling scene dynamics with a compact set of motion bases, we facilitate soft decomposition of the scene into multiple rigidly-moving groups. Building on this initialization, we introduce Geometric and Motion Constraints Refinement, which enforces physically plausible articulation through learnable kinematic primitives parameterized by a joint axis, a pivot point, and per-frame motion scalars, yielding reconstructions that are both geometrically accurate and temporally coherent. Extensive experiments demonstrate that Articulat3D achieves state-of-the-art performance on synthetic benchmarks and real-world casually captured monocular videos, significantly advancing the feasibility of digital twin creation under uncontrolled real-world conditions. Our project page is at https://maxwell-zhao.github.io/Articulat3D.
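The initialization step described above, where scene dynamics are modeled with a compact set of motion bases and points are softly assigned to rigidly moving groups, can be illustrated as a weighted blend of per-frame rigid transforms. This is a hedged sketch of that idea only; the function name, array shapes, and the simple soft-assignment scheme are assumptions, not the paper's formulation.

```python
import numpy as np

def reconstruct_tracks(weights, rotations, translations, points0):
    """Blend K rigid motion bases per point to predict point tracks.

    weights:      (N, K) soft assignment of each point to each basis
                  (each row sums to 1).
    rotations:    (T, K, 3, 3) per-frame rotation of each motion basis.
    translations: (T, K, 3) per-frame translation of each motion basis.
    points0:      (N, 3) canonical 3D points.
    Returns (T, N, 3) predicted tracks: each point follows the weighted
    combination of the K rigid motions.
    """
    # moved[t, k, n] = R[t, k] @ points0[n] + translations[t, k]
    moved = np.einsum('tkij,nj->tkni', rotations, points0)
    moved = moved + translations[:, :, None, :]
    # Soft decomposition: blend the K candidate motions per point.
    return np.einsum('nk,tkni->tni', weights, moved)
```

When the weight rows are near one-hot, this reduces to a hard decomposition of the scene into rigidly moving groups, which is the low-dimensional structure the initialization is designed to exploit.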