4DEquine: Disentangling Motion and Appearance for 4D Equine Reconstruction from Monocular Video
arXiv cs.CV / 3/12/2026
Key Points
- It presents 4DEquine, a framework that disentangles 4D horse reconstruction from monocular video into separate motion and appearance sub-problems to improve robustness and efficiency.
- For motion, it introduces a spatio-temporal transformer with a post-optimization stage to regress smooth, pixel-aligned pose and shape sequences from video.
- For appearance, it proposes a feed-forward network that can reconstruct a high-fidelity, animatable 3D Gaussian avatar from as few as a single image, aided by new synthetic datasets VarenPoser and VarenTex.
- It reports state-of-the-art results on the real-world datasets APT36K and AiM while being trained solely on synthetic data, with comprehensive ablations validating both components, and provides a project page.
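The paper does not publish the details of its post-optimization stage, but the idea of refining per-frame transformer predictions into a temporally smooth sequence can be illustrated with a standard formulation: minimize a data term that stays close to the per-frame estimates plus a smoothness penalty on consecutive frames. For a quadratic penalty this reduces to a linear system. The sketch below is a hypothetical, simplified stand-in, not the authors' method; `smooth_pose_sequence` and its `lam` weight are illustrative names.

```python
import numpy as np

def smooth_pose_sequence(poses: np.ndarray, lam: float) -> np.ndarray:
    """Temporally smooth a (T, D) sequence of per-frame pose parameters.

    Solves  argmin_x  sum_t ||x_t - poses_t||^2 + lam * sum_t ||x_{t+1} - x_t||^2,
    i.e. (I + lam * L) x = poses, where L is the path-graph Laplacian.
    This is a toy stand-in for a pose post-optimization stage.
    """
    T = poses.shape[0]
    # Build the Laplacian of the chain graph over frames 0..T-1.
    L = np.zeros((T, T))
    for t in range(T - 1):
        L[t, t] += 1.0
        L[t + 1, t + 1] += 1.0
        L[t, t + 1] -= 1.0
        L[t + 1, t] -= 1.0
    A = np.eye(T) + lam * L
    return np.linalg.solve(A, poses)

# Usage: jittery 1-D "pose" track; larger lam gives a smoother result.
noisy = np.array([[0.0], [1.0], [0.0], [1.0], [0.0]])
smoothed = smooth_pose_sequence(noisy, lam=2.0)
```

With `lam = 0` the system is the identity and the estimates pass through unchanged; increasing `lam` trades per-frame accuracy for temporal smoothness, which is the same trade-off any such post-optimization stage must balance.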