4DEquine: Disentangling Motion and Appearance for 4D Equine Reconstruction from Monocular Video
arXiv cs.CV / 3/12/2026
Key Points
- Presents 4DEquine, a framework that disentangles 4D horse reconstruction from monocular video into separate motion and appearance sub-problems, improving robustness and efficiency.
- For motion, introduces a spatio-temporal transformer with a post-optimization stage that regresses smooth, pixel-aligned pose and shape sequences from video.
- For appearance, proposes a feed-forward network that reconstructs a high-fidelity, animatable 3D Gaussian avatar from as little as a single image, aided by the new synthetic datasets VarenPoser and VarenTex.
- Reports state-of-the-art results on the real datasets APT36K and AiM despite training solely on synthetic data, with comprehensive ablations validating both components; a project page is provided.
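The motion stage above pairs per-frame regression with a post-optimization step that enforces temporal smoothness. A minimal sketch of that idea, with all names and the moving-average smoother being illustrative stand-ins (the paper's actual post-optimization is not specified here):

```python
# Hypothetical sketch of the motion stage: per-frame pose/shape estimates
# are refined by a temporal smoothing pass, standing in for the paper's
# post-optimization. Class and function names are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class FramePose:
    """Per-frame pose/shape parameters (toy 1-D stand-ins)."""
    pose: float
    shape: float

def smooth_poses(seq: List[FramePose], window: int = 3) -> List[FramePose]:
    """Moving-average smoother: a simple proxy for a post-optimization
    stage that trades per-frame jitter for temporal coherence."""
    out = []
    for i in range(len(seq)):
        lo, hi = max(0, i - window // 2), min(len(seq), i + window // 2 + 1)
        chunk = seq[lo:hi]
        out.append(FramePose(
            pose=sum(f.pose for f in chunk) / len(chunk),
            shape=sum(f.shape for f in chunk) / len(chunk),
        ))
    return out

# Toy usage: an oscillating (jittery) pose track becomes smoother.
raw = [FramePose(p, 1.0) for p in [0.0, 1.0, 0.0, 1.0, 0.0]]
smoothed = smooth_poses(raw)
print([round(f.pose, 2) for f in smoothed])  # → [0.5, 0.33, 0.67, 0.33, 0.5]
```

In the full system the raw per-frame estimates would come from the spatio-temporal transformer; the smoothing here only illustrates why a post-optimization pass yields temporally coherent pose sequences.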