ViewSplat: View-Adaptive Dynamic Gaussian Splatting for Feed-Forward Synthesis
arXiv cs.CV · March 27, 2026
Key Points
- ViewSplat is a view-adaptive 3D Gaussian splatting network for novel view synthesis from unposed images that targets the fidelity gap in existing feed-forward (single-step) Gaussian splatting methods.
- Instead of regressing one fixed set of Gaussian primitives for all viewpoints, it learns a view-adaptable latent representation with dynamic MLPs that produce view-dependent residual updates to Gaussian attributes (position, scale, rotation, opacity, color).
- The approach shifts from static primitive regression to view-adaptive dynamic splatting, enabling primitives to correct initial estimation errors during rendering.
- Experiments report state-of-the-art visual fidelity while preserving fast performance, including 17 FPS inference and 154 FPS real-time rendering.
- The work appears as an arXiv preprint and contributes a new architectural idea for improving reconstruction quality without falling back to per-scene optimization.
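The core mechanism described above can be sketched in a few lines: a small dynamic MLP takes each Gaussian's latent feature together with the viewing direction and outputs residual corrections that are added to the base attributes at render time. The sketch below is a minimal illustration under assumed names and dimensions (`latent`, `view_residuals`, the feature size, and the tiny two-layer MLP are all hypothetical); the paper's actual architecture may differ.

```python
# Hypothetical sketch of view-adaptive residual updates to Gaussian attributes.
# All names, shapes, and the two-layer MLP are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

N, F, D = 4, 16, 3  # Gaussians, latent feature dim, view-direction dim

# Base (static) Gaussian attributes, as a one-step feed-forward network
# would regress them once for all viewpoints.
base = {
    "position": rng.normal(size=(N, 3)),
    "scale":    np.full((N, 3), 0.1),
    "rotation": np.tile([1.0, 0.0, 0.0, 0.0], (N, 1)),  # unit quaternions
    "opacity":  np.full((N, 1), 0.5),
    "color":    np.full((N, 3), 0.5),
}

# View-adaptable latent feature per Gaussian (learned in the real model).
latent = rng.normal(size=(N, F))

# Tiny MLP mapping (latent, view dir) -> residuals for all five attributes.
OUT = 3 + 3 + 4 + 1 + 3  # position + scale + rotation + opacity + color
W1 = rng.normal(scale=0.1, size=(F + D, 32))
W2 = rng.normal(scale=0.01, size=(32, OUT))

def view_residuals(view_dir):
    """View-dependent residuals for every Gaussian (shape (N, OUT))."""
    v = np.tile(view_dir, (N, 1))
    h = np.maximum(np.concatenate([latent, v], axis=1) @ W1, 0.0)  # ReLU
    return h @ W2

def adapt(view_dir):
    """Correct the static attributes with residuals for this viewpoint."""
    res = view_residuals(np.asarray(view_dir, dtype=float))
    out, i = {}, 0
    for key, size in [("position", 3), ("scale", 3), ("rotation", 4),
                      ("opacity", 1), ("color", 3)]:
        out[key] = base[key] + res[:, i:i + size]
        i += size
    return out

# Two viewpoints yield two slightly different sets of primitives.
view_a = adapt([0.0, 0.0, 1.0])
view_b = adapt([1.0, 0.0, 0.0])
```

The key design point the sketch illustrates: the expensive geometry is predicted once, while the cheap per-view MLP pass only nudges attributes, which is how the method can remain fast (the reported 17 FPS inference / 154 FPS rendering) while correcting initial estimation errors per viewpoint.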