FluSplat: Sparse-View 3D Editing without Test-Time Optimization
arXiv cs.CV / 4/23/2026
Key Points
- The FluSplat paper introduces a feed-forward approach for cross-view-consistent 3D scene editing from sparse input views.
- Instead of running costly test-time optimization that alternates between 2D diffusion editing and 3D reconstruction, it uses cross-view regularization in the image domain during training.
- Multi-view edits are jointly supervised with geometric alignment constraints so the method can produce view-consistent results without per-scene inference-time refinement.
- Edited views are then lifted into 3D using a feed-forward 3D Gaussian Splatting (3DGS) model in a single forward pass, yielding a coherent 3DGS representation.
- Experiments report competitive editing quality, significantly better cross-view consistency than optimization-based pipelines, and inference times reduced by orders of magnitude.
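To make the cross-view regularization idea concrete, here is a minimal sketch in NumPy. It assumes a simple pairwise color-agreement loss over known pixel correspondences between edited views; the paper's actual regularizer, correspondence source, and training setup are not specified here, so every name and formula below is illustrative only.

```python
import numpy as np

def cross_view_regularizer(edited_views, correspondences):
    """Hypothetical cross-view regularization loss (illustrative, not the paper's).

    edited_views: dict mapping view_id -> (H, W, 3) edited image array.
    correspondences: list of (view_a, view_b, pixels_a, pixels_b), where
        pixels_* are (N, 2) integer arrays of matching (x, y) coordinates.

    Penalizes color disagreement between corresponding pixels across views,
    so that during training the 2D editor is pushed toward view-consistent
    edits without any per-scene test-time optimization.
    """
    total, count = 0.0, 0
    for va, vb, pa, pb in correspondences:
        # Sample edited colors at matching pixels (rows are y, columns are x).
        colors_a = edited_views[va][pa[:, 1], pa[:, 0]]
        colors_b = edited_views[vb][pb[:, 1], pb[:, 0]]
        total += np.mean((colors_a - colors_b) ** 2)
        count += 1
    return total / max(count, 1)

# Toy usage: identical edits across views incur zero penalty; a
# view-inconsistent edit in one view produces a positive loss.
v0 = np.zeros((4, 4, 3))
v1 = np.zeros((4, 4, 3))
pix = np.array([[0, 0], [1, 1]])
corr = [(0, 1, pix, pix)]
consistent_loss = cross_view_regularizer({0: v0, 1: v1}, corr)

v1_bad = v1.copy()
v1_bad[1, 1] = 1.0  # inconsistent edit at pixel (x=1, y=1) in view 1
inconsistent_loss = cross_view_regularizer({0: v0, 1: v1_bad}, corr)
```

In a full training loop, a term like this would be added to the 2D editing objective, and the resulting consistent multi-view edits would then be lifted to 3D by the feed-forward 3DGS model in a single pass.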