OrbitNVS: Harnessing Video Diffusion Priors for Novel View Synthesis
arXiv cs.CV / 3/23/2026
📰 News · Models & Research
Key Points
- OrbitNVS reframes novel view synthesis as an orbit video generation task, leveraging pre-trained video diffusion priors to generate unseen viewpoints with higher quality.
- The approach adds camera adapters to the video model to enable accurate camera control across viewpoints during synthesis.
- A normal map generation branch, whose features are injected via attention guidance, improves geometric consistency across views.
- Pixel-space supervision is employed to reduce blur from latent-space spatial compression, achieving stronger PSNR gains on GSO and OmniObject3D benchmarks, especially in single-view scenarios.
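The pixel-space supervision idea in the last point amounts to adding a loss on decoded frames alongside the usual latent-space objective, so that errors hidden by the VAE's spatial compression are still penalized. A minimal NumPy sketch, where `decode`, the weight `w_pixel`, and all names are illustrative assumptions rather than the paper's actual implementation:

```python
import numpy as np

def combined_loss(pred_latent, target_latent, decode, target_pixels, w_pixel=0.5):
    """Hypothetical combined objective: latent-space MSE plus pixel-space MSE
    on decoded frames, to counter blur from latent spatial compression.
    `decode` stands in for a frozen VAE decoder; names are illustrative."""
    latent_loss = np.mean((pred_latent - target_latent) ** 2)
    pixel_loss = np.mean((decode(pred_latent) - target_pixels) ** 2)
    return latent_loss + w_pixel * pixel_loss

# Toy stand-in "decoder": 8x nearest-neighbor upsampling of the latent grid.
decode = lambda z: np.repeat(np.repeat(z, 8, axis=-2), 8, axis=-1)
z_pred = np.zeros((4, 4))
z_tgt = np.ones((4, 4))
px_tgt = decode(z_tgt)
loss = combined_loss(z_pred, z_tgt, decode, px_tgt)  # 1.0 latent + 0.5 * 1.0 pixel
```

The weighting between the two terms would in practice be tuned; the key point is that the pixel term backpropagates through the decoder, supervising details below the latent grid's resolution.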