GRVS: a Generalizable and Recurrent Approach to Monocular Dynamic View Synthesis

arXiv cs.CV / 4/1/2026


Key Points

  • The paper addresses monocular dynamic view synthesis by targeting two failure modes: scene-specific 4D optimization methods break down in highly dynamic regions, and diffusion-based camera-control methods struggle to produce geometrically consistent outputs.
  • It proposes a generalizable recurrent framework with (1) a recurrent loop for unbounded and asynchronous mapping between input and target videos and (2) an efficient dynamic plane-sweep mechanism that disentangles camera motion from scene motion (see the sketch after this list).
  • The method aims to support fine-grained six-degrees-of-freedom camera control while maintaining consistency across both static and highly dynamic areas.
  • The authors train and evaluate on the UCSD dataset and introduce Kubric-4D-dyn, a new monocular dynamic dataset with longer, higher-resolution sequences and more complex scene dynamics than existing alternatives.
  • Reported results show improved reconstruction of fine-grained geometric details over four Gaussian Splatting-based scene-specific baselines and two diffusion-based approaches.
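
To make the recurrent loop more concrete, the sketch below shows one way such a design could look. It is a hypothetical illustration under assumptions, not the authors' architecture: a persistent scene state is updated one input frame at a time, so arbitrarily long (unbounded) videos fit a fixed memory budget, and target frames can be decoded at timestamps that need not coincide with input timestamps (asynchronous mapping). All class, function, and parameter names (`RecurrentSynthesizer`, `feat_dim`, etc.) are invented for illustration.

```python
# Minimal recurrent-loop sketch (hypothetical, not the paper's architecture).
# A persistent scene state is updated frame by frame, so arbitrarily long
# input videos can be processed, and target frames can be rendered at
# timestamps that need not coincide with input timestamps.
import torch
import torch.nn as nn


class RecurrentSynthesizer(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encoder = nn.Conv2d(3, feat_dim, kernel_size=3, padding=1)
        self.update = nn.GRUCell(feat_dim, feat_dim)   # recurrent state update
        self.decoder = nn.Conv2d(feat_dim, 3, kernel_size=3, padding=1)

    def forward(self, input_frames, target_times):
        """input_frames: list of (1, 3, H, W) tensors; target_times: floats."""
        _, _, H, W = input_frames[0].shape
        state = torch.zeros(H * W, self.update.hidden_size)
        outputs = []
        for t, frame in enumerate(input_frames):
            feat = self.encoder(frame)                       # (1, F, H, W)
            feat = feat.permute(0, 2, 3, 1).reshape(H * W, -1)
            state = self.update(feat, state)                 # fold frame into state
            # Decode any target frames whose timestamp falls before the next input
            # frame; target times need not align with input times.
            for t_tgt in [s for s in target_times if t <= s < t + 1]:
                img = self.decoder(
                    state.reshape(1, H, W, -1).permute(0, 3, 1, 2))
                outputs.append((t_tgt, img))
        return outputs
```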

Abstract

Synthesizing novel views from monocular videos of dynamic scenes remains a challenging problem. Scene-specific methods that optimize 4D representations with explicit motion priors often break down in highly dynamic regions where multi-view information is hard to exploit. Diffusion-based approaches that integrate camera control into large pre-trained models can produce visually plausible videos but frequently suffer from geometric inconsistencies across both static and dynamic areas. Both families of methods also require substantial computational resources. Building on the success of generalizable models for static novel view synthesis, we adapt the framework to dynamic inputs and propose a new model with two key components: (1) a recurrent loop that enables unbounded and asynchronous mapping between input and target videos and (2) an efficient use of plane sweeps over dynamic inputs to disentangle camera and scene motion and achieve fine-grained, six-degrees-of-freedom camera control. We train and evaluate our model on the UCSD dataset and on Kubric-4D-dyn, a new monocular dynamic dataset featuring longer, higher-resolution sequences with more complex scene dynamics than existing alternatives. Our model outperforms four Gaussian Splatting-based scene-specific approaches, as well as two diffusion-based approaches, in reconstructing fine-grained geometric details across both static and dynamic regions.
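
As a rough illustration of the plane-sweep component, the following sketch (a minimal, hypothetical example, not the paper's implementation) warps a source frame onto a set of fronto-parallel depth planes in the target camera's frustum using plane-induced homographies. In the resulting volume, camera-induced parallax is explained by the depth hypotheses, so residual misalignment across warped frames can be attributed to scene motion. The intrinsics `K`, relative pose `R_rel`/`t_rel`, and the helper name `plane_sweep_volume` are assumed for illustration.

```python
# Minimal plane-sweep sketch (hypothetical, not the paper's code).
# Warps a source frame onto D fronto-parallel depth planes defined in the
# target camera's frustum, so that camera-induced parallax is absorbed by
# the depth hypotheses.
import torch
import torch.nn.functional as F


def plane_sweep_volume(src_img, K, R_rel, t_rel, depths):
    """src_img: (1, C, H, W) source frame.
    K: (3, 3) shared intrinsics; R_rel, t_rel: relative pose target->source.
    depths: (D,) candidate plane depths in the target frame.
    Returns a (D, C, H, W) volume of the source warped onto each plane."""
    _, C, H, W = src_img.shape
    # Pixel grid of the target view in homogeneous coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=torch.float32),
        torch.arange(W, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1)
    K_inv = torch.inverse(K)
    n = torch.tensor([[0.0, 0.0, 1.0]])  # fronto-parallel plane normal

    warped = []
    for d in depths:
        # Plane-induced homography: H = K (R + t n^T / d) K^-1.
        H_mat = K @ (R_rel + (t_rel.view(3, 1) @ n) / d) @ K_inv
        src_pix = H_mat @ pix                          # (3, H*W)
        src_pix = src_pix[:2] / src_pix[2:].clamp(min=1e-6)
        # Normalize to [-1, 1] for grid_sample.
        gx = 2.0 * src_pix[0] / (W - 1) - 1.0
        gy = 2.0 * src_pix[1] / (H - 1) - 1.0
        grid = torch.stack([gx, gy], dim=-1).reshape(1, H, W, 2)
        warped.append(F.grid_sample(src_img, grid, align_corners=True,
                                    padding_mode="zeros"))
    return torch.cat(warped, dim=0)  # (D, C, H, W)
```

Because the homography accounts for the relative camera pose at each candidate depth, static content aligns at its true depth plane, which is one plausible way a plane sweep can separate camera motion from scene motion in a dynamic input.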