Node-RF: Learning Generalized Continuous Space-Time Scene Dynamics with Neural ODE-based NeRFs

arXiv cs.CV / 3/13/2026

Key Points

  • Node-RF combines Neural ODEs with dynamic NeRFs to provide a continuous-time, spatiotemporal representation that can extrapolate beyond observed trajectories at constant memory cost.
  • It learns an implicit scene state from visual input that evolves over time via an ODE solver and uses a NeRF-based renderer to synthesize novel views for long-range extrapolation.
  • Training on multiple motion sequences with shared dynamics enables generalization to unseen conditions without requiring explicit models for critical future points.
  • The approach overcomes the limitations of previous methods confined to observed boundaries, offering a memory-efficient, generalizable framework for dynamic scene understanding.
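The core idea in the points above, a latent scene state evolved forward in continuous time by an ODE solver at constant memory cost, can be sketched as follows. This is an illustrative toy, not the paper's implementation: the 16-dimensional state, the small `dynamics` network, and the RK4 step sizes are all hypothetical stand-ins for trained components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned dynamics f(z, t) -> dz/dt: a tiny random MLP
# standing in for trained parameters (assumption, not the paper's model).
W1 = rng.normal(0, 0.1, (16, 17))  # input: 16-dim state + scalar time
W2 = rng.normal(0, 0.1, (16, 16))

def dynamics(z, t):
    h = np.tanh(W1 @ np.concatenate([z, [t]]))
    return W2 @ h

def rk4_step(z, t, dt):
    # One classical Runge-Kutta 4 step of the latent-state ODE.
    k1 = dynamics(z, t)
    k2 = dynamics(z + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = dynamics(z + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = dynamics(z + dt * k3, t + dt)
    return z + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def evolve(z0, t0, t1, n_steps=100):
    """Propagate the latent scene state from t0 to t1.

    Memory cost is constant in the horizon: only the current state
    is stored, no per-frame representation accumulates."""
    z, dt = z0, (t1 - t0) / n_steps
    for i in range(n_steps):
        z = rk4_step(z, t0 + i * dt, dt)
    return z

z0 = rng.normal(size=16)         # latent state inferred from visual input
z_obs = evolve(z0, 0.0, 1.0)     # query within the observed window
z_ext = evolve(z0, 0.0, 3.0)     # extrapolation beyond training times
print(z_ext.shape)
```

Because the solver only carries the current state, querying t = 3.0 costs the same memory as t = 1.0, which is what allows extrapolation beyond observed trajectories without growing storage.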

Abstract

Predicting scene dynamics from visual observations is challenging. Existing methods capture dynamics only within observed boundaries, failing to extrapolate far beyond the training sequence. Node-RF (Neural ODE-based NeRF) overcomes this limitation by integrating Neural Ordinary Differential Equations (NODEs) with dynamic Neural Radiance Fields (NeRFs), enabling a continuous-time, spatiotemporal representation that generalizes beyond observed trajectories at constant memory cost. From visual input, Node-RF learns an implicit scene state that evolves over time via an ODE solver, which propagates feature embeddings by numerically integrating a learned differential equation. A NeRF-based renderer interprets the propagated embeddings to synthesize arbitrary views for long-range extrapolation. Training on multiple motion sequences with shared dynamics allows generalization to unseen conditions. Our experiments demonstrate that Node-RF can characterize abstract system behavior without an explicit model identifying critical points for future predictions.
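The abstract's second stage, a NeRF-style renderer that turns the ODE-propagated embedding into pixels, can be sketched as standard volume rendering conditioned on that embedding. Again a minimal illustration under stated assumptions: the `field` network, its random weights, and the 16-dimensional embedding are hypothetical placeholders for trained components.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical conditioned radiance field: maps a 3D point plus the
# latent embedding to (density, RGB). Random weights stand in for a
# trained NeRF MLP (assumption, not the paper's architecture).
Wf = rng.normal(0, 0.1, (4, 3 + 16))

def field(x, z):
    out = Wf @ np.concatenate([x, z])
    sigma = np.log1p(np.exp(out[0]))       # softplus -> nonnegative density
    rgb = 1.0 / (1.0 + np.exp(-out[1:]))   # sigmoid -> colour in [0, 1]
    return sigma, rgb

def render_ray(origin, direction, z, near=0.0, far=4.0, n_samples=64):
    """Classic NeRF alpha compositing along one ray, conditioned on the
    embedding z produced by the ODE solver for the query time."""
    ts = np.linspace(near, far, n_samples)
    dt = ts[1] - ts[0]
    color, transmittance = np.zeros(3), 1.0
    for t in ts:
        sigma, rgb = field(origin + t * direction, z)
        alpha = 1.0 - np.exp(-sigma * dt)       # opacity of this segment
        color += transmittance * alpha * rgb    # accumulate weighted colour
        transmittance *= 1.0 - alpha            # light surviving so far
    return color

z = rng.normal(size=16)  # embedding the solver would emit at the query time
pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), z)
print(pixel)
```

Because only the conditioning embedding depends on time, the same renderer serves any query instant, including extrapolated ones, which is how arbitrary novel views at future times fall out of the design.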