Can Video Diffusion Models Predict Past Frames? Bidirectional Cycle Consistency for Reversible Interpolation

arXiv cs.CV / 4/3/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper tackles video frame interpolation by improving temporal self-consistency, addressing failure modes of unidirectional generative models, such as motion drift and boundary misalignment in long sequences.
  • It proposes a bidirectional, cycle-consistent training framework that enforces reversibility: forward synthesis and backward reconstruction are jointly optimized within one architecture.
  • Learnable directional tokens condition a shared backbone on temporal orientation, letting the model distinguish forward vs. backward trajectories while using unified parameters.
  • A curriculum learning strategy trains the model from short to long sequences to stabilize learning across different durations.
  • The authors report state-of-the-art results on 37-frame and 73-frame interpolation tasks with better imaging quality, motion smoothness, and dynamic control, and note that inference still uses only a single forward pass (no extra runtime cost).
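The core training idea in the bullets above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: a small linear map stands in for the shared diffusion backbone, and scalar values stand in for the learnable directional tokens. The point it shows is the cycle-consistency constraint itself: generating forward and then backward with the same parameters, conditioned only on a direction signal, should reconstruct the input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy stand-ins: the paper uses a video diffusion backbone
# and learned token embeddings; here a linear map and scalars suffice.
D = 8                                        # frame feature dimension
W = rng.normal(scale=0.1, size=(D + 1, D))   # shared weights (+1 for direction input)
FWD, BWD = 1.0, -1.0                         # "directional tokens" (scalars here)

def generate(frame, direction):
    """One shared backbone, conditioned on temporal orientation."""
    x = np.concatenate([frame, [direction]])
    return x @ W

def cycle_consistency_loss(start_frame):
    # Forward synthesis ...
    mid = generate(start_frame, FWD)
    # ... then backward reconstruction with the *same* parameters.
    recon = generate(mid, BWD)
    # The round trip should return the input: generated motion paths
    # are penalized unless they are logically reversible.
    return float(np.mean((recon - start_frame) ** 2))

frame = rng.normal(size=D)
loss = cycle_consistency_loss(frame)
print(f"cycle loss: {loss:.4f}")
```

Because this loss is applied only during training, inference still runs a single forward `generate`-style pass, which is why the method adds no runtime cost.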

Abstract

Video frame interpolation aims to synthesize realistic intermediate frames between given endpoints while adhering to specific motion semantics. While recent generative models have improved visual fidelity, they predominantly operate in a unidirectional manner, lacking mechanisms to self-verify temporal consistency. This often leads to motion drift, directional ambiguity, and boundary misalignment, especially in long-range sequences. Inspired by the principle of temporal cycle-consistency in self-supervised learning, we propose a novel bidirectional framework that enforces symmetry between forward and backward generation trajectories. Our approach introduces learnable directional tokens to explicitly condition a shared backbone on temporal orientation, enabling the model to jointly optimize forward synthesis and backward reconstruction within a single unified architecture. This cycle-consistent supervision acts as a powerful regularizer, ensuring that generated motion paths are logically reversible. Furthermore, we employ a curriculum learning strategy that progressively trains the model from short to long sequences, stabilizing dynamics across varying durations. Crucially, our cyclic constraints are applied only during training; inference requires a single forward pass, maintaining the high efficiency of the base model. Extensive experiments show that our method achieves state-of-the-art performance in imaging quality, motion smoothness, and dynamic control on both 37-frame and 73-frame tasks, outperforming strong baselines while incurring no additional computational overhead.
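The curriculum strategy mentioned in the abstract, training progressively from short to long sequences, could be scheduled along these lines. The stage lengths below are assumptions for illustration (only the 37- and 73-frame lengths come from the paper's evaluation); the helper name `curriculum_lengths` is hypothetical.

```python
def curriculum_lengths(total_steps, stages=(9, 17, 37, 73)):
    """Short-to-long curriculum: each stage trains on longer clips.

    Intermediate lengths (9, 17) are illustrative; the final stages
    reuse the 37- and 73-frame lengths evaluated in the paper.
    """
    per_stage = total_steps // len(stages)
    schedule = []
    for length in stages:
        schedule.extend([length] * per_stage)
    return schedule

sched = curriculum_lengths(8)
print(sched)  # [9, 9, 17, 17, 37, 37, 73, 73]
```

Starting on short clips lets the model learn stable local dynamics before the cycle constraint is stretched over long-range trajectories, where drift is hardest to control.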