DCARL: A Divide-and-Conquer Framework for Autoregressive Long-Trajectory Video Generation

arXiv cs.CV / 3/27/2026


Key Points

  • The paper introduces DCARL, a divide-and-conquer autoregressive framework aimed at improving long-trajectory video generation where existing models struggle with scalability, visual drift, and controllability.
  • DCARL uses a Keyframe Generator to produce globally consistent structural anchors (trained without temporal compression) and then an Interpolation Generator to generate dense frames autoregressively with overlapping segments.
  • The interpolation stage leverages keyframes for global context while using only a single clean preceding frame to maintain local temporal coherence.
  • Trained on a large-scale internet long-trajectory video dataset, DCARL reports better results than prior autoregressive and divide-and-conquer baselines in both visual quality (lower FID/FVD) and camera adherence (lower ATE/ARE).
  • The method is demonstrated on long-trajectory video synthesis up to 32 seconds, with the authors reporting stable, high-fidelity generation throughout.
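The two-stage pipeline described above can be sketched as a simple scheduling loop. This is a toy illustration, not the paper's implementation: the function names (`generate_keyframes`, `interpolate_segment`, `dcarl_rollout`), the stride, and the overlap width are all assumptions, and frames are modeled as plain integers rather than image tensors.

```python
# Hypothetical sketch of DCARL-style two-stage scheduling. Names and
# parameters are illustrative assumptions, not the paper's code.
# Frames are modeled as integer indices; real generators would emit
# image tensors from the Keyframe / Interpolation diffusion models.

def generate_keyframes(num_keyframes: int, stride: int) -> list[int]:
    """Stage 1: globally consistent structural anchors at a fixed stride."""
    return [i * stride for i in range(num_keyframes)]

def interpolate_segment(start_kf: int, end_kf: int,
                        prev_clean_frame: int) -> list[int]:
    """Stage 2: dense frames between two keyframe anchors, conditioned on
    the keyframes (global context) and a single clean preceding frame
    (local coherence)."""
    _ = prev_clean_frame  # conditioning signal; unused in this toy sketch
    return list(range(start_kf, end_kf + 1))

def dcarl_rollout(num_keyframes: int, stride: int, overlap: int) -> list[int]:
    """Autoregressive rollout over overlapping segments."""
    keyframes = generate_keyframes(num_keyframes, stride)
    video: list[int] = []
    prev_clean = keyframes[0]
    for start_kf, end_kf in zip(keyframes, keyframes[1:]):
        segment = interpolate_segment(start_kf, end_kf, prev_clean)
        # Overlapping segments: drop frames already emitted by the
        # previous segment so the rollout stays seamless.
        video.extend(segment if not video else segment[overlap:])
        prev_clean = segment[-1]  # one clean frame carried forward
    return video

frames = dcarl_rollout(num_keyframes=5, stride=8, overlap=1)
print(len(frames))  # prints 33: dense coverage of keyframes 0..32
```

The key design point the sketch captures is that error accumulation is bounded per segment: each interpolation run is re-anchored on keyframes generated once, globally, rather than chained only through previously generated dense frames.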

Abstract

Long-trajectory video generation is a crucial yet challenging task for world modeling, primarily due to the limited scalability of existing video diffusion models (VDMs). Autoregressive models, while offering infinite rollout, suffer from visual drift and poor controllability. To address these issues, we propose DCARL, a novel divide-and-conquer autoregressive framework that effectively combines the structural stability of the divide-and-conquer scheme with the high-fidelity generation of VDMs. Our approach first employs a dedicated Keyframe Generator trained without temporal compression to establish long-range, globally consistent structural anchors. Subsequently, an Interpolation Generator synthesizes the dense frames in an autoregressive manner with overlapping segments, utilizing the keyframes for global context and a single clean preceding frame for local coherence. Trained on a large-scale internet long-trajectory video dataset, our method achieves superior performance in both visual quality (lower FID and FVD) and camera adherence (lower ATE and ARE) compared to state-of-the-art autoregressive and divide-and-conquer baselines, demonstrating stable and high-fidelity generation for long-trajectory videos up to 32 seconds in length.