ONE-SHOT: Compositional Human-Environment Video Synthesis via Spatial-Decoupled Motion Injection and Hybrid Context Integration

arXiv cs.CV / 4/2/2026


Key Points

  • The paper introduces ONE-SHOT, a parameter-efficient framework for compositional human–environment video synthesis that targets fine-grained, independent editing of subjects and scenes.
  • It factorizes generation into disentangled signals, using a canonical-space motion injection mechanism with cross-attention to decouple human dynamics from environmental cues (a minimal sketch follows this list).
  • It proposes Dynamic-Grounded-RoPE, a positional embedding strategy that establishes correspondences across disparate spatial domains without relying on heuristic 3D alignment.
  • For long-horizon (minute-level) generation, it adds a Hybrid Context Integration mechanism to preserve consistency between the subject and the overall scene.
  • The authors report significant improvements over state-of-the-art methods, with better structural control while maintaining creative diversity.
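
To make the decoupled-injection idea concrete, below is a minimal PyTorch sketch of a cross-attention block in which video latent tokens attend to environment tokens and to canonical-space human-motion tokens through separate key/value projections, so the two conditioning signals can be edited independently. The module name, dimensions, and zero-initialized gating are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch: decoupled cross-attention for injecting human-motion tokens
# alongside environment tokens. Names and shapes are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledMotionCrossAttention(nn.Module):
    def __init__(self, dim, motion_dim, env_dim, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        self.q = nn.Linear(dim, dim)
        # projections for the environment/scene stream (would typically come from the
        # pretrained video foundation model and stay frozen)
        self.kv_env = nn.Linear(env_dim, 2 * dim)
        # newly added, trainable projections for the motion stream -> parameter-efficient
        self.kv_motion = nn.Linear(motion_dim, 2 * dim)
        self.out = nn.Linear(dim, dim)
        # zero-init gate: motion injection starts as a no-op and is learned during tuning
        self.motion_scale = nn.Parameter(torch.zeros(1))

    def attend(self, q, kv):
        k, v = kv.chunk(2, dim=-1)
        B, Nq, D = q.shape
        h = self.num_heads
        q = q.view(B, Nq, h, D // h).transpose(1, 2)
        k = k.view(B, -1, h, D // h).transpose(1, 2)
        v = v.view(B, -1, h, D // h).transpose(1, 2)
        o = F.scaled_dot_product_attention(q, k, v)
        return o.transpose(1, 2).reshape(B, Nq, D)

    def forward(self, latents, env_tokens, motion_tokens):
        q = self.q(latents)
        env_out = self.attend(q, self.kv_env(env_tokens))
        motion_out = self.attend(q, self.kv_motion(motion_tokens))
        return latents + self.out(env_out + self.motion_scale * motion_out)

# Usage with made-up sizes:
block = DecoupledMotionCrossAttention(dim=320, motion_dim=256, env_dim=768)
latents = torch.randn(2, 1024, 320)   # flattened video latent tokens
env = torch.randn(2, 77, 768)         # environment / scene condition tokens
motion = torch.randn(2, 128, 256)     # canonical-space human motion tokens
out = block(latents, env, motion)     # (2, 1024, 320)
```

Keeping the two key/value branches separate is what allows the environment conditioning to remain untouched while only the motion branch (and its gate) is trained, which is one common route to parameter efficiency.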

Abstract

Recent advances in Video Foundation Models (VFMs) have revolutionized human-centric video synthesis, yet fine-grained, independent editing of subjects and scenes remains a critical challenge. Attempts to incorporate richer environment control through rigid 3D geometric compositions often face a stark trade-off between precise control and generative flexibility, and the heavy 3D pre-processing further limits practical scalability. In this paper, we propose ONE-SHOT, a parameter-efficient framework for compositional human-environment video generation. Our key insight is to factorize the generative process into disentangled signals. Specifically, we introduce a canonical-space injection mechanism that decouples human dynamics from environmental cues via cross-attention. We also propose Dynamic-Grounded-RoPE, a novel positional embedding strategy that establishes spatial correspondences between disparate spatial domains without any heuristic 3D alignment. To support long-horizon synthesis, we introduce a Hybrid Context Integration mechanism that maintains subject and scene consistency across minute-level generations. Experiments demonstrate that our method significantly outperforms state-of-the-art approaches, offering superior structural control and creative diversity for video synthesis. Our project page is available at: https://martayang.github.io/ONE-SHOT/.
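
On the positional-embedding side, the sketch below shows, under our own assumptions, how rotary position embeddings can be evaluated at arbitrary per-token 2D coordinates rather than at fixed grid indices, which is the general mechanism needed to place tokens from disparate spatial domains in one shared coordinate frame before attention. The function name rope_2d and the channel split between axes are hypothetical; this is not the paper's Dynamic-Grounded-RoPE.

```python
# Illustrative sketch: rotary position embedding driven by per-token 2D coordinates,
# so tokens from different sources can share one coordinate frame. Assumed names/shapes.
import torch

def rope_2d(x, coords, theta=10000.0):
    """Rotate features `x` (B, N, D) by angles derived from per-token 2D
    coordinates `coords` (B, N, 2). D must be divisible by 4."""
    B, N, D = x.shape
    d_axis = D // 2  # channels devoted to each spatial axis
    inv_freq = 1.0 / (theta ** (torch.arange(0, d_axis, 2, dtype=x.dtype) / d_axis))
    ang_x = coords[..., 0:1] * inv_freq            # (B, N, d_axis/2)
    ang_y = coords[..., 1:2] * inv_freq            # (B, N, d_axis/2)
    ang = torch.cat([ang_x, ang_y], dim=-1)        # (B, N, D/2)
    cos, sin = ang.cos(), ang.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]            # interleaved channel pairs
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Usage: queries from the human branch and keys from the scene branch are rotated with
# coordinates expressed in the same (e.g. normalized, grounded) frame, so their rotary
# phases are directly comparable inside attention.
human_q = torch.randn(1, 64, 128)
human_xy = torch.rand(1, 64, 2) * 32    # hypothetical canonical-space positions
scene_k = torch.randn(1, 256, 128)
scene_xy = torch.rand(1, 256, 2) * 32   # positions on the target scene grid
q_rot, k_rot = rope_2d(human_q, human_xy), rope_2d(scene_k, scene_xy)
```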