Control-DINO: Feature Space Conditioning for Controllable Image-to-Video Diffusion

arXiv cs.CV / 4/3/2026


Key Points

  • Control-DINO proposes using self-supervised feature embeddings (e.g., DINO) as a more general conditioning signal for pretrained image-to-video diffusion models, rather than relying only on perceptual/geometric/semantic signals.
  • The approach introduces a lightweight architecture and training strategy aimed at decoupling appearance information (style/lighting) from other preserved scene features, improving controllability for tasks like stylization and relighting (a minimal conditioning sketch follows this list).
  • The paper argues that although DINO features are highly effective for reconstruction, their entangled nature can restrict generative ability, and it addresses this limitation via targeted conditioning design.
  • Experiments indicate that lower spatial resolution can be offset by higher feature dimensionality, which helps maintain or improve controllability in generative rendering from explicit spatial inputs.
  • Results are positioned as enabling more robust video domain transfer and video-from-3D generation, expanding the practical controllable use of feature-conditioned video diffusion.

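To make the conditioning idea concrete, here is a minimal PyTorch sketch, not the paper's released code: it assumes frozen DINO patch features extracted for a reference frame and a small zero-initialized adapter that projects them into the video latent space and adds them as a spatial residual. The `FeatureAdapter` name, the 768-dimensional features, and the latent sizes are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): injecting frozen DINO patch features
# into an image-to-video diffusion backbone through a small learned adapter.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureAdapter(nn.Module):
    """Projects self-supervised patch features into the denoiser's latent space
    and adds them as a spatial conditioning residual (ControlNet-style)."""

    def __init__(self, feat_dim: int = 768, latent_dim: int = 320):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(feat_dim, latent_dim),
            nn.SiLU(),
            nn.Linear(latent_dim, latent_dim),
        )
        # Zero-init the last layer so training starts from the unmodified backbone.
        nn.init.zeros_(self.proj[-1].weight)
        nn.init.zeros_(self.proj[-1].bias)

    def forward(self, latents: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # latents: (B, T, latent_dim, H, W) video latents from the frozen backbone
        # feats:   (B, h, w, feat_dim) DINO patch grid for the conditioning frame
        b, t, c, h, w = latents.shape
        cond = self.proj(feats).permute(0, 3, 1, 2)        # (B, latent_dim, h, w)
        cond = F.interpolate(cond, size=(h, w), mode="bilinear", align_corners=False)
        return latents + cond.unsqueeze(1)                  # broadcast over time


if __name__ == "__main__":
    adapter = FeatureAdapter()
    latents = torch.randn(1, 8, 320, 32, 32)   # 8 latent frames
    feats = torch.randn(1, 16, 16, 768)        # 16x16 DINO patch tokens
    print(adapter(latents, feats).shape)       # torch.Size([1, 8, 320, 32, 32])
```

The zero-initialized final layer is a common choice for adapters added to a pretrained backbone, since it leaves the base model's behavior unchanged at the start of training; whether the paper uses exactly this mechanism is not stated here.
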
Abstract

Video models have recently been applied successfully to problems in content generation, novel view synthesis, and, more broadly, world simulation. Many applications in generation and transfer rely on conditioning these models, typically through perceptual, geometric, or simple semantic signals, fundamentally using them as generative renderers. At the same time, high-dimensional features obtained from large-scale self-supervised learning on images or point clouds are increasingly used as a general-purpose interface for vision models. The connection between the two has been explored for subject-specific editing and for aligning and training video diffusion models, but not as a more general conditioning signal for pretrained video diffusion models. Features obtained through self-supervised learning, such as DINO, contain a large amount of entangled information about the style, lighting, and semantics of a scene. This makes them highly effective for reconstruction tasks but limits their generative capabilities. In this paper, we show how these features can be used for tasks such as video domain transfer and video-from-3D generation. We introduce a lightweight architecture and training strategy that decouples appearance from the other features we wish to preserve, enabling robust control over appearance changes such as stylization and relighting. Furthermore, we show that low spatial resolution can be compensated for by higher feature dimensionality, improving controllability in generative rendering from explicit spatial representations.
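
The last claim, trading spatial resolution for feature dimensionality, can be illustrated with a small hedged sketch: assuming the conditioning features are taken from several DINO layers, each layer's patch grid is pooled to a coarser resolution while the layers are concatenated along the channel axis, so the conditioning map becomes spatially coarser but richer per location. The function name, layer count, and shapes below are illustrative assumptions, not details from the paper.

```python
# Illustrative trade-off: coarser spatial grid, wider feature vector per location.
import torch
import torch.nn.functional as F


def pool_and_stack(layer_feats, out_hw=(8, 8)):
    """layer_feats: list of (B, h, w, C) patch grids from different DINO layers.
    Returns a (B, out_h, out_w, C * n_layers) conditioning map."""
    pooled = []
    for f in layer_feats:
        f = f.permute(0, 3, 1, 2)                 # (B, C, h, w)
        f = F.adaptive_avg_pool2d(f, out_hw)      # (B, C, out_h, out_w)
        pooled.append(f.permute(0, 2, 3, 1))      # back to channels-last
    return torch.cat(pooled, dim=-1)


feats = [torch.randn(1, 16, 16, 768) for _ in range(4)]   # 4 layers, 16x16 grids
print(pool_and_stack(feats).shape)                         # torch.Size([1, 8, 8, 3072])
```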