From Understanding to Erasing: Towards Complete and Stable Video Object Removal

arXiv cs.CV / 4/3/2026


Key Points

  • The paper addresses video object removal, emphasizing that modern diffusion-based methods struggle to eliminate object-induced artifacts like shadows, reflections, and illumination changes while keeping spatio-temporal coherence.
  • It proposes adding “understanding” to erasing via two complementary mechanisms. The first is an external distillation scheme that transfers object–effect relationships from vision foundation models to video diffusion models.
  • The second is an internal framewise context cross-attention mechanism that grounds each denoising step in informative, unmasked context surrounding the target region, enabling more consistent background reconstruction.
  • The authors report state-of-the-art results and release what they describe as the first real-world benchmark for video object removal, alongside code, data, and models on GitHub.
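To make the cross-attention idea concrete, here is a minimal NumPy sketch of what "masked-region queries attending to unmasked context" could look like for a single frame. This is an illustrative assumption, not the paper's implementation: the function name, token layout, and single-head formulation are all invented for clarity.

```python
import numpy as np

def framewise_context_cross_attention(frame_tokens, mask):
    """Hypothetical sketch: tokens inside the removal mask (queries)
    attend only to unmasked context tokens (keys/values) of the same
    frame, so masked regions are rebuilt from surrounding background.
    frame_tokens: (N, d) array of per-frame tokens.
    mask: (N,) boolean array, True for tokens in the masked region."""
    q = frame_tokens[mask]           # (Nq, d) masked-region queries
    kv = frame_tokens[~mask]         # (Nk, d) unmasked context keys/values
    d = frame_tokens.shape[-1]

    # Scaled dot-product attention over context tokens only.
    scores = q @ kv.T / np.sqrt(d)   # (Nq, Nk)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)

    # Masked tokens become convex combinations of context tokens;
    # unmasked tokens pass through unchanged.
    out = frame_tokens.copy()
    out[mask] = weights @ kv
    return out
```

In a diffusion model this would sit inside a denoising block, with queries and keys/values passed through learned projections per attention head; the sketch omits those to show only the masking structure.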

Abstract

Video object removal aims to eliminate target objects from videos while plausibly completing missing regions and preserving spatio-temporal consistency. Although diffusion models have recently advanced this task, it remains challenging to remove object-induced side effects (e.g., shadows, reflections, and illumination changes) without compromising overall coherence. This limitation stems from the insufficient physical and semantic understanding of the target object and its interactions with the scene. In this paper, we propose to introduce understanding into erasing from two complementary perspectives. Externally, we introduce a distillation scheme that transfers the relationships between objects and their induced effects from vision foundation models to video diffusion models. Internally, we propose a framewise context cross-attention mechanism that grounds each denoising block in informative, unmasked context surrounding the target region. External and internal guidance jointly enable our model to understand the target object, its induced effects, and the global background context, resulting in clear and coherent object removal. Extensive experiments demonstrate our state-of-the-art performance, and we establish the first real-world benchmark for video object removal to facilitate future research and community progress. Our code, data, and models are available at: https://github.com/WeChatCV/UnderEraser.