Generating Humanless Environment Walkthroughs from Egocentric Walking Tour Videos

arXiv cs.CV / 4/1/2026


Key Points

  • The paper tackles a key limitation of egocentric “walking tour” videos for environment modeling: humans, and the shadows they cast, often appear in frames and interfere with learning usable environment representations.
  • It proposes a generative inpainting approach that realistically removes people and their associated shadow effects from walking-tour video clips.
  • Key to the method is a semi-synthetic training dataset of environment-only background clips paired with composite clips that overlay real walking humans, with simulated shadows, onto those backgrounds; both foreground and background components are sourced from real egocentric footage to preserve global visual diversity (see the compositing sketch after this list).
  • The authors fine-tune Casper, a state-of-the-art video diffusion model for object and effects inpainting, on this dataset, and show qualitative and quantitative improvements over the base model in scenes with dense human presence and complex backgrounds.
  • They further demonstrate downstream usefulness by using the generated, humanless video clips to construct successful 3D/4D models of urban locations.
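
To make the compositing step concrete, here is a minimal sketch of how a segmented walking person and a simulated soft shadow might be overlaid on an environment-only frame. The geometry (a fixed ground-projection offset plus Gaussian feathering) and all function names are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
import cv2  # OpenCV: pip install opencv-python

def simulate_shadow(mask, offset=(40, 25), blur=31, strength=0.45):
    """Project a person mask onto the ground as a soft, offset shadow.

    mask: HxW array in [0, 1] (1 = person pixels).
    Returns an HxW attenuation map in [0, 1] (1 = no darkening).
    """
    mask = mask.astype(np.float32)
    h, w = mask.shape
    dx, dy = offset
    # Shift the silhouette to fake a ground projection (hypothetical geometry).
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    shadow = cv2.warpAffine(mask, M, (w, h))
    # Feather the edges so the shadow reads as soft, like real cast shadows.
    shadow = cv2.GaussianBlur(shadow, (blur, blur), 0)
    return 1.0 - strength * shadow

def composite_frame(background, person, mask):
    """Overlay a segmented person (plus simulated shadow) on a background.

    background, person: HxWx3 uint8 frames; mask: HxW in [0, 1].
    """
    atten = simulate_shadow(mask)[..., None]          # HxWx1 darkening map
    alpha = mask.astype(np.float32)[..., None]        # HxWx1 blend weights
    darkened = background.astype(np.float32) * atten  # shadowed background
    out = alpha * person.astype(np.float32) + (1.0 - alpha) * darkened
    return out.astype(np.uint8)
```

Applying `composite_frame` frame by frame to a background clip yields the composite half of a training pair, with the untouched background clip serving as the inpainting target.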

Abstract

Egocentric "walking tour" videos provide a rich source of image data for developing diverse visual models of environments around the world. However, the significant presence of humans in the frames of these videos, due to crowds and eye-level camera perspectives, limits their usefulness in environment modeling applications. We address this challenge by developing a generative algorithm that realistically removes (i.e., inpaints) humans and their associated shadow effects from walking tour videos. Key to our approach is the construction of a rich semi-synthetic dataset of video clip pairs used to train this generative model. Each pair consists of an environment-only background clip and a composite clip of walking humans, with simulated shadows, overlaid on that background. We randomly source both foreground and background components from real egocentric walking tour videos around the world to maintain visual diversity. We then use this dataset to fine-tune the state-of-the-art Casper video diffusion model for object and effects inpainting, and demonstrate that the resulting model performs far better than Casper, both qualitatively and quantitatively, at removing humans from walking tour clips with significant human presence and complex backgrounds. Finally, we show that the resulting generated clips can be used to build successful 3D/4D models of urban locations.
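
To make the clip-pair training setup concrete, here is a minimal sketch of a dataset that yields (composite, background) pairs for inpainting-style fine-tuning; the on-disk layout, tensor format, and the PairedClipDataset name are assumptions for illustration, not the paper's actual data pipeline.

```python
from pathlib import Path

import numpy as np
import torch
from torch.utils.data import Dataset

class PairedClipDataset(Dataset):
    """Yields (composite, background) clip pairs: the diffusion model is
    fine-tuned to inpaint the composite back to the human-free background."""

    def __init__(self, root: str):
        # Assumed layout: root/composite/*.npy and root/background/*.npy,
        # with matching filenames for each pair.
        self.composites = sorted(Path(root, "composite").glob("*.npy"))
        self.backgrounds = sorted(Path(root, "background").glob("*.npy"))
        assert len(self.composites) == len(self.backgrounds)

    def __len__(self):
        return len(self.composites)

    def __getitem__(self, i):
        # Clips stored as T x H x W x 3 uint8; rescale to [-1, 1] floats,
        # the range diffusion models commonly train on.
        comp = np.load(self.composites[i]).astype(np.float32) / 127.5 - 1.0
        bg = np.load(self.backgrounds[i]).astype(np.float32) / 127.5 - 1.0
        return torch.from_numpy(comp), torch.from_numpy(bg)
```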