Reshoot-Anything: A Self-Supervised Model for In-the-Wild Video Reshooting

arXiv cs.CV · April 24, 2026


Key Points

  • Reshoot-Anything is a new self-supervised model for reshooting dynamic “in-the-wild” videos, addressing the scarcity of paired multi-view data for non-rigid scenes.
  • It scales training by generating pseudo multi-view triplets (source video, synthetic geometric anchor, target video) from internet-scale monocular footage using smooth random-walk crop trajectories.
  • The method creates the anchor by forward-warping the first source frame with a dense tracking field, simulating the distorted point-cloud inputs expected during inference.
  • Because the source and target crops are sampled independently, they exhibit spatial misalignment and artificial occlusions; the model therefore learns implicit 4D spatiotemporal structure, re-projecting missing textures across time and viewpoints.
  • With a minimally adapted diffusion transformer and a 4D point-cloud anchor, the approach reports state-of-the-art results in temporal consistency, robust camera control, and high-fidelity novel view synthesis on complex dynamic scenes.
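The triplet-generation idea above can be sketched in code. The snippet below samples a smoothed random-walk trajectory of crop windows over a monocular clip; running it twice with different seeds yields independent source and target "views" of the same footage. The parameters (`step_sigma`, `smooth`) and the moving-average smoothing are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def random_walk_crops(num_frames, frame_hw, crop_hw, step_sigma=4.0, smooth=9, seed=0):
    """Sample a smooth random-walk trajectory of crop windows over a video.

    Hypothetical sketch: the paper's exact trajectory parameters are not
    given, so step_sigma / smooth here are illustrative choices.
    Returns an integer array of (top, left) crop corners, one per frame.
    """
    rng = np.random.default_rng(seed)
    H, W = frame_hw
    ch, cw = crop_hw
    # Raw random-walk steps, then a moving-average filter for smoothness.
    steps = rng.normal(0.0, step_sigma, size=(num_frames, 2))
    path = np.cumsum(steps, axis=0)
    kernel = np.ones(smooth) / smooth
    path[:, 0] = np.convolve(path[:, 0], kernel, mode="same")
    path[:, 1] = np.convolve(path[:, 1], kernel, mode="same")
    # Start near the frame centre and clamp so every crop stays in-bounds.
    path += np.array([(H - ch) / 2, (W - cw) / 2])
    path[:, 0] = np.clip(path[:, 0], 0, H - ch)
    path[:, 1] = np.clip(path[:, 1], 0, W - cw)
    return path.astype(int)

# Two independent trajectories over the same clip give a pseudo
# source/target view pair with natural misalignment and occlusion.
src = random_walk_crops(48, (720, 1280), (360, 640), seed=1)
tgt = random_walk_crops(48, (720, 1280), (360, 640), seed=2)
```

Because the two trajectories drift independently, parts of the scene visible in one crop fall outside the other, which is what forces the model to route textures across time rather than copy the aligned pixel.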

Abstract

Precise camera control for reshooting dynamic videos is bottlenecked by the severe scarcity of paired multi-view data for non-rigid scenes. We overcome this limitation with a highly scalable self-supervised framework capable of leveraging internet-scale monocular videos. Our core contribution is the generation of pseudo multi-view training triplets, consisting of a source video, a geometric anchor, and a target video. We achieve this by extracting distinct smooth random-walk crop trajectories from a single input video to serve as the source and target views. The anchor is synthetically generated by forward-warping the first frame of the source with a dense tracking field, which effectively simulates the distorted point-cloud inputs expected at inference. Because our independent cropping strategy introduces spatial misalignment and artificial occlusions, the model cannot simply copy information from the current source frame. Instead, it is forced to implicitly learn 4D spatiotemporal structures by actively routing and re-projecting missing high-fidelity textures across distinct times and viewpoints from the source video to reconstruct the target. At inference, our minimally adapted diffusion transformer utilizes a 4D point-cloud-derived anchor to achieve state-of-the-art temporal consistency, robust camera control, and high-fidelity novel view synthesis on complex dynamic scenes.
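The anchor construction described in the abstract can be approximated with a minimal forward-warping sketch. Here a dense tracking field (per-pixel target positions, e.g. from an off-the-shelf point tracker) splats the first source frame into a later frame's coordinates; unfilled pixels remain zero, mimicking the holes and distortions of a reprojected point cloud. This nearest-pixel splat is a deliberate simplification of whatever warping the paper actually uses (which would likely handle depth ordering and sub-pixel splatting).

```python
import numpy as np

def forward_warp(frame, tracks):
    """Forward-warp an image by a dense tracking field (nearest-pixel splat).

    frame:  (H, W, C) float image, e.g. the first source frame.
    tracks: (H, W, 2) per-pixel target positions (y, x) in a later frame.
    Pixels that nothing maps to stay zero, producing the hole-filled,
    point-cloud-like anchor the model must learn to inpaint.
    Simplified sketch; the real pipeline likely resolves occlusion order.
    """
    H, W, _ = frame.shape
    out = np.zeros_like(frame)
    ys = np.clip(np.round(tracks[..., 0]).astype(int), 0, H - 1)
    xs = np.clip(np.round(tracks[..., 1]).astype(int), 0, W - 1)
    out[ys, xs] = frame  # scatter source pixels to tracked positions
    return out

# Identity tracking field: the warp reproduces the frame exactly.
H, W = 4, 6
frame = np.random.default_rng(0).random((H, W, 3))
ident = np.stack(
    np.meshgrid(np.arange(H), np.arange(W), indexing="ij"), axis=-1
).astype(float)
anchor = forward_warp(frame, ident)
```

Shifting the tracking field (simulating scene or camera motion) leaves zero-valued holes where no source pixel lands, which is exactly the degraded-anchor signal the diffusion transformer is trained to repair.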