DepthPilot: From Controllability to Interpretability in Colonoscopy Video Generation

arXiv cs.AI / 4/30/2026


Key Points

  • The paper introduces DepthPilot, an interpretable framework for controllable colonoscopy video generation that aims to align generated output with physical priors and faithful clinical appearance.
  • DepthPilot improves geometric fidelity by using a prior distribution alignment strategy that injects depth constraints into a diffusion model through parameter-efficient fine-tuning.
  • To better model complex spatio-temporal dynamics under geometric constraints, it adds an adaptive spline denoising module that replaces fixed linear weighting with learnable spline functions.
  • Experiments across three public datasets and in-house clinical data show strong results, including FID scores below 15 on all benchmarks and first-place clinician ratings, suggesting improved clinical trustworthiness.
  • The generated videos are positioned as a basis for reliable 3D reconstruction, supporting surgical navigation and blind-region identification, and potentially contributing to a colorectal world model.
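
The paper does not publish the spline module's exact form, but the core idea in the third point, replacing a fixed linear blending weight with a learnable function of the diffusion timestep, can be sketched roughly as follows. All names here (`LearnableSpline`, `blend`) are illustrative, and the piecewise-linear spline is a simplification of whatever spline family the authors actually use:

```python
import numpy as np

class LearnableSpline:
    """Piecewise-linear spline over [0, 1] with learnable knot values.

    A minimal stand-in for adaptive spline denoising: instead of a fixed
    scalar weight, the blending coefficient is a learnable function of
    the normalized diffusion timestep t.
    """
    def __init__(self, n_knots=8, seed=0):
        rng = np.random.default_rng(seed)
        self.knots = np.linspace(0.0, 1.0, n_knots)   # fixed knot positions
        self.values = rng.normal(0.0, 0.1, n_knots)   # learnable knot values

    def __call__(self, t):
        # Linear interpolation between knot values; in a real training
        # loop, self.values would be updated by gradient descent.
        return np.interp(t, self.knots, self.values)

def blend(pred_noise, depth_guided, t, spline):
    """Blend two denoising estimates with a timestep-dependent spline weight."""
    w = 1.0 / (1.0 + np.exp(-spline(t)))   # squash spline output to (0, 1)
    return w * pred_noise + (1.0 - w) * depth_guided
```

Because the weight passes through a sigmoid, the result stays a convex combination of the two estimates, while the spline lets that mixture vary nonlinearly across timesteps rather than being fixed up front.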

Abstract

Controllable medical video generation has achieved remarkable progress, but it still lacks interpretability, which requires the alignment of generated content with physical priors and faithful clinical manifestations. To push the boundaries from mere controllability to interpretability, we propose DepthPilot, the first interpretable framework for colonoscopy video generation. This work takes a step toward trustworthy generation through two synergistic paradigms. To achieve explicit geometric grounding, DepthPilot devises a prior distribution alignment strategy, injecting depth constraints into the diffusion backbone via parameter-efficient fine-tuning to ensure anatomical fidelity. To enhance intrinsic nonlinear modeling under these geometric constraints, DepthPilot employs an adaptive spline denoising module, replacing fixed linear weights with learnable spline functions to capture complex spatio-temporal dynamics. Extensive evaluations across three public datasets and in-house clinical data confirm DepthPilot's robust ability to produce physically consistent videos. It achieves FID scores below 15 across all benchmarks and ranks first in clinician assessments, bridging the gap between "visually realistic" and "clinically interpretable". Moreover, DepthPilot-generated videos are expected to enable reliable 3D reconstruction, facilitating surgical navigation and blind-region identification, and to serve as a foundation toward a colorectal world model.
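
The abstract's "parameter-efficient fine-tuning" is not specified further; one common scheme it could resemble is a LoRA-style low-rank adapter, where the pretrained diffusion weight stays frozen and only two small rank-r factors are trained on the depth-alignment objective. The sketch below is an assumption about that mechanism, not DepthPilot's actual implementation:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Frozen weight W plus a low-rank adapter (LoRA-style sketch).

    Only A (down-projection) and B (up-projection) would receive
    gradients from the depth-alignment loss; W stays frozen, so the
    trainable parameter count scales with rank r, not d_in * d_out.
    """
    return x @ W.T + alpha * (x @ A.T) @ B.T

rng = np.random.default_rng(0)
d_in, d_out, r = 16, 16, 4
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init
x = rng.normal(size=(2, d_in))

# With B initialized to zero, the adapter starts as an exact no-op,
# so fine-tuning begins from the pretrained model's behavior.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

The zero initialization of `B` is the standard trick that lets the depth constraint be injected gradually without disturbing the backbone at step zero.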