DiffVC: A Non-autoregressive Framework Based on Diffusion Model for Video Captioning

arXiv cs.CV / 4/10/2026


Key Points

  • DiffVC proposes a non-autoregressive video captioning framework that uses a diffusion model to overcome the slow generation speed and cumulative error typical of autoregressive encoder-decoder approaches.
  • The method encodes videos into visual representations, injects Gaussian noise into the ground-truth text during training, and uses a discriminative conditional denoiser constrained by the visual features to generate new text representations.
  • During inference, DiffVC starts from noise sampled from a Gaussian distribution and decodes all caption positions in parallel, avoiding token-by-token autoregressive decoding.
  • Experiments on MSVD, MSR-VTT, and VATEX indicate improved caption quality versus prior non-autoregressive methods, with gains of up to +9.9 CIDEr and +2.6 B@4, while achieving performance comparable to autoregressive methods at faster generation speed.
  • The authors state that the source code will be available soon, which may accelerate adoption and further benchmarking of diffusion-based non-autoregressive captioning.
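The training procedure sketched in the key points (noising the ground-truth caption embedding with Gaussian noise) follows the standard forward diffusion process. Below is a minimal, hedged sketch of that noising step; the linear schedule, step count, and embedding shapes are illustrative assumptions, not the paper's exact hyperparameters.

```python
import numpy as np

# Hypothetical forward-diffusion (noising) step applied to the ground-truth
# caption embedding during training. Schedule and shapes are assumptions.

T = 1000                             # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule (assumed)
alpha_bar = np.cumprod(1.0 - betas)  # cumulative signal-retention terms

def noise_text(x0: np.ndarray, t: int, rng: np.random.Generator):
    """Sample x_t ~ q(x_t | x_0): mix the clean caption embedding with Gaussian noise."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((20, 512))  # (caption length, embedding dim) -- assumed sizes
xt, eps = noise_text(x0, t=500, rng=rng)
```

During training, the discriminative conditional denoiser would take `xt` together with the encoded video features and be optimized to recover the clean text representation.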

Abstract

Current video captioning methods usually use an encoder-decoder structure to generate text autoregressively. However, autoregressive methods have inherent limitations such as slow generation speed and large cumulative error. Furthermore, the few non-autoregressive counterparts suffer from deficiencies in generation quality due to insufficient multimodal interaction modeling. Therefore, we propose a non-autoregressive framework based on Diffusion model for Video Captioning (DiffVC) to address these issues. Its parallel decoding effectively solves the problems of generation speed and cumulative error, while our proposed discriminative conditional Diffusion Model generates higher-quality textual descriptions. Specifically, we first encode the video into a visual representation. During training, Gaussian noise is added to the textual representation of the ground-truth caption. Then, a new textual representation is generated via the discriminative denoiser with the visual representation as a conditional constraint. Finally, we input the new textual representation into a non-autoregressive language model to generate captions. During inference, we directly sample noise from the Gaussian distribution for generation. Experiments on MSVD, MSR-VTT, and VATEX show that our method outperforms previous non-autoregressive methods and achieves performance comparable to autoregressive methods, e.g., improvements of up to 9.9 CIDEr and 2.6 B@4, while having faster generation speed. The source code will be available soon.
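The inference path described above (sample noise, then refine all positions in parallel under a visual condition) can be sketched as follows. The "denoiser" here is a hypothetical stand-in that nudges the noisy state toward the video-conditioned mean; in DiffVC it is the learned discriminative conditional denoiser, and the step count, dimensions, and update rule are all assumptions for illustration.

```python
import numpy as np

# Illustrative sketch of diffusion-based non-autoregressive decoding:
# every caption position is refined simultaneously, conditioned on video
# features, instead of being emitted token by token.

T = 50          # inference refinement steps (assumed)
L, D = 20, 512  # caption length and embedding dimension (assumed)

def denoiser(x_t, t, video_feat):
    # Hypothetical stand-in for the learned conditional denoiser: blend the
    # current noisy state with the video condition.
    return 0.9 * x_t + 0.1 * video_feat

def sample_caption_embeddings(video_feat, rng):
    x = rng.standard_normal((L, D))  # start from pure Gaussian noise
    for t in reversed(range(T)):     # iterative refinement, all positions at once
        x = denoiser(x, t, video_feat)  # simplified update without re-noising (assumed)
    return x

rng = np.random.default_rng(1)
video_feat = rng.standard_normal((L, D))
emb = sample_caption_embeddings(video_feat, rng)
```

The refined embeddings would then be passed to the non-autoregressive language model, which maps all positions to caption tokens in a single parallel pass; this parallelism is what removes the token-by-token latency of autoregressive decoding.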