Hierarchical Codec Diffusion for Video-to-Speech Generation

arXiv cs.CV, April 20, 2026


Key Points

  • The paper addresses Video-to-Speech generation by arguing that prior methods miss the hierarchical structure of speech, from coarse speaker-aware meaning to fine-grained prosody.
  • It introduces HiCoDiT, a Hierarchical Codec Diffusion Transformer that leverages the multi-level hierarchy of RVQ-based discrete speech tokens to improve audio-visual alignment.
  • HiCoDiT uses separate low-level and high-level diffusion blocks: low-level tokens are conditioned on lip-synchronized motion and facial identity, while high-level tokens use facial expressions to shape prosodic behavior.
  • To better transfer information from coarse to fine levels, the authors propose a dual-scale adaptive instance normalization that combines channel-wise (global vocal style) and temporal-wise (local prosody dynamics) normalization.
  • Experiments reportedly show improved fidelity and expressiveness versus baselines, and the project provides code and a speech demo via the linked GitHub repository.
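The dual-scale normalization in the fourth point can be pictured as two complementary paths: an instance-norm-style path that normalizes each channel over time and is modulated by a pooled (global) style condition, and a layer-norm-style path that normalizes each frame over channels and is modulated by a per-frame condition. The sketch below is a minimal, illustrative PyTorch version under those assumptions; the class and parameter names are hypothetical, and the paper's actual fusion of the two paths may differ.

```python
import torch
import torch.nn as nn

class DualScaleAdaLN(nn.Module):
    """Illustrative dual-scale adaptive normalization (hypothetical names).

    Channel-wise path: normalize each channel over time, then modulate with
    scale/shift predicted from a pooled condition (global vocal style).
    Temporal-wise path: normalize each frame over channels, then modulate
    with per-frame scale/shift from the frame-level condition (local prosody).
    """

    def __init__(self, channels: int, cond_dim: int):
        super().__init__()
        # Per-channel scale/shift predicted from the time-pooled condition.
        self.to_channel_mod = nn.Linear(cond_dim, 2 * channels)
        # Per-frame scale/shift predicted from the frame-level condition.
        self.to_temporal_mod = nn.Linear(cond_dim, 2)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); cond: (batch, time, cond_dim)
        eps = 1e-5

        # Channel-wise (instance-norm style): statistics over the time axis.
        x_c = (x - x.mean(dim=2, keepdim=True)) / (x.std(dim=2, keepdim=True) + eps)
        gamma_c, beta_c = self.to_channel_mod(cond.mean(dim=1)).chunk(2, dim=-1)
        x_c = gamma_c.unsqueeze(-1) * x_c + beta_c.unsqueeze(-1)

        # Temporal-wise (layer-norm style): statistics over the channel axis.
        x_t = (x - x.mean(dim=1, keepdim=True)) / (x.std(dim=1, keepdim=True) + eps)
        gamma_t, beta_t = self.to_temporal_mod(cond).chunk(2, dim=-1)  # each (B, T, 1)
        x_t = gamma_t.transpose(1, 2) * x_t + beta_t.transpose(1, 2)

        # A simple additive fusion of the two scales (an assumption here).
        return x_c + x_t
```

Separating the two paths keeps global style (which should be constant over an utterance) from fighting with frame-local prosody modulation, which is the intuition the summary attributes to the paper.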

Abstract

Video-to-Speech (VTS) generation aims to synthesize speech from a silent video without auditory signals. However, existing VTS methods disregard the hierarchical nature of speech, which spans coarse speaker-aware semantics to fine-grained prosodic details. This oversight hinders direct alignment between visual and speech features at specific hierarchical levels during property matching. In this paper, leveraging the hierarchical structure of Residual Vector Quantization (RVQ)-based codec, we propose HiCoDiT, a novel Hierarchical Codec Diffusion Transformer that exploits the inherent hierarchy of discrete speech tokens to achieve strong audio-visual alignment. Specifically, since lower-level tokens encode coarse speaker-aware semantics and higher-level tokens capture fine-grained prosody, HiCoDiT employs low-level and high-level blocks to generate tokens at different levels. The low-level blocks condition on lip-synchronized motion and facial identity to capture speaker-aware content, while the high-level blocks use facial expression to modulate prosodic dynamics. Finally, to enable more effective coarse-to-fine conditioning, we propose a dual-scale adaptive instance layer normalization that jointly captures global vocal style through channel-wise normalization and local prosody dynamics through temporal-wise normalization. Extensive experiments demonstrate that HiCoDiT outperforms baselines in fidelity and expressiveness, highlighting the potential of discrete modelling for VTS. The code and speech demo are both available at https://github.com/Jiaxin-Ye/HiCoDiT.
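The hierarchy the abstract leans on comes from residual vector quantization itself: level 0 quantizes the feature directly (coarse content), and each later level quantizes the residual the previous levels left behind (progressively finer detail). The toy sketch below illustrates that mechanism only; real neural codecs learn the codebooks, and the function name and shapes here are assumptions for illustration.

```python
import torch

def rvq_encode(x: torch.Tensor, codebooks: list[torch.Tensor]):
    """Toy residual vector quantization (illustrative, untrained codebooks).

    x: (T, D) frame features; each codebook: (K, D) codewords.
    Level 0 captures the coarsest approximation; each subsequent level
    quantizes the remaining residual, adding finer-grained detail.
    """
    residual = x
    indices = []
    quantized = torch.zeros_like(x)
    for cb in codebooks:
        d = torch.cdist(residual, cb)   # (T, K) distances to codewords
        idx = d.argmin(dim=1)           # nearest codeword per frame
        q = cb[idx]                     # (T, D) quantized residual
        quantized = quantized + q       # running coarse-to-fine reconstruction
        residual = residual - q         # what is left for the next level
        indices.append(idx)
    return indices, quantized
```

In HiCoDiT's terms, the per-level index sequences are the discrete tokens: the low-level ones are what the lip-motion/identity-conditioned blocks generate, and the higher residual levels are what the expression-conditioned blocks generate.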