Temporal Contrastive Decoding: A Training-Free Method for Large Audio-Language Models

arXiv cs.AI · April 20, 2026


Key Points

  • The paper identifies a temporal smoothing bias in the decoders of unified large audio-language models: transient acoustic cues are underused in favor of temporally smooth context that is better supported by language priors.
  • It introduces Temporal Contrastive Decoding (TCD), a training-free inference-time method that constructs a slow-path view by temporally blurring the input waveform and re-encoding it, then contrasts next-token logits between the original and slow-path views to correct the bias.
  • TCD applies a token-level logit update limited to a small candidate set, with a self-normalized stability score used to choose the blur window and update scale.
  • A step-wise gating mechanism based on prediction uncertainty and audio reliance triggers the update only when needed, minimizing unnecessary changes to the decoding.
  • Experiments on MMAU and AIR-Bench report consistent improvements on strong unified LALMs, supported by ablations and an architectural applicability analysis across model designs.
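The paper's exact update rule is not reproduced here, but the core idea in the second and third points above can be sketched as a standard contrastive-decoding logit adjustment restricted to a small candidate set. In this illustrative version, the candidate set is taken to be the top-p nucleus of the original distribution, and `alpha` stands in for the update scale that TCD derives from its stability score (both choices are assumptions for illustration, not the paper's specification):

```python
import numpy as np

def tcd_logit_update(orig_logits, slow_logits, alpha=1.0, top_p=0.9):
    """Contrast original vs. slow-path logits on a small candidate set.

    Hypothetical sketch: the candidate set is the top-p nucleus of the
    original distribution; all other token logits are left unchanged.
    """
    # Softmax over the original logits (numerically stable).
    probs = np.exp(orig_logits - orig_logits.max())
    probs /= probs.sum()

    # Candidate set: smallest prefix of tokens (by probability) whose
    # cumulative mass stays within top_p; fall back to the argmax alone.
    order = np.argsort(-probs)
    keep = order[np.cumsum(probs[order]) <= top_p]
    if keep.size == 0:
        keep = order[:1]

    # Boost tokens the full-resolution view prefers over the blurred view;
    # a positive gap means the token relies on transient acoustic detail.
    out = orig_logits.copy()
    out[keep] = orig_logits[keep] + alpha * (orig_logits[keep] - slow_logits[keep])
    return out
```

Tokens whose likelihood drops once the audio is blurred are exactly those grounded in transient cues, so amplifying the original-minus-slow gap counteracts the smoothing bias while the candidate-set restriction keeps the tail of the vocabulary untouched.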

Abstract

Large audio-language models (LALMs) generalize across speech, sound, and music, but unified decoders can exhibit a *temporal smoothing bias*: transient acoustic cues may be underutilized in favor of temporally smooth context that is better supported by language priors, leading to less specific audio-grounded outputs. We propose *Temporal Contrastive Decoding* (TCD), a training-free decoding method for unified LALMs that mitigates this effect at inference time. TCD constructs a temporally blurred slow-path view by smoothing the input waveform and re-encoding it, then contrasts next-token logits from the original and slow-path views. The contrastive signal is applied as a token-level logit update restricted to a small candidate set. A self-normalized stability score sets the blur window and update scale, and a step-wise gate based on uncertainty and audio reliance activates the update only when needed. Experiments on MMAU and AIR-Bench show consistent improvements on strong unified LALMs. We further conduct ablations and an architectural applicability study to analyze the contributions of key components and how TCD behaves across large audio-language model designs.
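Two ingredients of the method described above, the slow-path construction and the step-wise gate, can be sketched with simple stand-ins. A moving-average filter serves as the temporal blur, and Shannon entropy of the next-token distribution serves as the uncertainty signal; the window length and entropy threshold are hypothetical knobs here, whereas the paper selects the blur window via its self-normalized stability score and also gates on audio reliance, neither of which is reproduced:

```python
import numpy as np

def blur_waveform(wave, window=400):
    """Slow-path view: temporally smooth the waveform with a moving average.

    `window` (in samples) is an illustrative stand-in for the blur width
    that TCD chooses via its stability score.
    """
    kernel = np.ones(window) / window
    return np.convolve(wave, kernel, mode="same")

def should_update(logits, entropy_thresh=2.0):
    """Hypothetical step-wise gate: apply the contrastive update only when
    the next-token distribution is uncertain (high entropy)."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    entropy = -(p * np.log(p + 1e-12)).sum()
    return entropy > entropy_thresh
```

In a decoding loop, the blurred waveform would be re-encoded by the same audio encoder to produce the slow-path logits, and the gate would skip the update at steps where the model is already confident, matching the paper's goal of avoiding unnecessary changes.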