From Oracle to Noisy Context: Mitigating Contextual Exposure Bias in Speech-LLMs

arXiv cs.CL · March 26, 2026


Key Points

  • The paper identifies a train–test mismatch in contextual ASR with Speech-LLMs: models train on oracle conversation history but must rely on noisy, error-prone history at inference, which the authors call contextual exposure bias.
  • It proposes a unified robustness framework using (1) teacher-error knowledge via Whisper large-v3 hypotheses as training-time context, (2) context dropout to prevent over-reliance on history, and (3) Direct Preference Optimization (DPO) trained on curated failure cases.
  • Experiments on TED-LIUM 3 (in-domain) and zero-shot LibriSpeech (out-of-domain) show consistent improvements when using predicted-history decoding.
  • With a two-utterance history, SFT using Whisper hypotheses reduces WER from 5.59% (oracle-history training) to 5.47%, and applying DPO further improves WER to 5.17%.
  • Under irrelevant-context attacks, DPO shows the smallest WER degradation (5.17% → 5.63%), suggesting better robustness to misleading conversational context, and the authors provide code/models publicly.
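Two of the training-time ingredients above, teacher-error context and context dropout, can be sketched as a single prompt-assembly step. The function below is a minimal illustration, not the authors' implementation: the function name, the `p_drop` rate, and the two-utterance truncation constant are assumptions for the sketch (the paper does report using a two-utterance history).

```python
import random

def build_training_context(teacher_hypotheses, oracle_history,
                           p_drop=0.3, use_teacher=True):
    """Assemble the conversation-history prompt for one training example.

    Hypothetical sketch of two ideas from the paper:
    - teacher-error knowledge: use Whisper-generated hypotheses, which
      carry realistic ASR errors, instead of oracle transcripts;
    - context dropout: with probability p_drop, drop the history entirely
      so the model cannot over-rely on it.
    The p_drop value and all names here are illustrative.
    """
    if random.random() < p_drop:
        return ""  # no history: the model must rely on the audio alone
    history = teacher_hypotheses if use_teacher else oracle_history
    # Keep only the two most recent utterances, matching the paper's setup.
    return " ".join(history[-2:])
```

Setting `p_drop=0.0` or `1.0` makes the behavior deterministic, which is convenient for unit-testing the context pipeline.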

Abstract

Contextual automatic speech recognition (ASR) with Speech-LLMs is typically trained with oracle conversation history, but relies on error-prone history at inference, causing a train–test mismatch in the context channel that we term contextual exposure bias. We propose a unified training framework to improve robustness under realistic histories: (i) Teacher Error Knowledge, using Whisper large-v3 hypotheses as training-time history, (ii) Context Dropout, to regularize over-reliance on history, and (iii) Direct Preference Optimization (DPO) on curated failure cases. Experiments on TED-LIUM 3 (in-domain) and zero-shot LibriSpeech (out-of-domain) show consistent gains under predicted-history decoding. With a two-utterance history as context, SFT with Whisper hypotheses reduces WER from 5.59% (oracle-history training) to 5.47%, and DPO further improves WER to 5.17%. Under irrelevant-context attacks, DPO yields the smallest degradation (5.17% → 5.63%), indicating improved robustness to misleading context. Our code and models are available at https://github.com/XYGuo1996/Contextual_Speech_LLMs.
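The DPO stage trains on curated failure cases: utterances where the model's transcript under predicted history diverges from the reference. A minimal sketch of that curation step is below; the function name, the dict keys, and the `wer_fn` interface are assumptions for illustration, not details from the paper.

```python
def curate_dpo_pairs(examples, wer_fn):
    """Build DPO preference pairs from model failure cases.

    Hypothetical sketch: keep only utterances where the model erred
    (nonzero WER against the reference), pairing the reference as the
    preferred ('chosen') output with the faulty hypothesis as
    'rejected'. `examples` is a list of dicts with 'audio',
    'reference', and 'hypothesis' keys; all names are illustrative.
    """
    pairs = []
    for ex in examples:
        if wer_fn(ex["reference"], ex["hypothesis"]) > 0.0:  # a failure case
            pairs.append({
                "prompt": ex["audio"],
                "chosen": ex["reference"],
                "rejected": ex["hypothesis"],
            })
    return pairs
```

Restricting the preference data to genuine failures keeps the DPO signal focused on the contexts that actually mislead the model, rather than re-teaching examples it already transcribes correctly.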