From Oracle to Noisy Context: Mitigating Contextual Exposure Bias in Speech-LLMs
arXiv cs.CL / 3/26/2026
Key Points
- The paper identifies a train–test mismatch in contextual ASR with Speech-LLMs: models train on oracle conversation history but must rely on noisy, error-prone history at inference, which the authors call contextual exposure bias.
- It proposes a unified robustness framework using (1) teacher-error knowledge via Whisper large-v3 hypotheses as training-time context, (2) context dropout to prevent over-reliance on history, and (3) Direct Preference Optimization (DPO) trained on curated failure cases.
- Experiments on TED-LIUM 3 (in-domain) and zero-shot LibriSpeech (out-of-domain) show consistent improvements when using predicted-history decoding.
- With a two-utterance history, SFT using Whisper hypotheses reduces WER from 5.59% (oracle-history training) to 5.47%, and applying DPO further improves WER to 5.17%.
- Under irrelevant-context attacks, DPO shows the smallest WER degradation (5.17% → 5.63%), suggesting better robustness to misleading conversational context, and the authors provide code/models publicly.
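The training-time recipe in the key points can be sketched as follows. This is a minimal, hypothetical illustration of two of the three ingredients (teacher-error context from Whisper hypotheses, plus context dropout); the function name, dropout probability, and two-utterance window are assumptions for the sketch, not the authors' exact settings.

```python
import random

def build_training_context(whisper_history, oracle_history,
                           dropout_p=0.3, rng=None):
    """Assemble conversation-history context for one training example.

    Hypothetical sketch: prefer noisy teacher (e.g. Whisper large-v3)
    hypotheses over oracle transcripts as context, so training matches
    the error-prone predicted history seen at inference time, and with
    probability `dropout_p` drop the history entirely so the model does
    not over-rely on it.
    """
    rng = rng or random.Random()
    if rng.random() < dropout_p:
        # context dropout: train this example with no history at all
        return ""
    # teacher-error knowledge: use noisy hypotheses when available,
    # falling back to oracle transcripts otherwise
    history = whisper_history if whisper_history else oracle_history
    # keep only the last two utterances (two-utterance history window)
    return " ".join(history[-2:])

# Example: with history kept, the last two utterances form the context
ctx = build_training_context(
    ["hello there", "how are you", "fine thanks"],
    ["HELLO THERE"],
    rng=random.Random(0),
)
```

DPO on curated failure cases would then be applied on top of a model fine-tuned with contexts built this way; that stage is not sketched here.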