AI Navigate

SENS-ASR: Semantic Embedding injection in Neural-transducer for Streaming Automatic Speech Recognition

arXiv cs.AI / 3/12/2026

💬 Opinion · Models & Research

Key Points

  • SENS-ASR proposes injecting semantic information from past frame-embeddings into a streaming neural transducer to boost transcription accuracy under low-latency constraints.
  • A context module extracts semantic cues from past embeddings and is trained with knowledge distillation from a sentence-embedding language model fine-tuned on transcriptions.
  • Experiments on standard datasets show that SENS-ASR yields significant Word Error Rate improvements in small-chunk streaming scenarios.
  • The work addresses the core challenge of limited future context in streaming ASR by leveraging semantic information to compensate for context loss.
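To make the training objective concrete, here is a minimal, illustrative sketch of the distillation setup described above: a toy "context module" pools past frame-embeddings and projects them into the teacher's space, and a distillation loss pulls its output toward the sentence-embedding teacher. The module shape, pooling choice, and cosine-based loss are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def context_module(past_frames, W):
    """Toy context module (assumed design): mean-pool the past
    frame-embeddings, then linearly project into the teacher's
    semantic embedding space."""
    pooled = past_frames.mean(axis=0)  # (d_frame,)
    return pooled @ W                  # (d_sem,)

def distill_loss(student_emb, teacher_emb):
    """Knowledge-distillation loss (assumed form): 1 - cosine
    similarity between the student's semantic vector and the
    teacher sentence embedding."""
    cos = np.dot(student_emb, teacher_emb) / (
        np.linalg.norm(student_emb) * np.linalg.norm(teacher_emb))
    return 1.0 - cos

d_frame, d_sem, n_past = 80, 384, 20
past_frames = rng.standard_normal((n_past, d_frame))  # past frame-embeddings
teacher_emb = rng.standard_normal(d_sem)              # sentence-LM embedding
W = rng.standard_normal((d_frame, d_sem)) * 0.01      # student projection

loss = distill_loss(context_module(past_frames, W), teacher_emb)
print(f"distillation loss: {loss:.3f}")
```

In training, this loss would be minimized jointly with the transducer's usual ASR objective, so the context module learns to summarize past acoustics into a semantically meaningful vector.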

Abstract

Many Automatic Speech Recognition (ASR) applications require streaming processing of the audio data. In streaming mode, ASR systems must start transcribing the input stream before it is complete, i.e., they have to process a stream of inputs with limited (or no) future context. Compared to offline mode, this reduction of future context degrades the performance of Streaming-ASR systems, especially when operating under low-latency constraints. In this work, we present SENS-ASR, an approach to enhance the transcription quality of Streaming-ASR by reinforcing the acoustic information with semantic information. This semantic information is extracted from the available past frame-embeddings by a context module. This module is trained using knowledge distillation from a sentence-embedding language model fine-tuned on the training dataset transcriptions. Experiments on standard datasets show that SENS-ASR significantly improves the Word Error Rate in small-chunk streaming scenarios.
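The "limited (or no) future context" constraint can be illustrated with a tiny sketch of chunk-wise streaming: each chunk is decoded having seen only a small fixed lookahead of future frames, and the final chunk sees none. The chunk size and lookahead values are made up for illustration.

```python
def stream_chunks(frames, chunk_size, lookahead):
    """Split a frame sequence into streaming chunks. Each chunk may
    only peek at `lookahead` future frames, mimicking the low-latency
    setting where offline-style full future context is unavailable."""
    for start in range(0, len(frames), chunk_size):
        chunk = frames[start:start + chunk_size]
        future = frames[start + chunk_size:start + chunk_size + lookahead]
        yield chunk, future

frames = list(range(10))  # stand-in for acoustic frame indices
for chunk, future in stream_chunks(frames, chunk_size=4, lookahead=2):
    print(chunk, "sees future", future)
```

The smaller the chunk and lookahead, the lower the latency but the less context the recognizer has, which is exactly the regime where injecting semantic information from the already-seen past is meant to compensate.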