
Uni-ASR: Unified LLM-Based Architecture for Non-Streaming and Streaming Automatic Speech Recognition

arXiv cs.CL / 3/13/2026


Key Points

  • Uni-ASR introduces a unified LLM-based architecture that supports both non-streaming and streaming automatic speech recognition without requiring architectural changes.
  • It presents a joint training paradigm that lets a single set of weights switch seamlessly between the two recognition modes, increasing deployment flexibility across latency scenarios (a minimal sketch of this mode switch follows the list).
  • A context-aware training paradigm and a co-designed fallback decoding strategy are proposed to boost streaming accuracy without adding latency.
  • Experimental results show competitive non-streaming performance and strong streaming effectiveness across diverse latency constraints.
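One way to picture the unified design is a single model whose audio attention mask changes with the mode: full context for non-streaming, chunk-limited context for streaming. The sketch below is a minimal Python/PyTorch illustration of that idea under our own assumptions, not the paper's implementation; the function name `build_audio_mask` and the chunk size are hypothetical.

```python
import torch

def build_audio_mask(num_frames: int, streaming: bool, chunk_size: int = 16) -> torch.Tensor:
    """Boolean mask where mask[i, j] is True if frame i may attend to frame j."""
    if not streaming:
        # Non-streaming: every frame attends to the whole utterance.
        return torch.ones(num_frames, num_frames, dtype=torch.bool)
    # Streaming: frame i attends only up to the end of its own chunk,
    # so decoding can begin before the utterance is complete.
    idx = torch.arange(num_frames)
    chunk_end = (idx // chunk_size + 1) * chunk_size  # first frame after i's chunk
    return idx.unsqueeze(0) < chunk_end.unsqueeze(1)

# The same weights serve both modes; only the mask differs.
full_mask = build_audio_mask(64, streaming=False)
stream_mask = build_audio_mask(64, streaming=True, chunk_size=16)
```

Under this reading, joint training could be as simple as sampling the mode per example and passing the corresponding mask, which is one plausible way a single model learns both behaviors without architectural changes.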

Abstract

Although the deep integration of Automatic Speech Recognition (ASR) systems with Large Language Models (LLMs) has significantly improved accuracy, deploying such systems in low-latency streaming scenarios remains challenging. In this paper, we propose Uni-ASR, a unified LLM-based framework that integrates both non-streaming and streaming speech recognition capabilities. A joint training paradigm enables the system to transition seamlessly between the two recognition modes without any architectural modifications. Furthermore, we introduce a context-aware training paradigm and a co-designed fallback decoding strategy, which enhance streaming recognition accuracy without introducing additional latency. Experimental results show that Uni-ASR not only achieves competitive performance in non-streaming mode but also remains strongly effective in streaming scenarios under diverse latency constraints.
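The abstract does not spell out the fallback mechanism, but a confidence-triggered re-decode is one natural reading of "fallback decoding without additional latency". The sketch below illustrates that reading only: each chunk is first decoded on its own, and when the hypothesis confidence is low it is re-decoded conditioned on the accumulated transcript. Both passes use only audio already received, so compute grows but waiting time does not. `decode`, `Hypothesis`, and the 0.5 threshold are hypothetical stand-ins, not the paper's API.

```python
from typing import Callable, List, Tuple

Hypothesis = Tuple[List[str], float]  # (tokens, mean per-token confidence)

def streaming_decode(
    chunks: List[bytes],
    decode: Callable[[bytes, List[str]], Hypothesis],
    threshold: float = 0.5,
) -> List[str]:
    """Decode audio chunk by chunk, falling back to context-conditioned
    re-decoding when the first-pass hypothesis looks unreliable."""
    transcript: List[str] = []
    for chunk in chunks:
        # First pass: decode the new chunk in isolation (fast partial result).
        tokens, confidence = decode(chunk, [])
        if confidence < threshold:
            # Fallback: re-decode the same chunk conditioned on the running
            # transcript. Only already-received audio is involved, so the
            # fallback adds compute but no extra waiting.
            tokens, confidence = decode(chunk, transcript)
        transcript.extend(tokens)
    return transcript
```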