AI Navigate

Triple X: A LLM-Based Multilingual Speech Recognition System for the INTERSPEECH2025 MLC-SLM Challenge

arXiv cs.CL / March 16, 2026


Key Points

  • The Triple X system uses an encoder-adapter-LLM architecture to tackle multilingual conversational speech recognition in the MLC-SLM Challenge Task 1.
  • It combines the reasoning capabilities of text-based large language models with domain-specific adaptations, trained via a carefully designed multi-stage pipeline on large multilingual audio datasets.
  • Experimental results show competitive Word Error Rate (WER) on both development and test sets, with the approach achieving second place in the challenge.
  • The work highlights the viability of integrating encoder-adapter frameworks with LLMs to improve multilingual ASR performance and suggests avenues for further improvement.
  • By sharing architecture and training strategies, the paper contributes a practical blueprint for researchers aiming to leverage multilingual data and LLMs in speech recognition.
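
The encoder-adapter-LLM pattern described above can be sketched as follows. This is a minimal illustrative mock-up, not the authors' implementation: all module sizes, the toy Transformer standing in for the LLM, and the 4x convolutional downsampling are assumptions for demonstration only.

```python
import torch
import torch.nn as nn

# Illustrative dimensions only; the paper does not publish its exact sizes.
N_MELS, ENC_DIM, LLM_DIM, VOCAB = 80, 256, 512, 1000

class SpeechEncoder(nn.Module):
    """Stand-in for a pretrained acoustic encoder: downsamples mel frames 4x."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(N_MELS, ENC_DIM, kernel_size=4, stride=4)

    def forward(self, mels):                      # (batch, N_MELS, frames)
        return self.conv(mels).transpose(1, 2)    # (batch, frames // 4, ENC_DIM)

class Adapter(nn.Module):
    """Small MLP bridging encoder features into the LLM embedding space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ENC_DIM, LLM_DIM), nn.GELU(), nn.Linear(LLM_DIM, LLM_DIM))

    def forward(self, feats):
        return self.net(feats)

class EncoderAdapterLLM(nn.Module):
    """Prepends adapted speech embeddings to prompt-token embeddings, then
    runs a (toy) Transformer standing in for the LLM to predict text tokens."""
    def __init__(self):
        super().__init__()
        self.encoder = SpeechEncoder()
        self.adapter = Adapter()
        self.embed = nn.Embedding(VOCAB, LLM_DIM)
        layer = nn.TransformerEncoderLayer(LLM_DIM, nhead=8, batch_first=True)
        self.llm = nn.TransformerEncoder(layer, num_layers=2)  # toy LLM body
        self.lm_head = nn.Linear(LLM_DIM, VOCAB)

    def forward(self, mels, prompt_ids):
        speech = self.adapter(self.encoder(mels))   # (B, T_s, LLM_DIM)
        text = self.embed(prompt_ids)               # (B, T_t, LLM_DIM)
        seq = torch.cat([speech, text], dim=1)      # speech first, then prompt
        return self.lm_head(self.llm(seq))          # (B, T_s + T_t, VOCAB)

model = EncoderAdapterLLM()
mels = torch.randn(2, N_MELS, 64)          # 2 utterances, 64 mel frames each
prompt = torch.randint(0, VOCAB, (2, 8))   # 8 prompt tokens per utterance
logits = model(mels, prompt)
print(logits.shape)                        # torch.Size([2, 24, 1000])
```

In a multi-stage training setup of this kind, one common choice is to freeze the pretrained encoder and LLM at first and train only the adapter, then progressively unfreeze components; the paper's specific schedule is not detailed in this summary.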

Abstract

This paper describes our Triple X speech recognition system submitted to Task 1 of the Multi-Lingual Conversational Speech Language Modeling (MLC-SLM) Challenge. Our work focuses on optimizing speech recognition accuracy in multilingual conversational scenarios through an innovative encoder-adapter-LLM architecture. This framework harnesses the powerful reasoning capabilities of text-based large language models while incorporating domain-specific adaptations. To further enhance multilingual recognition performance, we adopted a meticulously designed multi-stage training strategy leveraging extensive multilingual audio datasets. Experimental results demonstrate that our approach achieves competitive Word Error Rate (WER) performance on both dev and test sets, obtaining second place in the challenge ranking.