MoshiRAG: Asynchronous Knowledge Retrieval for Full-Duplex Speech Language Models

arXiv cs.CL / April 15, 2026


Key Points

  • The paper introduces MoshiRAG, a modular retrieval-augmented approach aimed at improving factuality in full-duplex speech-to-speech language models without relying on costly model scaling.
  • It uses an asynchronous framework that triggers retrieval only for knowledge-demanding queries and exploits the natural timing gap between response onset and the delivery of core information, completing retrieval without disrupting turn-taking.
  • MoshiRAG combines a compact full-duplex interface with selective retrieval from stronger external knowledge sources to maintain real-time interactivity (pauses, interruptions, backchannels).
  • The authors report factuality comparable to leading publicly released non-duplex speech language models while preserving full-duplex responsiveness.
  • The design is claimed to be plug-and-play, allowing different retrieval methods to be swapped in without retraining, with additional strong results on out-of-domain mathematical reasoning tasks.

Abstract

Speech-to-speech language models have recently emerged to enhance the naturalness of conversational AI. In particular, full-duplex models are distinguished by their real-time interactivity, including handling of pauses, interruptions, and backchannels. However, improving their factuality remains an open challenge. While scaling the model size could address this gap, it would make real-time inference prohibitively expensive. In this work, we propose MoshiRAG, a modular approach that combines a compact full-duplex interface with selective retrieval to access more powerful knowledge sources. Our asynchronous framework enables the model to identify knowledge-demanding queries and ground its responses in external information. By leveraging the natural temporal gap between response onset and the delivery of core information, the retrieval process can be completed while maintaining a natural conversation flow. With this approach, MoshiRAG achieves factuality comparable to the best publicly released non-duplex speech language models while preserving the interactivity inherent to full-duplex systems. Moreover, our flexible design supports plug-and-play retrieval methods without retraining and demonstrates strong performance on out-of-domain mathematical reasoning tasks.
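The asynchronous idea described above, firing retrieval as soon as a knowledge-demanding query is detected and hiding its latency behind the conversational onset (filler speech, acknowledgments), can be sketched as follows. All names, the trigger heuristic, and the timings here are illustrative assumptions, not details from the paper.

```python
import asyncio

def needs_knowledge(query: str) -> bool:
    # Toy trigger: treat factual wh-questions as knowledge-demanding.
    # The paper's actual query classifier is not specified here.
    return query.lower().startswith(("who", "what", "when", "where"))

async def retrieve(query: str) -> str:
    # Stand-in for an external retriever; its latency is meant to be
    # hidden behind the filler speech emitted below.
    await asyncio.sleep(0.05)
    return f"[retrieved context for: {query}]"

async def respond(query: str) -> list[str]:
    """Emit onset speech immediately; ground the core answer once
    retrieval has completed."""
    chunks = []
    if needs_knowledge(query):
        task = asyncio.create_task(retrieve(query))   # start retrieval early
        chunks.append("Hmm, let me think...")         # natural response onset
        context = await task                          # ready before core info
        chunks.append(f"Based on {context}, here is the answer.")
    else:
        chunks.append("Sure, that's easy.")
    return chunks

out = asyncio.run(respond("Who wrote The Odyssey?"))
```

The key design point mirrored here is that the retrieval task and the onset speech run concurrently, so the user hears an immediate, natural reaction while the grounded content arrives a moment later.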