AI Navigate

Latent Speech-Text Transformer

arXiv cs.CL / 11 Mar 2026

Ideas & Deep Analysis · Models & Research

Key Points

  • The Latent Speech-Text Transformer (LST) improves compute efficiency in auto-regressive speech-text models by aggregating speech tokens into latent speech patches, better aligning sequence modeling granularity between speech and text.
  • LST enhances cross-modal knowledge transfer and captures recurring acoustic patterns, leading to significant improvements in speech understanding accuracy and text performance across benchmarks.
  • Under compute-controlled training, LST achieves up to a 6.5% absolute gain on speech tasks (5.3% under data-controlled training), and the benefits persist in models of up to 7 billion parameters.
  • The model stabilizes ASR adaptation and reduces the effective autoregressive sequence length during ASR and TTS inference, lowering computational cost without sacrificing reconstruction quality.
  • The LST approach provides a more efficient and scalable framework for speech-text modeling, with open-source code available for the research community to build upon.
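The patch-aggregation idea in the first key point can be sketched as follows. This is a minimal illustration only: the patch size, the pooling choice (mean-pooling), and the function name are assumptions for demonstration, not the paper's actual aggregation mechanism.

```python
import numpy as np

def aggregate_into_patches(token_embs: np.ndarray, patch_size: int) -> np.ndarray:
    """Pool consecutive speech-token embeddings into latent patches.

    token_embs: (seq_len, dim) array of speech-token embeddings.
    Returns: (ceil(seq_len / patch_size), dim) array of patch embeddings.
    """
    seq_len, dim = token_embs.shape
    # Pad so the sequence divides evenly into patches.
    pad = (-seq_len) % patch_size
    if pad:
        token_embs = np.vstack([token_embs, np.zeros((pad, dim))])
    # Group every `patch_size` consecutive tokens and pool each group.
    patches = token_embs.reshape(-1, patch_size, dim)
    return patches.mean(axis=1)  # simple mean-pooling as a stand-in

# A 100-token speech segment with patch_size=4 becomes 25 autoregressive
# units, shrinking the sequence the transformer must model by 4x.
embs = np.random.randn(100, 16)
patches = aggregate_into_patches(embs, patch_size=4)
print(patches.shape)  # (25, 16)
```

The point of the sketch is the granularity change: the transformer then predicts 25 higher-level units instead of 100 raw speech tokens, bringing the speech sequence length closer to that of the interleaved text.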

Computer Science > Computation and Language

arXiv:2510.06195 (cs)
[Submitted on 7 Oct 2025 (v1), last revised 9 Mar 2026 (this version, v2)]

Title: Latent Speech-Text Transformer

Authors: Yen-Ju Lu and 10 other authors
Abstract: Auto-regressive speech-text models pre-trained on interleaved text tokens and discretized speech tokens demonstrate strong speech understanding and generation, yet remain substantially less compute-efficient than text LLMs, partly due to the much longer sequences of speech tokens relative to text. This modality imbalance disproportionately allocates pre-training and inference compute to speech, potentially hindering effective cross-modal alignment and slowing performance scaling by orders of magnitude. We introduce the Latent Speech-Text Transformer (LST), which aggregates speech tokens into latent speech patches that serve as higher-level autoregressive units. This design aligns the sequence-modeling granularity between speech and text while improving computational efficiency. The resulting patches can align with textual units to facilitate cross-modal knowledge transfer and compactly capture recurring acoustic patterns such as silence. Across story-completion benchmarks under both compute-controlled and data-controlled settings, LST consistently improves speech accuracy while also improving text performance, achieving up to +6.5% absolute gain on speech HellaSwag in compute-controlled training (+5.3% in data-controlled training). Under compute-controlled scaling from 420M to 1.8B parameters in a near compute-optimal regime, gains grow with scale, and improvements persist up to 7B parameters under fixed-token budgets. These benefits extend to downstream tasks: LST stabilizes ASR adaptation and reduces the effective autoregressive sequence length during ASR and TTS inference, lowering computational cost without degrading reconstruction quality. The code is available at this https URL.
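The abstract's compute argument rests on the fact that self-attention cost grows roughly quadratically with sequence length, so replacing L speech tokens with L/k latent patches cuts attention compute by about k². A back-of-the-envelope illustration (the numbers here are assumed, not taken from the paper):

```python
def attention_cost_ratio(seq_len: int, patch_size: int) -> float:
    """Ratio of token-level to patch-level self-attention cost (~ n^2)."""
    patched_len = -(-seq_len // patch_size)  # ceiling division
    return (seq_len ** 2) / (patched_len ** 2)

# e.g. a 4000-token speech segment aggregated with patch size 4:
print(attention_cost_ratio(4000, 4))  # 16.0
```

This ignores the (small) cost of the patch encoder/decoder itself, but conveys why shrinking the effective autoregressive sequence length helps both pre-training and ASR/TTS inference.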
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Audio and Speech Processing (eess.AS)
Cite as: arXiv:2510.06195 [cs.CL]
  (or arXiv:2510.06195v2 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2510.06195
arXiv-issued DOI via DataCite

Submission history

From: Yen-Ju Lu
[v1] Tue, 7 Oct 2025 17:52:08 UTC (2,993 KB)
[v2] Mon, 9 Mar 2026 19:57:30 UTC (2,969 KB)