AI Navigate

Latent Speech-Text Transformer

arXiv cs.CL · 11 Mar 2026

Ideas & Deep Analysis · Models & Research

Key Points

  • The Latent Speech-Text Transformer (LST) improves the compute efficiency of auto-regressive speech-text models by aggregating speech tokens into latent speech patches, better aligning the sequence-modeling granularity between speech and text.
  • By strengthening cross-modal knowledge transfer and compactly capturing recurring acoustic patterns, LST yields substantial gains in speech understanding accuracy and in text performance across benchmarks.
  • LST achieves up to a +6.5% absolute gain on speech tasks under compute-controlled training (+5.3% under data-controlled training), with improvements persisting up to the 7B-parameter scale.
  • The model shortens the effective autoregressive sequence length during ASR and TTS inference, lowering computational cost, and stabilizes ASR adaptation without degrading reconstruction quality.
  • LST offers a more efficient and scalable framework for speech-text modeling, with open-source code available to the research community.

Computer Science > Computation and Language

arXiv:2510.06195 (cs)
[Submitted on 7 Oct 2025 (v1), last revised 9 Mar 2026 (this version, v2)]

Title: Latent Speech-Text Transformer

Authors: Yen-Ju Lu and 10 other authors
Abstract: Auto-regressive speech-text models pre-trained on interleaved text tokens and discretized speech tokens demonstrate strong speech understanding and generation, yet remain substantially less compute-efficient than text LLMs, partly due to the much longer sequences of speech tokens relative to text. This modality imbalance disproportionately allocates pre-training and inference compute to speech, potentially hindering effective cross-modal alignment and slowing performance scaling by orders of magnitude. We introduce the Latent Speech-Text Transformer (LST), which aggregates speech tokens into latent speech patches that serve as higher-level autoregressive units. This design aligns the sequence-modeling granularity between speech and text while improving computational efficiency. The resulting patches can align with textual units to facilitate cross-modal knowledge transfer and compactly capture recurring acoustic patterns such as silence. Across story-completion benchmarks under both compute-controlled and data-controlled settings, LST consistently improves speech accuracy while also improving text performance, achieving up to +6.5% absolute gain on speech HellaSwag in compute-controlled training (+5.3% in data-controlled training). Under compute-controlled scaling from 420M to 1.8B parameters in a near compute-optimal regime, gains grow with scale, and improvements persist up to 7B parameters under fixed-token budgets. These benefits extend to downstream tasks: LST stabilizes ASR adaptation and reduces the effective autoregressive sequence length during ASR and TTS inference, lowering computational cost without degrading reconstruction quality. The code is available at this https URL.
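The core idea of the abstract — grouping consecutive speech tokens into coarser latent patches so the autoregressive sequence shrinks toward text-like granularity — can be illustrated with a minimal sketch. This is not the paper's implementation: the fixed patch size, the mean-pooling aggregation, and the function name `aggregate_to_patches` are all illustrative assumptions (LST uses a learned aggregation).

```python
import numpy as np

def aggregate_to_patches(speech_emb: np.ndarray, patch_size: int) -> np.ndarray:
    """Pool consecutive speech-token embeddings into latent patches.

    speech_emb: (seq_len, d_model) embeddings of discretized speech tokens.
    Zero-pads so seq_len divides evenly, then mean-pools each group of
    `patch_size` tokens into one patch vector (illustrative choice only).
    """
    t, d = speech_emb.shape
    pad = (-t) % patch_size
    if pad:
        speech_emb = np.vstack([speech_emb, np.zeros((pad, d))])
    return speech_emb.reshape(-1, patch_size, d).mean(axis=1)

# 100 speech-token embeddings -> 25 latent patches: a 4x shorter
# autoregressive sequence for the transformer to model.
emb = np.random.randn(100, 64)
patches = aggregate_to_patches(emb, patch_size=4)
print(patches.shape)  # (25, 64)
```

The sequence-length reduction is where the compute savings come from: attention cost grows quadratically in sequence length, so a 4x shorter speech sequence cuts its attention cost by roughly 16x.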
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Audio and Speech Processing (eess.AS)
Cite as: arXiv:2510.06195 [cs.CL]
  (or arXiv:2510.06195v2 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2510.06195
arXiv-issued DOI via DataCite

Submission history

From: Yen-Ju Lu [view email]
[v1] Tue, 7 Oct 2025 17:52:08 UTC (2,993 KB)
[v2] Mon, 9 Mar 2026 19:57:30 UTC (2,969 KB)