
SPAR-K: Scheduled Periodic Alternating Early Exit for Spoken Language Models

arXiv cs.CL / 3/11/2026


Key Points

  • SPAR-K is a novel modality-aware early exit framework designed to speed up inference of interleaved spoken language models (SLMs) by selectively exiting at intermediate transformer layers for speech tokens.
  • The framework uses a speech alternating-depth schedule, allowing most speech positions to exit early while performing periodic full-depth "refresh" steps to prevent distribution shifts caused by early exit.
  • Experiments on Step-Audio-2-mini and GLM-4-Voice across datasets spanning reasoning, factual QA, and dialogue tasks show that SPAR-K reduces average speech decoding depth by up to 11% (Step-Audio-2-mini) and 5% (GLM-4-Voice), with at most a 0.82% accuracy drop and negligible impact on perceptual quality metrics such as MOS and WER.
  • The study also finds that existing confidence-based early exit techniques from text LLMs do not perform well for SLMs, motivating the need for modality-specific early exit designs tailored to speech token statistics.
  • SPAR-K achieves computational efficiency improvements in spoken language generation without auxiliary overhead, making it a promising approach for real-time and resource-constrained speech applications.
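The alternating-depth schedule summarized above can be sketched as a simple per-step depth decision. The following is a hypothetical illustration, not the authors' code: the function name and all parameter values (`exit_layer=16`, `full_depth=32`, `refresh_period=8`) are assumptions chosen for readability.

```python
# Hypothetical sketch of SPAR-K's speech alternating-depth schedule.
# All parameter values are illustrative assumptions, not taken from the paper.

def spar_k_depth(token_modality: str,
                 speech_step: int,
                 exit_layer: int = 16,
                 full_depth: int = 32,
                 refresh_period: int = 8) -> int:
    """Number of transformer layers to run for one decoding step.

    Text tokens always run full depth; speech tokens exit early at a fixed
    intermediate layer, except for periodic full-depth "refresh" steps that
    counteract the distribution shift introduced by early exit.
    """
    if token_modality != "speech":
        return full_depth                  # text tokens are never early-exited
    if speech_step % refresh_period == 0:
        return full_depth                  # periodic full-depth refresh step
    return exit_layer                      # fixed intermediate early exit


# Over 16 consecutive speech steps, steps 0 and 8 run at full depth.
depths = [spar_k_depth("speech", t) for t in range(16)]
print(depths)
```

Because the schedule is fixed rather than confidence-driven, it adds no auxiliary computation at decode time, consistent with the paper's claim of zero overhead.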


arXiv:2603.09215 (cs)
[Submitted on 10 Mar 2026]

Title:SPAR-K: Scheduled Periodic Alternating Early Exit for Spoken Language Models

Authors: Hsiao-Ying Huang and 2 other authors
Abstract:Interleaved spoken language models (SLMs) alternately generate text and speech tokens, but decoding at full transformer depth for every step becomes costly, especially due to long speech sequences. We propose SPAR-K, a modality-aware early exit framework designed to accelerate interleaved SLM inference while preserving perceptual quality. SPAR-K introduces a speech alternating-depth schedule: most speech positions exit at a fixed intermediate layer, while periodic full-depth "refresh" steps mitigate distribution shift due to early exit. We evaluate our framework using Step-Audio-2-mini and GLM-4-Voice across four datasets spanning reasoning, factual QA, and dialogue tasks, measuring performance in terms of ASR transcription accuracy and perceptual quality. Experimental results demonstrate that SPAR-K largely preserves question-answering accuracy with a maximum accuracy drop of 0.82\% while reducing average speech decoding depth by up to 11\% on Step-Audio-2-mini and 5\% on GLM-4-Voice, both with negligible changes in MOS and WER and no auxiliary computation overhead. We further demonstrate that confidence-based early exit strategies, widely used in text LLMs, are suboptimal for SLMs, highlighting that the unique statistical nature of speech tokens necessitates a specialized early exit design.
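The depth savings reported in the abstract follow arithmetically from the schedule's parameters: one full-depth refresh per period, early exit otherwise. Below is a small illustration of that relationship; the layer counts and refresh period are assumptions, not values from the paper.

```python
# Illustrative arithmetic only: parameter values are assumptions, not the
# paper's actual configuration.

def average_speech_depth(exit_layer: int, full_depth: int,
                         refresh_period: int) -> float:
    """Average layers per speech step: one full-depth refresh each period,
    the remaining (refresh_period - 1) steps exit at `exit_layer`."""
    return (full_depth + (refresh_period - 1) * exit_layer) / refresh_period


full = 32  # hypothetical total transformer depth
avg = average_speech_depth(exit_layer=28, full_depth=full, refresh_period=8)
saving = 1 - avg / full
print(f"average depth {avg:.2f} layers, {saving:.1%} saving")
```

With these illustrative values the schedule yields roughly a 10-11% reduction in average speech decoding depth, showing how a modest-looking schedule (late exit layer, occasional refresh) can produce savings of the magnitude the paper reports.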
Subjects: Computation and Language (cs.CL); Audio and Speech Processing (eess.AS)
Cite as: arXiv:2603.09215 [cs.CL]
  (or arXiv:2603.09215v1 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2603.09215

Submission history

From: Hsiao-Ying Huang
[v1] Tue, 10 Mar 2026 05:39:03 UTC (1,387 KB)