Do We Need Distinct Representations for Every Speech Token? Unveiling and Exploiting Redundancy in Large Speech Language Models

arXiv cs.CL / 4/9/2026


Key Points

  • The paper argues that LSLMs over-process speech by using high-rate token representations that make sequences much longer than the underlying semantic content, driving high inference costs.
  • Using layer-wise “oracle interventions,” the authors find a redundancy hierarchy: shallow layers need fine acoustic detail, while deeper layers contain extreme redundancy that can be compressed.
  • They propose Affinity Pooling, a training-free, similarity-based token merging method that compresses speech representations at input and deep layers while preserving semantic information.
  • Experiments on three tasks show efficiency gains, including a 27.48% reduction in prefilling FLOPs with competitive accuracy, and deployment results of up to ~1.7× memory savings and ~1.1× faster time-to-first-token for long utterances.
  • The work challenges the assumption that distinct representations are required for every speech token and offers new directions for improving LSLM efficiency.

Abstract

Large Speech Language Models (LSLMs) typically operate at high token rates (tokens/s) to ensure acoustic fidelity, yet this results in sequence lengths that far exceed the underlying semantic content, incurring prohibitive inference costs. In this paper, we empirically revisit the necessity of such granular token-level processing. Through layer-wise oracle interventions, we unveil a structured redundancy hierarchy: while shallow layers encode essential acoustic details, deep layers exhibit extreme redundancy, allowing for aggressive compression. Motivated by these findings, we introduce Affinity Pooling, a training-free, similarity-based token merging mechanism. By strategically applying this method at both input and deep layers, we effectively compress speech representations without compromising semantic information. Extensive evaluations across three tasks demonstrate that our approach reduces prefilling FLOPs by 27.48% while maintaining competitive accuracy. Practical deployment further confirms significant efficiency gains, yielding up to ~1.7× memory savings and ~1.1× faster time-to-first-token on long utterances. Our results challenge the necessity of fully distinct token representations, providing new perspectives on LSLM efficiency.
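To make the idea of training-free, similarity-based token merging concrete, here is a minimal sketch. This is not the paper's implementation: the exact affinity measure, merging scope, and thresholding are not given in this summary, so the cosine-similarity criterion, the greedy adjacent-merge strategy, and the `tau` threshold below are all illustrative assumptions.

```python
import numpy as np

def affinity_pool(tokens: np.ndarray, tau: float = 0.9) -> np.ndarray:
    """Greedy merging of adjacent token vectors by similarity.

    tokens: [T, D] array of speech token representations.
    tau: cosine-similarity threshold (hypothetical parameter; the
         paper's actual merging criterion may differ).

    Adjacent tokens whose cosine similarity exceeds tau are merged by
    mean pooling, shortening the sequence with no training required.
    """
    merged = [tokens[0].copy()]  # pooled output sequence
    counts = [1]                 # how many tokens each slot absorbed
    for t in tokens[1:]:
        prev = merged[-1]
        sim = float(prev @ t / (np.linalg.norm(prev) * np.linalg.norm(t) + 1e-8))
        if sim > tau:
            # fold t into the current slot via a running mean,
            # so the slot stays a true average of its members
            counts[-1] += 1
            merged[-1] = prev + (t - prev) / counts[-1]
        else:
            merged.append(t.copy())
            counts.append(1)
    return np.stack(merged)
```

Applied at the input, this shortens the sequence every downstream layer must process; applied again at deep layers (where the paper reports extreme redundancy), it would compress further where, per the oracle-intervention findings, fine acoustic detail is no longer needed.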