
Speech Codec Probing from Semantic and Phonetic Perspectives

arXiv cs.CL / 3/12/2026

Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper analyzes speech tokenizers to disentangle semantic and phonetic content, using word-level probing tasks, layerwise representation analysis, and cross-modal alignment metrics such as centered kernel alignment (CKA); a minimal probe sketch follows this list.
  • It finds that current tokenizers primarily capture phonetic information rather than lexical-semantic structure.
  • This semantic-phonetic mismatch can degrade multimodal LLM performance when downstream systems assume the tokenizer's "semantic" content aligns with text-derived semantics.
  • The work outlines practical implications for designing next-generation speech tokenization methods that better encode lexical semantics and improve cross-modal alignment.
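
The paper's exact probing setup is not given here, but word-level probing conventionally means training a small classifier on frozen representations and comparing how well semantic versus phonetic labels can be predicted. The sketch below follows that standard recipe; the names `probe_accuracy`, `speech_feats`, `semantic_labels`, and `phonetic_labels` are illustrative placeholders, not identifiers from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def probe_accuracy(features: np.ndarray, labels: np.ndarray) -> float:
    """Train a linear probe on frozen representations; report held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.2, random_state=0, stratify=labels)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return accuracy_score(y_te, probe.predict(X_te))

# Hypothetical usage: `speech_feats` holds pooled tokenizer embeddings, one row
# per spoken word. If the phonetic probe scores much higher than the semantic
# one, the representation encodes sound structure rather than meaning.
# sem_acc  = probe_accuracy(speech_feats, semantic_labels)   # e.g. word category
# phon_acc = probe_accuracy(speech_feats, phonetic_labels)   # e.g. initial phoneme
```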

Abstract

Speech tokenizers are essential for connecting speech to large language models (LLMs) in multimodal systems. These tokenizers are expected to preserve both semantic and acoustic information for downstream understanding and generation. However, emerging evidence suggests that what is termed "semantic" in speech representations does not align with text-derived semantics, a mismatch that can degrade multimodal LLM performance. In this paper, we systematically analyze the information encoded by several widely used speech tokenizers, disentangling their semantic and phonetic content through word-level probing tasks, layerwise representation analysis, and cross-modal alignment metrics such as CKA. Our results show that current tokenizers primarily capture phonetic rather than lexical-semantic structure, and we derive practical implications for the design of next-generation speech tokenization methods.
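
For readers unfamiliar with the alignment metric: linear CKA (Kornblith et al., 2019) scores how similar two representation spaces are, independent of rotation and isotropic scaling, which makes it a natural tool for comparing speech-token embeddings against text embeddings of the same words. A minimal NumPy sketch of linear CKA follows; the inputs `X` and `Y` stand in for paired speech and text feature matrices, and how the paper extracts them is not specified here.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two representation matrices.

    X: (n_samples, d1) feature matrix, e.g. speech-tokenizer embeddings
    Y: (n_samples, d2) feature matrix, e.g. text embeddings of the same items
    """
    # Column-center both matrices; CKA is defined on centered features.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F), in [0, 1].
    cross  = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)
```

Computing this layer by layer against a text encoder is one way to obtain the kind of layerwise alignment profile the abstract describes: a tokenizer that captured lexical semantics would show high CKA with text embeddings, whereas a phonetically dominated one would not.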