Self-Supervised Speech Models Encode Phonetic Context via Position-dependent Orthogonal Subspaces

arXiv cs.CL / 3/16/2026

Key Points

  • The paper analyzes how a single frame-level S3M representation encodes phonetic context by showing that vectors corresponding to previous, current, and next phones are superposed within one frame.
  • It extends prior work, which showed that isolated phones are encoded compositionally, by demonstrating that phonological information from the surrounding sequence of phones is likewise composed into a single frame (see the sketch after this list).
  • The study reveals orthogonality between relative positions (previous, current, next) and the emergence of implicit phonetic boundaries within frame representations.
  • These results advance our understanding of context-dependent representations in transformer-based self-supervised speech models and may inform future modeling and evaluation of ASR systems.
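
As a rough illustration of the compositional-superposition claim, the sketch below fits one vector per (relative position, phone) pair to frame representations by least squares and reports how much variance their sum explains. This is a hypothetical probe written for this summary, not the paper's actual method; all names (`fit_position_vectors`, `prev_ids`, and so on) are invented, and real use would require S3M frames plus forced-alignment phone labels.

```python
import numpy as np

def fit_position_vectors(X, prev_ids, cur_ids, next_ids, n_phones):
    """Least-squares probe for position-dependent superposition.

    X        : (N, D) frame-level S3M representations
    *_ids    : (N,) integer phone labels for the previous/current/next phone
    n_phones : size of the phone inventory

    Fits X ~ A @ V, where A one-hot-encodes the three phone labels and V
    stacks one vector per (relative position, phone) pair.
    """
    N = X.shape[0]
    A = np.zeros((N, 3 * n_phones))
    A[np.arange(N), prev_ids] = 1.0                 # previous-phone block
    A[np.arange(N), n_phones + cur_ids] = 1.0       # current-phone block
    A[np.arange(N), 2 * n_phones + next_ids] = 1.0  # next-phone block
    V, *_ = np.linalg.lstsq(A, X, rcond=None)       # (3 * n_phones, D)
    v_prev, v_cur, v_next = np.split(V, 3, axis=0)
    resid = X - A @ V
    r2 = 1.0 - (resid ** 2).sum() / ((X - X.mean(0)) ** 2).sum()
    return v_prev, v_cur, v_next, r2

# Toy run on random data (so R^2 stays near zero); real S3M frames with
# aligned phone labels should reconstruct far better if the claim holds.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 64))
prev_ids, cur_ids, next_ids = rng.integers(0, 40, size=(3, 2000))
*_, r2 = fit_position_vectors(X, prev_ids, cur_ids, next_ids, n_phones=40)
print(f"reconstruction R^2: {r2:.3f}")
```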

Abstract

Transformer-based self-supervised speech models (S3Ms) are often described as contextualized, yet what this entails remains unclear. Here, we focus on how a single frame-level S3M representation can encode phones and their surrounding context. Prior work has shown that S3Ms represent phones compositionally; for example, phonological vectors such as voicing, bilabiality, and nasality vectors are superposed in the S3M representation of [m]. We extend this view by proposing that phonological information from a sequence of neighboring phones is also compositionally encoded in a single frame, such that vectors corresponding to previous, current, and next phones are superposed within a single frame-level representation. We show that this structure has several properties, including orthogonality between relative positions, and emergence of implicit phonetic boundaries. Together, our findings advance our understanding of context-dependent S3M representations.
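
To make the orthogonality and boundary claims concrete, here is a minimal self-contained probe, again an assumption rather than the paper's own analysis: `subspace_cosines` computes cosines of the principal angles between two position subspaces (values near zero mean near-orthogonal), and `boundary_scores` flags candidate implicit boundaries as cosine distance between consecutive frames.

```python
import numpy as np

def subspace_cosines(U, W):
    """Cosines of the principal angles between span(rows of U) and
    span(rows of W); values near zero indicate near-orthogonal subspaces."""
    Qu, _ = np.linalg.qr(U.T)  # orthonormal basis for U's row space
    Qw, _ = np.linalg.qr(W.T)
    return np.linalg.svd(Qu.T @ Qw, compute_uv=False)

def boundary_scores(X):
    """Cosine distance between consecutive frames; peaks mark candidate
    implicit phone boundaries (a heuristic probe, not the paper's method)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return 1.0 - (Xn[:-1] * Xn[1:]).sum(axis=1)

# Sanity check on random data: random low-dimensional subspaces embedded
# in a high-dimensional space are already close to orthogonal.
rng = np.random.default_rng(0)
U, W = rng.normal(size=(2, 5, 256))
print("max principal-angle cosine:", subspace_cosines(U, W).max().round(3))
print("boundary scores:", boundary_scores(rng.normal(size=(6, 256))).round(3))
```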