Beyond Decodability: Reconstructing Language Model Representations with an Encoding Probe

arXiv cs.CL / 5/4/2026


Key Points

  • The paper proposes an “Encoding Probe” that reconstructs a model’s internal representations from interpretable features, addressing limitations of standard decoding probes (a minimal sketch follows this list).
  • Unlike typical decoding probes, the method enables direct comparison of how different feature sets contribute and mitigates confounds from correlated features.
  • Experiments on text and speech transformer models evaluate feature sets spanning acoustics, phonetics, syntax, lexicon, and speaker identity.
  • Findings indicate speaker-related effects differ substantially across training objectives and datasets, while syntactic and lexical features each contribute independently to reconstruction.
  • Overall, the Encoding Probe offers a complementary approach to interpreting language model representations beyond simple decodability.
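
To make the encoding direction concrete, here is a minimal sketch, assuming interpretable features and model hidden states have already been extracted and aligned per token. The variable names, the synthetic data, and the choice of a ridge regressor are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_tokens, n_features, d_model = 2000, 50, 768

# Interpretable features (e.g. syntactic, lexical) and the hidden states
# they should reconstruct; synthetic data stands in for real extractions.
X = rng.normal(size=(n_tokens, n_features))
H = X @ rng.normal(size=(n_features, d_model)) \
    + 0.5 * rng.normal(size=(n_tokens, d_model))

X_tr, X_te, H_tr, H_te = train_test_split(X, H, test_size=0.2, random_state=0)

# Encoding direction: predict representations FROM features,
# rather than decoding features from representations.
probe = Ridge(alpha=1.0).fit(X_tr, H_tr)
H_hat = probe.predict(X_te)

# Reconstruction quality: per-dimension R^2, averaged into one score.
ss_res = ((H_te - H_hat) ** 2).sum(axis=0)
ss_tot = ((H_te - H_te.mean(axis=0)) ** 2).sum(axis=0)
print(f"mean reconstruction R^2: {np.mean(1 - ss_res / ss_tot):.3f}")
```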

Abstract

Probing is widely used to study which features can be decoded from language model representations. However, the standard decoding probe has two limitations: the contributions of different features to model representations cannot be directly compared, and correlations between features can confound probing results. We present an Encoding Probe that reverses the direction of analysis, reconstructing a model's internal representations from interpretable features. We evaluate this method on text and speech transformer models, using feature sets spanning acoustics, phonetics, syntax, lexicon, and speaker identity. Our results suggest that speaker-related effects vary strongly across training objectives and datasets, while syntactic and lexical features contribute independently to reconstruction. Together, these results show that the Encoding Probe provides a complementary perspective on interpreting model representations beyond decodability.
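
The abstract's claim that feature contributions become directly comparable suggests a nested-model comparison: fit the probe with and without each feature set, and read off the unique reconstruction gain each set provides. The sketch below illustrates one way such a comparison could work; the helper names, the ablation scheme, and the synthetic correlated features are hypothetical stand-ins, not the paper's reported method.

```python
import numpy as np
from sklearn.linear_model import Ridge

def mean_r2(X_tr, H_tr, X_te, H_te, alpha=1.0):
    """Mean per-dimension R^2 of a ridge encoding probe."""
    H_hat = Ridge(alpha=alpha).fit(X_tr, H_tr).predict(X_te)
    ss_res = ((H_te - H_hat) ** 2).sum(axis=0)
    ss_tot = ((H_te - H_te.mean(axis=0)) ** 2).sum(axis=0)
    return float(np.mean(1 - ss_res / ss_tot))

def unique_contributions(feats, H, train_idx, test_idx):
    """feats: dict of name -> (n_samples, k) feature matrix (>= 2 sets)."""
    full = np.hstack(list(feats.values()))
    r2_full = mean_r2(full[train_idx], H[train_idx],
                      full[test_idx], H[test_idx])
    for name in feats:
        # Ablate one feature set; the R^2 drop is its unique contribution.
        ablated = np.hstack([F for k, F in feats.items() if k != name])
        r2_abl = mean_r2(ablated[train_idx], H[train_idx],
                         ablated[test_idx], H[test_idx])
        print(f"{name}: unique R^2 = {r2_full - r2_abl:.3f}")

# Synthetic demo with deliberately correlated feature sets.
rng = np.random.default_rng(1)
n = 1500
syntax = rng.normal(size=(n, 20))
lexical = 0.6 * syntax + rng.normal(size=(n, 20))
H = (syntax @ rng.normal(size=(20, 64))
     + lexical @ rng.normal(size=(20, 64))
     + 0.5 * rng.normal(size=(n, 64)))
idx = rng.permutation(n)
unique_contributions({"syntax": syntax, "lexical": lexical},
                     H, idx[:1200], idx[1200:])
```

In this sketch, each set's unique score is measured against the same full model, so two correlated sets cannot both claim their shared variance, which is the confound that the decoding setup struggles to separate.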