AI Navigate

Statistical and structural identifiability in representation learning

arXiv cs.LG / 3/13/2026


Key Points

  • The paper formalizes statistical identifiability and structural identifiability in representation learning, introducing near-identifiability up to an error tolerance ε.
  • It gives model-agnostic definitions and proves a statistical ε-near-identifiability result for models with nonlinear decoders, extending identifiability theory beyond last-layer representations to the intermediate representations of (masked) autoencoders (MAEs) and supervised learners.
  • Independent component analysis (ICA) can resolve much of the remaining linear ambiguity for this class of models, enabling disentanglement of latent representations by simple post-processing.
  • With additional assumptions on the data-generating process, statistical identifiability extends to structural identifiability, yielding a practical recipe for disentanglement: ICA in latent spaces (see the sketch after this list).
  • Empirically, the approach achieves state-of-the-art disentanglement on synthetic benchmarks with a vanilla autoencoder, and on a foundation-model-scale MAE for cell microscopy it separates biological variation from technical batch effects, substantially improving downstream generalization.
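
The recipe in the last two bullets is simple enough to sketch end to end. Below is a minimal, self-contained Python illustration (not the paper's code): it simulates latent codes that are correct only up to an unknown invertible linear map, the ambiguity that statistical near-identifiability leaves unresolved, and applies scikit-learn's FastICA to recover the independent factors up to permutation and sign.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samples, d_latent = 10_000, 5

# Ground-truth independent, non-Gaussian factors (stand-ins for, e.g.,
# biological variation versus technical batch effects).
S = rng.laplace(size=(n_samples, d_latent))

# An unknown invertible linear map: the residual linear ambiguity.
# In practice Z would be the latent codes of a trained (masked)
# autoencoder rather than a simulation.
A = rng.normal(size=(d_latent, d_latent))
Z = S @ A.T

# ICA post-processing: estimate unmixed factors from the latent codes.
ica = FastICA(n_components=d_latent, whiten="unit-variance", random_state=0)
S_hat = ica.fit_transform(Z)

# The cross-correlation between true and recovered factors should be close
# to a signed permutation matrix: one entry near 1 in each row and column.
C = np.corrcoef(S.T, S_hat.T)[:d_latent, d_latent:]
print(np.round(np.abs(C), 2))

On real encoder outputs the same FastICA call applies unchanged; the paper's claim, per the abstract, is that under its assumptions those outputs are already within ε of a linear transform of the underlying factors, which is the regime where ICA can undo the mixing.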

Abstract

Representation learning models exhibit a surprising stability in their internal representations. Whereas most prior work treats this stability as a single property, we formalize it as two distinct concepts: statistical identifiability (consistency of representations across runs) and structural identifiability (alignment of representations with some unobserved ground truth). Recognizing that perfect pointwise identifiability is generally unrealistic for modern representation learning models, we propose new model-agnostic definitions of statistical and structural near-identifiability of representations up to some error tolerance ε. Leveraging these definitions, we prove a statistical ε-near-identifiability result for the representations of models with nonlinear decoders, generalizing existing identifiability theory beyond last-layer representations in, e.g., generative pre-trained transformers (GPTs) to near-identifiability of the intermediate representations of a broad class of models, including (masked) autoencoders (MAEs) and supervised learners. Although these weaker assumptions confer weaker identifiability, we show that independent component analysis (ICA) can resolve much of the remaining linear ambiguity for this class of models, and we validate and measure our near-identifiability claims empirically. With additional assumptions on the data-generating process, statistical identifiability extends to structural identifiability, yielding a simple and practical recipe for disentanglement: ICA post-processing of latent representations. On synthetic benchmarks, this approach achieves state-of-the-art disentanglement using a vanilla autoencoder. With a foundation-model-scale MAE for cell microscopy, it disentangles biological variation from technical batch effects, substantially improving downstream generalization.
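
To make the two notions concrete, here is one plausible way to formalize them, written in LaTeX. This is an illustration consistent with the abstract, not the paper's verbatim definitions: f_1, f_2 denote encoders learned in two training runs, g the unobserved ground-truth factors, and d the latent dimension.

% Illustrative formalization only; the paper's exact definitions may differ.
% Statistical \epsilon-near-identifiability: two runs agree up to a linear map.
\exists A \in \mathrm{GL}(d):\quad
  \mathbb{E}_{x \sim p}\left\| f_1(x) - A f_2(x) \right\| \le \epsilon

% Structural \epsilon-near-identifiability: a learned representation agrees
% with the ground-truth factors up to a linear map.
\exists B \in \mathrm{GL}(d):\quad
  \mathbb{E}_{x \sim p}\left\| f(x) - B g(x) \right\| \le \epsilon

Under this reading, ICA post-processing shrinks the ambiguity class from a general invertible map to a signed permutation, which is why the recipe above can yield disentangled factors.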