Rethinking Intrinsic Dimension Estimation in Neural Representations

arXiv cs.LG · April 23, 2026


Key Points

  • The paper examines intrinsic dimension (ID) estimation as a way to analyze neural representations, noting that key limitations in this approach have not been adequately addressed.
  • It identifies a mismatch between theoretical assumptions and real-world practice, showing that widely used ID estimators do not reliably track the representation’s true underlying intrinsic dimension.
  • The authors then investigate which underlying factors may actually be driving the ID-related results commonly reported in the literature, given that the estimators do not track the true ID.
  • Based on these results, the paper proposes a new perspective for how intrinsic dimension should be estimated in neural representations.

Abstract

The analysis of neural representation has become an integral part of research aiming to better understand the inner workings of neural networks. While there are many different approaches to investigate neural representations, an important line of research has focused on doing so through the lens of intrinsic dimensions (IDs). Although this perspective has provided valuable insights and stimulated substantial follow-up research, important limitations of this approach have remained largely unaddressed. In this paper, we highlight a crucial discrepancy between theory and practice of IDs in neural representations, theoretically and empirically showing that common ID estimators are, in fact, not tracking the true underlying ID of the representation. We contrast this negative result with an investigation of the underlying factors that may drive commonly reported ID-related results on neural representation in the literature. Building on these insights, we offer a new perspective on ID estimation in neural representations.
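The paper does not specify the estimators in its abstract, but most common ID estimators for neural representations are nearest-neighbour based. As a concrete (hypothetical for this paper) example, here is a minimal sketch of the widely used TwoNN estimator (Facco et al., 2017), which infers dimension from the ratio of each point's second- to first-nearest-neighbour distance; the data-generating setup below is purely illustrative, not taken from the paper.

```python
import numpy as np

def two_nn_id(X):
    """TwoNN intrinsic-dimension estimate (Facco et al., 2017).

    For each point, let r1, r2 be the distances to its first and second
    nearest neighbours. Under the TwoNN model, mu = r2/r1 follows a
    Pareto distribution with shape d, whose MLE is N / sum(log mu).
    """
    # Squared pairwise distances via the Gram matrix (avoids an N x N x D tensor).
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    d2 = np.maximum(d2, 0.0)        # guard against tiny negative round-off
    np.fill_diagonal(d2, np.inf)    # exclude self-distances

    # Two smallest squared distances per row -> r1^2, r2^2.
    part = np.partition(d2, 1, axis=1)[:, :2]
    mu = np.sqrt(part[:, 1] / part[:, 0])
    return len(X) / np.sum(np.log(mu))

# Illustration: a 2-D manifold isometrically embedded in 50 ambient dimensions.
rng = np.random.default_rng(0)
Z = rng.uniform(size=(500, 2))                  # latent 2-D coordinates
A, _ = np.linalg.qr(rng.normal(size=(50, 2)))   # orthonormal columns -> isometry
X = Z @ A.T                                     # shape (500, 50)
print(round(two_nn_id(X), 2))                   # close to 2, not 50
```

On clean data like this the estimate recovers the manifold dimension despite the high ambient dimension; the paper's point is that on real neural representations such estimators can fail to track the true underlying ID.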