Fundamental Limits of Neural Network Sparsification: Evidence from Catastrophic Interpretability Collapse

arXiv cs.LG / 3/20/2026

Key Points

  • The study examines extreme neural network sparsification (up to 90% activation reduction) and its effect on mechanistic interpretability in a hybrid VAE-SAE architecture.
  • It introduces adaptive sparsity scheduling that reduces active neurons from 500 to 50 over 50 training epochs, revealing a fundamental limit: global representation quality remains stable while local interpretability collapses (a minimal schedule sketch follows this list).
  • Experiments on dSprites and Shapes3D using Top-k and L1 sparsification show substantial dead-neuron rates (34.4% on dSprites and 62.7% on Shapes3D for Top-k at k = 50; 41.7% and 90.6% for L1), with extended training unable to recover dead neurons.
  • The collapse scales with dataset complexity: Shapes3D shows substantially higher dead-neuron rates than dSprites (1.8× under Top-k, 2.2× under L1), indicating the phenomenon is intrinsic to compression rather than an artifact of method, duration, or threshold.
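
The schedule in the second bullet is easy to picture in code. Below is a minimal sketch, assuming a linear decay of k and Top-k masking in PyTorch; `k_schedule` and `topk_mask` are hypothetical names for illustration, not the paper's released code.

```python
# Minimal sketch of an adaptive Top-k sparsity schedule (hypothetical helpers).
import torch


def k_schedule(epoch: int, k_start: int = 500, k_end: int = 50,
               total_epochs: int = 50) -> int:
    """Linearly decay the number of active neurons from k_start to k_end."""
    frac = min(epoch / max(total_epochs - 1, 1), 1.0)
    return round(k_start + frac * (k_end - k_start))


def topk_mask(activations: torch.Tensor, k: int) -> torch.Tensor:
    """Zero out all but the k largest activations in each row (Top-k sparsification)."""
    _, topk_idx = activations.topk(k, dim=-1)
    mask = torch.zeros_like(activations)
    mask.scatter_(-1, topk_idx, 1.0)
    return activations * mask


# Example: batch of 8 latent vectors with 500 units, at epoch 25 of 50.
z = torch.randn(8, 500)
k = k_schedule(epoch=25)               # ~270 active units halfway through training
z_sparse = topk_mask(z, k)
print(k, (z_sparse != 0).sum(dim=-1))  # each row keeps exactly k nonzeros
```

Linear decay is one plausible choice; the summary only specifies the endpoints (500 to 50 over 50 epochs), so the exact interpolation is an assumption here.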

Abstract

Extreme neural network sparsification (90% activation reduction) presents a critical challenge for mechanistic interpretability: understanding whether interpretable features survive aggressive compression. This work investigates feature survival under severe capacity constraints in hybrid Variational Autoencoder–Sparse Autoencoder (VAE-SAE) architectures. We introduce an adaptive sparsity scheduling framework that progressively reduces active neurons from 500 to 50 over 50 training epochs, and provide empirical evidence for fundamental limits of the sparsification-interpretability relationship. Testing across two benchmark datasets, dSprites and Shapes3D, with both Top-k and L1 sparsification methods, our key finding reveals a pervasive paradox: while global representation quality (measured by Mutual Information Gap) remains stable, local feature interpretability collapses systematically. Under Top-k sparsification, dead-neuron rates reach 34.4 ± 0.9% on dSprites and 62.7 ± 1.3% on Shapes3D at k = 50. L1 regularization, a fundamentally different "soft constraint" paradigm, produces equal or worse collapse: 41.7 ± 4.4% on dSprites and 90.6 ± 0.5% on Shapes3D. Extended training for 100 additional epochs fails to recover dead neurons, and the collapse pattern is robust across all tested threshold definitions. Critically, the collapse scales with dataset complexity: Shapes3D (RGB, 6 factors) shows 1.8× more dead neurons than dSprites (grayscale, 5 factors) under Top-k and 2.2× under L1. These findings establish that interpretability collapse under sparsification is intrinsic to the compression process rather than an artifact of any particular algorithm, training duration, or threshold choice.
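
A neuron's "death" is threshold-dependent, which is why the abstract stresses robustness across threshold definitions. Below is a minimal sketch of one plausible dead-neuron metric, assuming a unit counts as dead when its activation magnitude never exceeds a small threshold over an evaluation set; the function name and the default threshold are illustrative, not the paper's definition.

```python
import torch


def dead_neuron_rate(activations: torch.Tensor, threshold: float = 1e-6) -> float:
    """Fraction of latent units whose activation magnitude never exceeds
    `threshold` over the whole evaluation set.

    activations: (num_samples, num_units) post-sparsification latent codes.
    """
    ever_active = (activations.abs() > threshold).any(dim=0)  # (num_units,)
    return 1.0 - ever_active.float().mean().item()


# Example: 10,000 evaluation samples, 500 latent units, 150 of them silenced.
acts = torch.relu(torch.randn(10_000, 500))
acts[:, :150] = 0.0                      # simulate permanently dead units
print(f"dead-neuron rate: {dead_neuron_rate(acts):.1%}")  # -> 30.0%
```

Sweeping `threshold` over several orders of magnitude would reproduce the kind of robustness check the abstract describes.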