Demystifying When Pruning Works via Representation Hierarchies

arXiv cs.LG, March 27, 2026


Key Points

  • The paper investigates why network pruning reliably preserves performance on some language tasks but often breaks generative tasks, using a representation-hierarchy lens.
  • It decomposes a language model’s internal computation into embedding (hidden representations), logit (pre-softmax outputs), and probability (post-softmax distributions) spaces to localize where pruning perturbations matter.
  • The authors find embedding and logit representations are largely robust to pruning, while the nonlinear logits→probabilities step amplifies deviations and compounds across generation time steps.
  • This mechanism explains why pruning tends to work better for non-generative tasks (e.g., retrieval and multiple-choice selection) where probability-space stability is maintained.
  • By disentangling pruning's effects across task types, the work offers practical guidance for choosing pruning strategies to match the target task, and releases accompanying code.

Abstract

Network pruning, which removes less important parameters or architectural components, is often expected to improve efficiency while preserving performance. However, this expectation does not consistently hold across language tasks: pruned models can perform well on non-generative tasks but frequently fail in generative settings. To understand this discrepancy, we analyze network pruning from a representation-hierarchy perspective, decomposing the internal computation of language models into three sequential spaces: embedding (hidden representations), logit (pre-softmax outputs), and probability (post-softmax distributions). We find that representations in the embedding and logit spaces are largely robust to pruning-induced perturbations. However, the nonlinear transformation from logits to probabilities amplifies these deviations, which accumulate across time steps and lead to substantial degradation during generation. In contrast, the stability of the categorical-token probability subspace, together with the robustness of the embedding space, supports the effectiveness of pruning for non-generative tasks such as retrieval and multiple-choice selection. Our analysis disentangles the effects of pruning across tasks and provides practical guidance for its application. Code is available at https://github.com/CASE-Lab-UMD/Pruning-on-Representations.
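To make the amplification argument concrete, here is a toy numeric sketch (not from the paper; the logit values and perturbation are illustrative, not measured pruning noise) of how a small logit-space deviation can flip the post-softmax argmax. Once the argmax flips, greedy decoding selects a different token, every subsequent step conditions on that token, and the deviation compounds across generation:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Clean logits for four hypothetical tokens; the top two are close
# together, as is common for next-token distributions.
logits = [4.0, 3.9, 1.0, 0.5]

# A small perturbation (magnitude 0.2, ~5% of the top logit) of the
# kind pruning might introduce in logit space.
perturbed = [4.0 - 0.2, 3.9 + 0.2, 1.0, 0.5]

p = softmax(logits)
q = softmax(perturbed)

# The logit-space deviation is tiny, but the post-softmax argmax flips,
# so greedy decoding takes a different branch at this step.
print(max(range(4), key=p.__getitem__))  # 0 with clean logits
print(max(range(4), key=q.__getitem__))  # 1 after the perturbation
```

This also hints at why retrieval and multiple-choice tasks are more forgiving: they typically depend on embedding similarity or a coarse ranking over a handful of options, which the same perturbation leaves intact, rather than on an exact argmax repeated over many time steps.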
