From Local to Global: Revisiting Structured Pruning Paradigms for Large Language Models

arXiv cs.CL / 4/29/2026


Key Points

  • The paper argues that commonly used local, task-agnostic structured pruning for LLMs often misses modest task-specific calibration signals, limiting downstream improvements despite preserving generic behavior.
  • It proposes GISP (Global Iterative Structured Pruning), a post-training method that computes first-order, loss-based importance scores aggregated at the level of attention heads and MLP channels.
  • GISP uses an iterative (not one-shot) pruning schedule to stabilize accuracy at higher sparsity and to mitigate perplexity collapse without intermediate fine-tuning.
  • The method produces nested subnetworks that enable a “prune-once, deploy-many” workflow and can directly target task-specific loss functions for easier adaptation across objectives.
  • Experiments across several open LLMs (e.g., Llama2/3, Mistral, DeepSeek, Qwen) show consistent perplexity reductions and downstream accuracy gains, with especially strong results around 40–50% sparsity.
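The first-order, loss-based importance score from the key points can be illustrated with a small numpy sketch. This is a hedged approximation of the idea (Taylor-style importance |gradient × weight| summed per attention head, then normalized within a block); the function names, the sum-to-one normalization, and the exact aggregation are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def head_importance(weights, grads, n_heads):
    # First-order (Taylor) importance: |g * w| per parameter, then
    # summed over all parameters belonging to each attention head.
    # weights/grads: a projection matrix whose rows are grouped by head.
    per_param = np.abs(weights * grads)
    return per_param.reshape(n_heads, -1).sum(axis=1)

def blockwise_normalize(scores):
    # Normalize scores within one transformer block so that structures
    # from different blocks are comparable under a single global ranking.
    # (Sum-to-one normalization is an illustrative choice; the paper's
    # exact normalization may differ.)
    return scores / scores.sum()
```

With a global (rather than layer-local) ranking of these normalized scores, heads and MLP channels compete for removal across the whole network instead of within a single layer.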

Abstract

Structured pruning is a practical approach to deploying large language models (LLMs) efficiently, as it yields compact, hardware-friendly architectures. However, the dominant local paradigm is task-agnostic: by optimizing layer-wise reconstruction rather than task objectives, it tends to preserve perplexity or generic zero-shot behavior but fails to capitalize on modest task-specific calibration signals, often yielding limited downstream gains. We revisit global structured pruning and present GISP, Global Iterative Structured Pruning, a post-training method that removes attention heads and MLP channels using first-order, loss-based importance scores aggregated at the structure level with block-wise normalization. Built on this global importance metric, GISP adopts an iterative schedule rather than one-shot pruning, which stabilizes accuracy at higher sparsity and mitigates perplexity collapse without requiring intermediate fine-tuning. Importantly, the iterative pruning forms nested subnetworks that support a "prune-once, deploy-many" workflow. Furthermore, GISP defines structural importance directly with respect to a target loss, making it easy to adapt pruning to task-specific objectives. In this work, we use perplexity for language modeling and a margin-based objective for decision-style tasks. Extensive experiments show that across Llama2-7B/13B, Llama3-8B, and Mistral-0.3-7B, GISP consistently lowers WikiText-2 perplexity and improves downstream accuracy, with especially strong gains at 40-50% sparsity; on DeepSeek-R1-Distill-Llama-3-8B and Qwen3-8B with GSM8K, task-aligned calibration substantially boosts exact-match accuracy. The implementation is available at https://github.com/uncc-efficient-ai/GISP.
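The iterative schedule and the nested "prune-once, deploy-many" property can be sketched as follows. This is a minimal illustration under simplifying assumptions: scores are held fixed across steps (a real pipeline would recompute importance on the current subnetwork), and structures are ranked globally by score. The function name and the linear sparsity ramp are assumptions, not the paper's exact schedule.

```python
import numpy as np

def iterative_global_prune(scores, target_sparsity, n_steps):
    # Prune toward target_sparsity over n_steps, each step removing the
    # globally least-important remaining structures. Because the ranking
    # is shared across steps, each step's kept set is a subset of the
    # previous one, yielding nested subnetworks: one pruning run gives a
    # family of models at increasing sparsity levels.
    n = len(scores)
    order = np.argsort(scores)  # ascending importance
    masks = []
    for step in range(1, n_steps + 1):
        n_prune = int(round(target_sparsity * n * step / n_steps))
        keep = np.ones(n, dtype=bool)
        keep[order[:n_prune]] = False  # drop the lowest-scored structures
        masks.append(keep)
    return masks
```

Each boolean mask describes one deployable subnetwork; because the masks are nested, a single pruning run can serve several deployment budgets.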