Rethinking Layer Redundancy in Large Language Models: Calibration Objectives and Search for Depth Pruning

arXiv cs.LG · 29 Apr 2026


Key Points

  • The paper studies depth pruning for large language models by challenging the idea that “redundant layers” are an inherent structural property of pretrained networks.
  • It proposes a functional perspective in which redundancy depends on both the model and the evaluation (calibration) objective, implying that a universal layer ranking may not work across settings.
  • Experiments across three LLM families, two calibration objectives, and seven search algorithms find that different objectives lead to qualitatively different sets of redundant layers.
  • It also reports that layer rankings based on perplexity and those based on downstream accuracy do not consistently agree; within a fixed objective, however, different search algorithms converge on similar pruning outcomes.
  • The findings suggest that designing the calibration/evaluation objective may be more influential than selecting the search algorithm, motivating further work on objective design for pruning.

Abstract

Depth pruning improves the inference efficiency of large language models by removing Transformer blocks. Prior work has focused on importance criteria and search algorithms, often treating layer redundancy as an inherent structural property of pretrained networks. In contrast, we adopt a "functional perspective," where redundancy is jointly influenced by the model and the evaluation objective, suggesting that a universal ranking may not be sufficient. Through an empirical study across three LLM families, two calibration objectives, and seven search algorithms, we observe that different objectives yield qualitatively different redundant layers, and that perplexity and downstream accuracy rankings do not consistently align. Under a fixed objective, however, search algorithms tend to produce similar solutions. Overall, our results suggest that the calibration objective may play a more influential role than the choice of search algorithm, indicating that further attention to objective design could be beneficial.