Mix-and-Match Pruning: Globally Guided Layer-Wise Sparsification of DNNs

arXiv cs.CV / 3/24/2026


Key Points

  • The paper proposes “Mix-and-Match Pruning,” a globally guided, layer-wise sparsification framework tailored for compressing deep neural networks for edge deployment with minimal accuracy loss.
  • It combines sensitivity scoring (e.g., magnitude, gradient, or both) with architecture-aware sparsity rules to handle the fact that different layers respond differently to pruning.
  • Mix-and-Match generates diverse, high-quality pruning configurations by deriving architecture-aware sparsity ranges (for example, keeping normalization layers while pruning classifiers more aggressively).
  • By systematically sampling these sparsity ranges, the method produces multiple pruning strategies per sensitivity signal without requiring repeated pruning runs.
  • Experiments on CNNs and Vision Transformers—including Swin-Tiny—show improved accuracy-sparsity Pareto behavior, with up to a 40% reduction in accuracy degradation versus standard single-criterion pruning.

Abstract

Deploying deep neural networks (DNNs) on edge devices requires strong compression with minimal accuracy loss. This paper introduces Mix-and-Match Pruning, a globally guided, layer-wise sparsification framework that leverages sensitivity scores and simple architectural rules to generate diverse, high-quality pruning configurations. The framework addresses a key limitation that different layers and architectures respond differently to pruning, making single-strategy approaches suboptimal. Mix-and-Match derives architecture-aware sparsity ranges, e.g., preserving normalization layers while pruning classifiers more aggressively, and systematically samples these ranges to produce ten strategies per sensitivity signal (magnitude, gradient, or their combination). This eliminates repeated pruning runs while offering deployment-ready accuracy-sparsity trade-offs. Experiments on CNNs and Vision Transformers demonstrate Pareto-optimal results, with Mix-and-Match reducing accuracy degradation on Swin-Tiny by 40% relative to standard single-criterion pruning. These findings show that coordinating existing pruning signals enables more reliable and efficient compressed models than introducing new criteria.
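The sampling procedure described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the layer types, sparsity ranges, and function names below are assumptions made for the example, and only the overall scheme (architecture-aware ranges sampled once per sensitivity signal, with no repeated pruning runs) follows the text.

```python
import random

# Hypothetical architecture-aware sparsity ranges per layer type.
# The specific numbers are illustrative assumptions; the paper only
# states that normalization layers are preserved while classifier
# layers are pruned more aggressively.
SPARSITY_RANGES = {
    "norm": (0.0, 0.0),        # keep normalization layers intact
    "attention": (0.2, 0.6),
    "mlp": (0.3, 0.7),
    "classifier": (0.5, 0.9),  # prune the classifier head more aggressively
}

def sample_strategies(layer_types,
                      signals=("magnitude", "gradient", "combined"),
                      n_per_signal=10, seed=0):
    """Sample layer-wise sparsity configurations for each sensitivity signal.

    Returns a list of (signal, {layer_name: sparsity}) pairs -- one
    candidate pruning strategy per sample, generated without running
    any actual pruning.
    """
    rng = random.Random(seed)
    strategies = []
    for signal in signals:
        for _ in range(n_per_signal):
            config = {
                name: round(rng.uniform(*SPARSITY_RANGES[ltype]), 2)
                for name, ltype in layer_types.items()
            }
            strategies.append((signal, config))
    return strategies

# Example: a toy transformer-style layer map (names are illustrative).
layers = {"ln1": "norm", "attn": "attention", "ffn": "mlp", "head": "classifier"}
strats = sample_strategies(layers)
print(len(strats))  # 3 signals x 10 samples = 30 candidate strategies
```

Each candidate configuration would then be scored (e.g., by accuracy after one-shot pruning) and the best points kept, yielding the accuracy-sparsity trade-off curve the paper reports.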