Prune-Quantize-Distill: An Ordered Pipeline for Efficient Neural Network Compression

arXiv cs.AI / 4/8/2026


Key Points

  • The paper argues that common neural network compression proxies (e.g., parameter count or FLOPs) often fail to predict real CPU wall-clock latency, especially for unstructured sparsity due to irregular memory access and sparse-kernel overhead.
  • It proposes an ordered compression pipeline—unstructured pruning first, INT8 quantization-aware training second, and knowledge distillation last—explicitly targeting measured latency under CPU and memory constraints.
  • Experiments indicate that INT8 QAT delivers the main runtime benefit, pruning mainly acts as a capacity-reduction pre-conditioner that makes the subsequent low-precision optimization more robust, and KD restores accuracy while keeping the deployed sparse INT8 form unchanged.
  • Across CIFAR-10/100 with ResNet-18, WRN-28-10, and VGG-16-BN, the pipeline achieves a better accuracy–size–latency trade-off than any single technique, reaching about 0.99–1.42 ms CPU latency with competitive accuracy and compact checkpoints.
  • Ordering matters: ablation studies with a fixed 20/40/40 epoch allocation show that the chosen prune → quantize → distill order generally outperforms the other tested permutations, yielding a practical guideline for edge deployment: evaluate compression choices in the joint accuracy–size–latency space using measured runtime rather than proxy metrics.
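To make the first two pipeline stages concrete, here is a minimal, hypothetical sketch of unstructured magnitude pruning followed by symmetric per-tensor INT8 quantization on a toy weight list. The function names, the 50% sparsity level, and the toy weights are illustrative assumptions, not details from the paper; a real pipeline would operate on framework tensors and include quantization-aware training.

```python
# Illustrative sketch (not the paper's implementation) of two pipeline stages:
# 1) unstructured magnitude pruning, 2) symmetric per-tensor INT8 quantization.

def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

def quantize_int8(weights):
    """Symmetric quantization: w ~= scale * q with q an integer in [-127, 127]."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

w = [0.8, -0.05, 0.3, 0.01, -0.6, 0.02]
sparse = prune_by_magnitude(w, sparsity=0.5)  # half the weights become exact zeros
q, scale = quantize_int8(sparse)              # pruned zeros stay zero in INT8
dequant = [scale * v for v in q]              # reconstruction for accuracy checks
```

Note that pruning first means the zeros survive quantization exactly (0 maps to 0 under symmetric quantization), which is one reason the stage order can matter.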

Abstract

Modern deployment often requires trading accuracy for efficiency under tight CPU and memory constraints, yet common compression proxies such as parameter count or FLOPs do not reliably predict wall-clock inference time. In particular, unstructured sparsity can reduce model storage while failing to accelerate (and sometimes slightly slowing down) standard CPU execution due to irregular memory access and sparse kernel overhead. Motivated by this gap between compression and acceleration, we study a practical, ordered pipeline that targets measured latency by combining three widely used techniques: unstructured pruning, INT8 quantization-aware training (QAT), and knowledge distillation (KD). Empirically, INT8 QAT provides the dominant runtime benefit, while pruning mainly acts as a capacity-reduction pre-conditioner that improves the robustness of subsequent low-precision optimization; KD, applied last, recovers accuracy within the already constrained sparse INT8 regime without changing the deployment form. We evaluate on CIFAR-10/100 using three backbones (ResNet-18, WRN-28-10, and VGG-16-BN). Across all settings, the ordered pipeline achieves a stronger accuracy-size-latency frontier than any single technique alone, reaching 0.99-1.42 ms CPU latency with competitive accuracy and compact checkpoints. Controlled ordering ablations with a fixed 20/40/40 epoch allocation further confirm that stage order is consequential, with the proposed ordering generally performing best among the tested permutations. Overall, our results provide a simple guideline for edge deployment: evaluate compression choices in the joint accuracy-size-latency space using measured runtime, rather than proxy metrics alone.
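The abstract's central methodological point, evaluating against measured wall-clock latency instead of proxies such as FLOPs, can be sketched as a simple benchmarking routine. The helper below is an assumed illustration, not the paper's harness: it times a stand-in workload with warmup iterations and reports the median, the usual precautions when measuring CPU latency.

```python
# Illustrative sketch: measure CPU wall-clock latency directly (warmup + median)
# instead of relying on proxy metrics such as parameter count or FLOPs.
import time
import statistics

def measure_latency_ms(fn, *args, warmup=10, runs=100):
    """Return the median wall-clock latency of fn(*args) in milliseconds."""
    for _ in range(warmup):              # warm caches and allocators first
        fn(*args)
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(samples)    # median is robust to scheduler noise

# Toy "model": a dense dot product standing in for a forward pass.
def toy_forward(x, w):
    return sum(xi * wi for xi, wi in zip(x, w))

x = [0.1] * 4096
w = [0.2] * 4096
lat_ms = measure_latency_ms(toy_forward, x, w)
```

Under this kind of measurement, an unstructured-sparse model can show no speedup over its dense counterpart even with far fewer nonzero parameters, which is exactly the compression-vs-acceleration gap the paper highlights.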