AI Navigate

Alternating Gradient Flow Utility: A Unified Metric for Structural Pruning and Dynamic Routing in Deep Networks

arXiv cs.LG / 3/16/2026


Key Points

  • The paper proposes Alternating Gradient Flow (AGF) as a decoupled kinetic paradigm and a new absolute feature-space Taylor expansion to quantify a network's structural utility for pruning and dynamic routing.
  • It shows a topological phase transition at extreme sparsity where AGF preserves baseline functionality and exhibits implicit regularization, avoiding collapse seen in training from scratch.
  • It reveals a Sparsity Bottleneck in Vision Transformers: a gradient-magnitude decoupling analysis shows that gradient signals become compressed in converged models, making them suboptimal for real-time routing.
  • It introduces a hybrid routing framework that separates offline AGF-guided structural search from online execution via zero-cost physical priors, and demonstrates strong empirical results: 75% compression on ImageNet-1K avoiding structural collapse and Pareto-optimal efficiency on ImageNet-100 with about 50% reduction in heavy expert usage while maintaining full-model accuracy.
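The utility metric summarized above can be illustrated with a minimal sketch. The paper's exact formulation is not given here; this assumes the "absolute feature-space Taylor expansion" reduces to the familiar first-order criterion |a · ∂L/∂a| per channel, reduced over batch and spatial dimensions, followed by top-k channel selection. All function names and the reduction choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def taylor_channel_utility(activation, grad):
    """Score each channel by |a * dL/da|, summed over batch and spatial dims.

    activation, grad: arrays of shape (N, C, H, W) holding a layer's feature
    map and the loss gradient w.r.t. that feature map. This is a hypothetical
    reduction of the paper's absolute feature-space Taylor criterion: taking
    the absolute value before summing avoids the sign cancellation that a
    plain first-order term would suffer.
    """
    return np.abs(activation * grad).sum(axis=(0, 2, 3))

def prune_mask(utility, sparsity):
    """Keep the top (1 - sparsity) fraction of channels by utility score."""
    n = len(utility)
    k = max(1, int(round(n * (1.0 - sparsity))))
    keep = np.argsort(utility)[::-1][:k]  # indices of highest-utility channels
    mask = np.zeros(n, dtype=bool)
    mask[keep] = True
    return mask
```

In contrast, a magnitude-style metric (e.g., Wanda's weight-times-activation score) ignores the gradient term entirely, which is the "magnitude bias" the abstract argues fails to preserve critical functional pathways.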

Abstract

Efficient deep learning traditionally relies on static heuristics such as weight magnitude or activation awareness (e.g., Wanda, RIA). While successful in unstructured settings, we observe a critical limitation when applying these metrics to the structural pruning of deep vision networks: they suffer from a magnitude bias and fail to preserve critical functional pathways. To overcome this, we propose a decoupled kinetic paradigm inspired by Alternating Gradient Flow (AGF), utilizing an absolute feature-space Taylor expansion to accurately capture the network's structural "kinetic utility". First, we uncover a topological phase transition at extreme sparsity, where AGF successfully preserves baseline functionality and exhibits topological implicit regularization, avoiding the collapse seen in models trained from scratch. Second, transitioning to architectures without strict structural priors, we reveal a Sparsity Bottleneck in Vision Transformers (ViTs). Through a gradient-magnitude decoupling analysis, we discover that dynamic signals suffer from signal compression in converged models, rendering them suboptimal for real-time routing. Finally, driven by these empirical constraints, we design a hybrid routing framework that decouples AGF-guided offline structural search from online execution via zero-cost physical priors. We validate our paradigm on large-scale benchmarks: under a 75% compression stress test on ImageNet-1K, AGF avoids the structural collapse under which traditional metrics degrade below random sampling. Furthermore, when systematically deployed for dynamic inference on ImageNet-100, our hybrid approach achieves Pareto-optimal efficiency, reducing usage of the heavy expert by approximately 50% (an estimated overall cost of 0.92×) without sacrificing full-model accuracy.
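The offline/online split in the hybrid routing framework can be sketched as follows. This is a hedged illustration, not the paper's method: it assumes the "zero-cost physical prior" is a cheap per-input scalar (here, hypothetically, the input's L2 norm), that the offline stage simply calibrates a threshold on that prior so roughly a target fraction of inputs reach the heavy expert, and that the online stage is a thresholded dispatch with no learned router. All names and the choice of prior are assumptions for illustration.

```python
import numpy as np

def calibrate_threshold(prior_scores, heavy_budget):
    """Offline stage: given prior scores collected on a calibration set,
    pick the threshold that sends ~heavy_budget fraction of inputs to the
    heavy expert. (The paper's offline stage is an AGF-guided structural
    search; this stands in for its calibration output.)"""
    return float(np.quantile(prior_scores, 1.0 - heavy_budget))

def route(x, threshold, light_expert, heavy_expert):
    """Online stage: compute the zero-cost prior (assumed here to be the
    input norm) and dispatch, with no learned routing network in the path."""
    prior = float(np.linalg.norm(x))
    return heavy_expert(x) if prior > threshold else light_expert(x)
```

With `heavy_budget=0.5`, this halves heavy-expert usage relative to always running the full model, matching the ~50% reduction the abstract reports, provided the light expert preserves accuracy on the inputs it receives.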