Neural Network Pruning via QUBO Optimization

arXiv cs.CV / 4/8/2026


Key Points

  • The paper frames neural network pruning as a combinatorial optimization problem and argues that prior QUBO approaches underperformed due to oversimplified objectives (e.g., L1-norm) that miss interactions among filters.
  • It introduces a unified Hybrid QUBO formulation that uses gradient-aware sensitivity (first-order Taylor and second-order Fisher information) for the QUBO linear term and data-driven activation similarity for the quadratic term to capture both individual relevance and redundancy.
  • To ensure the desired sparsity level, the method adds a dynamic capacity-driven search that enforces target sparsity constraints without reshaping the optimization landscape.
  • The approach uses a two-stage pipeline where a Tensor-Train (TT) Refinement step (gradient-free) fine-tunes the QUBO solution against the true evaluation metric.
  • Experiments on the SIDD image denoising dataset show the Hybrid QUBO outperforms greedy Taylor pruning and traditional L1-based QUBO, with additional gains from TT Refinement at suitable pruning scales.
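The hybrid objective described above can be sketched as a small QUBO matrix: a linear (diagonal) term rewarding filters with high gradient-aware importance, and a quadratic (off-diagonal) term penalizing pairs of filters with similar activations. This is an illustrative reconstruction, not the authors' code; the function names, the first-order Taylor score `|w * g|`, cosine similarity, and the `alpha`/`beta` weights are assumptions for the sketch.

```python
import numpy as np

def build_hybrid_qubo(weights, grads, activations, alpha=1.0, beta=1.0):
    """Illustrative Hybrid QUBO construction (a sketch, not the paper's exact code).

    weights, grads : (F,) per-filter weight and gradient summaries
    activations    : (F, N) per-filter activation responses over N samples
    Binary variable x_i = 1 means filter i is KEPT.
    """
    F = len(weights)
    Q = np.zeros((F, F))

    # Linear term: first-order Taylor sensitivity |w * g|. Keeping an
    # important filter should lower the (minimized) energy, hence the sign.
    taylor = np.abs(weights * grads)
    Q[np.diag_indices(F)] = -alpha * taylor

    # Quadratic term: cosine similarity between activation patterns.
    # Keeping two redundant (highly similar) filters together is penalized.
    norms = np.linalg.norm(activations, axis=1, keepdims=True) + 1e-12
    A = activations / norms
    sim = beta * (A @ A.T)
    np.fill_diagonal(sim, 0.0)  # self-similarity is not a redundancy cost
    Q += sim
    return Q

def qubo_energy(Q, x):
    """Energy x^T Q x to be minimized over binary keep/prune vectors x."""
    return float(x @ Q @ x)
```

A second-order Fisher-information score could replace the Taylor diagonal without changing the structure of the matrix.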

Abstract

Neural network pruning can be formulated as a combinatorial optimization problem, yet most existing approaches rely on greedy heuristics that ignore complex interactions between filters. Formal optimization methods such as Quadratic Unconstrained Binary Optimization (QUBO) provide a principled alternative but have so far underperformed due to oversimplified objective formulations based on metrics like the L1-norm. In this work, we propose a unified Hybrid QUBO framework that bridges heuristic importance estimation with global combinatorial optimization. Our formulation integrates gradient-aware sensitivity metrics - specifically first-order Taylor and second-order Fisher information - into the linear term, while utilizing data-driven activation similarity in the quadratic term. This allows the QUBO objective to jointly capture individual filter relevance and inter-filter functional redundancy. We further introduce a dynamic capacity-driven search to strictly enforce target sparsity without distorting the optimization landscape. Finally, we employ a two-stage pipeline featuring a Tensor-Train (TT) Refinement stage - a gradient-free optimizer that fine-tunes the QUBO-derived solution directly against the true evaluation metric. Experiments on the SIDD image denoising dataset demonstrate that the proposed Hybrid QUBO significantly outperforms both greedy Taylor pruning and traditional L1-based QUBO, with TT Refinement providing further consistent gains at appropriate combinatorial scales. This highlights the potential of hybrid combinatorial formulations for robust, scalable, and interpretable neural network compression.
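The capacity-driven idea of enforcing a target sparsity "without distorting the optimization landscape" can be illustrated by searching only over solutions with a fixed number of kept filters, rather than adding a soft penalty term to the QUBO. The following local-search sketch is a hedged interpretation under that assumption; the swap-move heuristic, iteration count, and function name are not from the paper.

```python
import numpy as np

def cardinality_constrained_solve(Q, k, n_iters=200, seed=0):
    """Minimize x^T Q x over binary x with exactly k ones (k filters kept),
    using swap moves that preserve the cardinality instead of penalizing
    violations inside the objective. A simple illustrative heuristic.
    """
    rng = np.random.default_rng(seed)
    F = Q.shape[0]
    x = np.zeros(F, dtype=int)
    x[rng.choice(F, size=k, replace=False)] = 1  # random feasible start
    best_e = float(x @ Q @ x)
    for _ in range(n_iters):
        ones = np.flatnonzero(x == 1)
        zeros = np.flatnonzero(x == 0)
        if len(ones) == 0 or len(zeros) == 0:
            break
        i, j = rng.choice(ones), rng.choice(zeros)
        x[i], x[j] = 0, 1  # swap keeps sum(x) == k, so sparsity is exact
        e = float(x @ Q @ x)
        if e <= best_e:
            best_e = e     # accept improving (or equal-energy) swap
        else:
            x[i], x[j] = 1, 0  # revert
    return x, best_e
```

Because every candidate visited is feasible, the QUBO coefficients never need a sparsity-penalty weight, which is the distortion the paper's dynamic search avoids.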