A KL Lens on Quantization: Fast, Forward-Only Sensitivity for Mixed-Precision SSM-Transformer Models

arXiv cs.AI / 4/16/2026


Key Points

  • The paper targets deploying LLM-like hybrid SSM-Transformer models on edge devices via mixed-precision quantization, while mitigating the accuracy loss caused by uneven quantization effects across components.
  • It introduces a lightweight, surrogate-based, backpropagation-free sensitivity analysis method that uses only forward-pass metrics to rank which components are most vulnerable to quantization degradation.
  • The authors argue and formally analyze that Kullback–Leibler (KL) divergence is a better quantization-sensitivity metric for language modeling than common alternatives like MSE and SQNR.
  • Extensive experiments and ablation studies show KL-based component rankings correlate with observed performance drops and outperform other metrics, enabling more reliable mixed-precision decisions.
  • The method is validated via real-world on-device profiling on Intel Lunar Lake hardware, where KL-guided mixed-precision achieves near-FP16 perplexity with throughput and model-size tradeoffs competitive with Uniform INT4.
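The ranking procedure described in the key points can be sketched in a few lines: quantize one component at a time, run a forward pass, and score each component by the KL divergence between the full-precision and quantized output distributions. The sketch below is illustrative only (toy two-layer model, simulated round-to-nearest INT4, NumPy instead of a real inference stack); the names `fake_quantize`, `forward`, and the layer keys are assumptions, not the paper's actual API.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    """Mean token-level KL(p || q) between two next-token distributions."""
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

def fake_quantize(w, bits=4):
    """Simulated symmetric per-tensor round-to-nearest quantization."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

# Toy two-layer "model": logits = x @ W1 @ W2 (stand-in for SSM/attention blocks).
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))                     # a small batch of activations
weights = {"layer1": rng.normal(size=(16, 16)),
           "layer2": rng.normal(size=(16, 32))}

def forward(ws):
    return softmax(x @ ws["layer1"] @ ws["layer2"])

p_ref = forward(weights)                         # full-precision reference

# Forward-only sensitivity: quantize one component at a time,
# score by KL against the reference output, no gradients needed.
sensitivity = {}
for name in weights:
    ws = dict(weights)
    ws[name] = fake_quantize(weights[name], bits=4)
    sensitivity[name] = kl_divergence(p_ref, forward(ws))

ranking = sorted(sensitivity, key=sensitivity.get, reverse=True)
```

Components at the top of `ranking` would be kept at higher precision in a mixed-precision configuration, while the rest can be quantized aggressively.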

Abstract

Deploying Large Language Models (LLMs) on edge devices faces severe computational and memory constraints, limiting real-time processing and on-device intelligence. Hybrid architectures combining Structured State Space Models (SSMs) with transformer-based LLMs offer a balance of efficiency and performance. Aggressive quantization can drastically cut model size and speed up inference, but its uneven effects on different components require careful management. In this work, we propose a lightweight, backpropagation-free, surrogate-based sensitivity analysis framework to identify hybrid SSM-Transformer components most susceptible to quantization-induced degradation. Relying solely on forward-pass metrics, our method avoids expensive gradient computations and retraining, making it suitable for situations where access to in-domain data is limited due to proprietary restrictions or privacy constraints. We also provide a formal analysis showing that the Kullback-Leibler (KL) divergence metric better captures quantization sensitivity for language modeling tasks than widely adopted alternatives such as mean squared error (MSE) and signal-to-quantization-noise ratio (SQNR). Through extensive experiments on SSM and hybrid architectures, our ablation studies confirm that KL-based rankings align with observed performance drops and outperform alternative metrics. This framework enables the practical deployment of advanced hybrid models on resource-constrained edge devices with minimal accuracy loss. We further validate our approach with real-world on-device profiling on Intel Lunar Lake hardware, demonstrating that KL-guided mixed-precision achieves near-FP16 perplexity with model sizes and throughput competitive with Uniform INT4 on both CPU and GPU execution modes. Code is available at https://github.com/jasonkongie/kl-ssm-quant.
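The abstract's central claim is that KL divergence captures quantization sensitivity better than MSE or SQNR for language modeling. The intuition can be demonstrated with a toy example (not taken from the paper): two perturbations of a logit vector with identical MSE, one hitting the dominant logit and one hitting a negligible logit, produce very different output distributions, and only KL on the softmax outputs tells them apart.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q, eps=1e-12):
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

logits = np.array([5.0, 2.0, 1.0, 0.0])
p = softmax(logits)

# Two perturbations with identical MSE on the logits:
# (a) shifts the dominant logit, (b) shifts a near-zero-probability one.
a = logits.copy(); a[0] -= 1.0
b = logits.copy(); b[3] -= 1.0

mse_a = float(np.mean((logits - a) ** 2))   # 0.25
mse_b = float(np.mean((logits - b) ** 2))   # 0.25 -- MSE cannot distinguish them
kl_a = kl(p, softmax(a))                    # large: the top token's mass shifts
kl_b = kl(p, softmax(b))                    # tiny: the output barely changes
```

Because next-token prediction quality depends on the output distribution rather than raw tensor error, a distribution-level metric like KL is the natural fit, which is what the paper's formal analysis argues.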