Robust Ultra Low-Bit Post-Training Quantization via Stable Diagonal Curvature Estimate

arXiv cs.LG / April 16, 2026


Key Points

  • The paper introduces DASH-Q, a robust post-training quantization (PTQ) method tailored for ultra low-bit deployment of large language models using only a small calibration set.

Abstract

Large Language Models (LLMs) are widely used across many domains, but their scale makes deployment challenging. Post-Training Quantization (PTQ) reduces memory footprint without retraining by leveraging a small calibration set. Recent Hessian-based PTQ methods compensate quantization error via cross-channel dependencies, but such approaches degrade at low bit-widths due to noisy curvature estimates from limited calibration data. We propose DASH-Q, a robust PTQ framework using a diagonal Hessian approximation and iterative weighted least squares. By discarding noise-prone cross-channel dependencies, DASH-Q filters out sampling noise while prioritizing the preservation of salient feature power. DASH-Q outperforms other PTQ baselines in the ultra low-bit regime, improving zero-shot accuracy by 7.01% on average and by up to 14.01% over the strongest baselines across five LLMs, while remaining robust and stable even with very small calibration sets.
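The abstract's core idea, replacing the full (cross-channel) Hessian with its diagonal and fitting the quantization scale by weighted least squares, can be sketched as follows. This is an illustrative reconstruction, not the authors' released implementation: the function names `diag_hessian` and `quantize_irls`, the per-channel symmetric grid, and the fixed iteration count are all assumptions for the sketch. The diagonal curvature estimate is `diag(XᵀX)/n` from calibration activations `X`, and the scale update alternates with rounding in a coordinate-descent (IRLS-style) loop.

```python
import numpy as np

def diag_hessian(X):
    """Diagonal of X^T X / n: per-feature curvature estimate
    from a small calibration batch X of shape (n_samples, n_features)."""
    return np.mean(X * X, axis=0)

def quantize_irls(w, h, bits=2, iters=10):
    """Quantize weight vector w to a symmetric integer grid, choosing the
    scale s to minimize the curvature-weighted error sum_i h_i (w_i - s*q_i)^2
    by alternating rounding and a weighted least-squares scale update."""
    qmax = 2 ** (bits - 1) - 1
    s = np.max(np.abs(w)) / qmax          # initial scale: plain min-max fit
    q = np.round(w / s)
    for _ in range(iters):
        q = np.clip(np.round(w / s), -qmax - 1, qmax)  # round with current scale
        denom = np.sum(h * q * q)
        if denom == 0:                     # all-zero codes: nothing to fit
            break
        s = np.sum(h * w * q) / denom      # closed-form weighted LS scale
    return q * s, s
```

Because each step (rounding given `s`, refitting `s` given the codes) is optimal for the weighted objective, the curvature-weighted error is non-increasing across iterations, so the result is never worse than plain round-to-nearest under the same diagonal weighting. The real method additionally handles outlier/salient channels, which this sketch omits.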