D-QRELO: Training- and Data-Free Delta Compression for Large Language Models via Quantization and Residual Low-Rank Approximation

arXiv cs.LG / 4/21/2026


Key Points

  • The paper addresses the memory overhead caused by distributing many supervised fine-tuned (SFT) variants of the same large language model by using delta compression that stores only compressed delta weights.
  • It argues that existing delta compression methods degrade for large-scale SFT data, because increasing data scale enlarges delta parameter magnitudes and related spectral/entropy measures, leading to larger compression errors.
  • The authors propose DQRELO, a training- and data-free approach that first applies coarse one-bit quantization to model the dominant delta structure and then reconstructs finer details via compensated residual low-rank approximation.
  • Experiments across multiple LLMs (including dense and mixture-of-experts architectures) and across domains show DQRELO outperforms prior methods under the difficult large-delta setting.
  • The study also derives practical design principles indicating how task difficulty, model architecture, and layer location produce predictable compression patterns that can inform production deployment strategies.

Abstract

Supervised Fine-Tuning (SFT) accelerates the development of task-specific large language models (LLMs), but the resulting proliferation of fine-tuned models incurs substantial memory overhead. Delta compression addresses this by retaining a single pre-trained LLM with multiple compressed delta weights. However, existing methods fail on models fine-tuned with large-scale datasets. We find that larger SFT data scale amplifies delta parameter magnitude, singular values, and entropy, exacerbating compression errors. To tackle this, we propose DQRELO (Delta Compression via Quantization and Residual Low-Rank), a novel training- and data-free delta compression method. It combines coarse-grained one-bit quantization to capture the dominant structure of the delta, followed by compensated residual low-rank approximation to recover fine-grained details from the smaller residual error. Experiments on various LLMs spanning dense and MoE architectures across multiple domains under this challenging setting demonstrate that DQRELO outperforms existing methods. Moreover, we establish key design principles for delta compression through extensive empirical analysis, demonstrating how task difficulty, architecture, and layer positioning create predictable patterns that can guide optimal compression strategies in production systems.
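To make the two-stage idea concrete, here is a minimal NumPy sketch of the general pattern the abstract describes: a coarse one-bit quantization of a delta weight matrix (signs plus a scale), followed by a truncated-SVD low-rank approximation of the residual. The function names, the per-row mean-absolute-value scaling, and the plain (uncompensated) residual SVD are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def compress_delta(delta: np.ndarray, rank: int):
    # Stage 1 (assumed form): coarse one-bit quantization -- keep only the
    # sign of each entry plus a per-row scale, so signs * scale roughly
    # reproduces the dominant structure of delta.
    signs = np.sign(delta)
    scale = np.abs(delta).mean(axis=1, keepdims=True)
    coarse = signs * scale

    # Stage 2 (assumed form): low-rank approximation of the now-smaller
    # residual error via truncated SVD, to recover fine-grained details.
    residual = delta - coarse
    U, S, Vt = np.linalg.svd(residual, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]   # fold singular values into the left factor
    V_r = Vt[:rank]
    return signs, scale, U_r, V_r

def decompress_delta(signs, scale, U_r, V_r):
    # Reconstruction: coarse one-bit term plus the low-rank residual term.
    return signs * scale + U_r @ V_r

# Toy usage on a random "delta" matrix.
rng = np.random.default_rng(0)
delta = rng.standard_normal((64, 64)) * 0.01
parts = compress_delta(delta, rank=8)
approx = decompress_delta(*parts)
err = np.linalg.norm(delta - approx) / np.linalg.norm(delta)
```

Because the rank-`r` SVD is the best rank-`r` approximation of the residual in Frobenius norm, the reconstruction error of the combined scheme can never exceed that of the one-bit stage alone; the residual term only adds detail back.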