Budgeted LoRA: Distillation as Structured Compute Allocation for Efficient Inference

arXiv cs.AI / 5/7/2026


Key Points

  • The paper introduces “Budgeted LoRA,” a distillation method for large language models that explicitly targets efficient inference under a fixed compute budget.
  • Unlike prior parameter-efficient distillation approaches (e.g., LoRA) that keep the dense backbone largely unchanged, Budgeted LoRA reallocates capacity between dense and low-rank components to reduce inference cost.
  • It uses a single global budget control that determines the final fraction of dense computation retained, combining module-level dense retention coefficients, adaptive low-rank allocation, and post-training selective dense compression (a minimal illustrative sketch follows this list).
  • Experiments show Budgeted LoRA can match standard LoRA perplexity at moderate budgets with a 1.74× compressed-module speedup, and achieve a 4.05× speedup at aggressive budgets with only moderate perplexity loss.
  • The approach also better preserves accuracy on function-style in-context learning probes, suggesting that performance depends more on how dense computation is transferred to low-rank pathways than on parameter count or perplexity alone.
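
The paper's exact formulation is not reproduced in this summary, so the following is only a minimal sketch of how a per-module dense/low-rank blend under a single budget dial might look. The class `BudgetedLoRALinear`, the `allocate_budget` helper, and the sigmoid parameterization of the retention coefficient are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of a budgeted dense + low-rank module (assumes PyTorch).
import torch
import torch.nn as nn


class BudgetedLoRALinear(nn.Module):
    """Dense linear layer blended with a low-rank pathway.

    `retention` is a per-module coefficient in [0, 1] that scales how much of
    the original dense computation is kept; `rank` is the capacity given to the
    low-rank pathway. Both are assumed to be driven by one global budget
    (see allocate_budget below).
    """

    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        self.dense = nn.Linear(in_features, out_features, bias=False)
        self.lora_a = nn.Linear(in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # low-rank path starts as a no-op
        # Unconstrained scalar mapped through a sigmoid to stay in [0, 1].
        self._retention_logit = nn.Parameter(torch.zeros(()))

    @property
    def retention(self) -> torch.Tensor:
        return torch.sigmoid(self._retention_logit)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Blend the retained dense computation with the low-rank correction.
        return self.retention * self.dense(x) + self.lora_b(self.lora_a(x))


def allocate_budget(modules: list, budget: float) -> None:
    """Toy global control: rescale per-module retention coefficients so their
    mean matches the target fraction `budget` of dense computation retained."""
    with torch.no_grad():
        mean_retention = torch.stack([m.retention for m in modules]).mean()
        for m in modules:
            target = (m.retention * budget / mean_retention).clamp(1e-4, 1 - 1e-4)
            m._retention_logit.copy_(torch.logit(target))
```

Under this reading, sweeping `budget` from high to low trades dense computation for low-rank capacity, which is one way a single "budget dial" could yield the family of students described in the abstract.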

Abstract

We study distillation for large language models under explicit compute constraints, with the goal of producing student models that are not only cheaper to train, but structurally efficient at inference time. While prior approaches to parameter-efficient distillation, such as LoRA, reduce adaptation cost, they leave the dense backbone unchanged and therefore fail to deliver meaningful inference savings. We propose Budgeted LoRA, a distillation framework that treats model compression as a structured compute allocation problem. Instead of using a fixed student architecture, we introduce a global compute budget that sets the final target fraction of dense computation retained. Under this constraint, the model redistributes capacity across dense and low-rank pathways via (i) module-level dense retention coefficients, (ii) adaptive low-rank allocation, and (iii) post-training compression that selectively removes, approximates, or preserves dense components. This formulation yields a family of students controlled by a single budget dial. Empirically, Budgeted LoRA matches standard LoRA perplexity at a moderate budget with a 1.74x compressed-module speedup; at an aggressive budget it achieves a 4.05x speedup with moderate perplexity degradation, and it preserves higher accuracy on function-style in-context learning probes. These results suggest that, under compute-constrained distillation, retaining behavior is less about matching perplexity or removing more parameters than it is about controlling how dense computation is transferred to low-rank pathways.
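
The post-training step described above (selectively removing, approximating, or preserving dense components) could plausibly look like the truncated-SVD sketch below. The thresholds, the fixed rank, and the function name `compress_dense_weight` are illustrative assumptions, not the paper's actual procedure.

```python
# Hypothetical post-training compression of one module's dense weight,
# driven by its learned retention coefficient (assumes PyTorch).
from typing import Optional

import torch


def compress_dense_weight(weight: torch.Tensor, retention: float,
                          keep_above: float = 0.9, drop_below: float = 0.1,
                          rank: int = 16) -> Optional[torch.Tensor]:
    """Decide what happens to a dense weight after training.

    - retention >= keep_above: preserve the dense weight as-is;
    - retention <= drop_below: remove it entirely (return None);
    - otherwise: approximate it with a truncated SVD of the given rank.
    """
    if retention >= keep_above:
        return weight
    if retention <= drop_below:
        return None
    # Rank-`rank` approximation: keep only the top singular triplets.
    u, s, vh = torch.linalg.svd(weight, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vh[:rank, :]
```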