Prompt Compression in the Wild: Measuring Latency, Rate Adherence, and Quality for Faster LLM Inference

arXiv cs.CL / 4/6/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage · Models & Research

Key Points

  • The paper presents a large-scale, systematic study of prompt compression for faster LLM inference, focusing on the trade-off between compression overhead and decoding latency in RAG/IR settings.
  • Using thousands of runs across open-source LLMs, 30,000 queries, and three GPU classes, the study measures end-to-end latency, rate adherence, quality, and memory usage separately for compression and decoding steps.
  • LLMLingua can deliver up to 18% end-to-end speed-ups when prompt length, compression ratio, and available hardware capacity are well matched, with statistically unchanged output quality across summarization, code generation, and question answering.
  • If model/hardware/prompt conditions fall outside the “operating window,” the compression preprocessing time dominates and negates the latency gains.
  • Effective prompt compression can also reduce memory enough to offload workloads from data-center GPUs to commodity cards with only a ~0.3s latency increase, and the released profiler helps predict break-even points for specific model–hardware setups.

Abstract

With the wide adoption of language models for IR -- and specifically RAG systems -- the latency of the underlying LLM becomes a crucial bottleneck, since the long contexts of retrieved passages lead to large prompts and, therefore, increased compute. Prompt compression, which reduces the size of input prompts while aiming to preserve performance on downstream tasks, has established itself as a cost-effective and low-latency method for accelerating inference in large language models. However, its usefulness depends on whether the additional preprocessing time during generation is offset by faster decoding. We present the first systematic, large-scale study of this trade-off, with thousands of runs and 30,000 queries across several open-source LLMs and three GPU classes. Our evaluation separates compression overhead from decoding latency while tracking output quality and memory usage. LLMLingua achieves up to 18% end-to-end speed-ups when prompt length, compression ratio, and hardware capacity are well matched, with response quality remaining statistically unchanged across summarization, code generation, and question answering tasks. Outside this operating window, however, the compression step dominates and cancels out the gains. We also show that effective compression can reduce memory usage enough to offload workloads from data-center GPUs to commodity cards, with only a 0.3s increase in latency. Our open-source profiler predicts the latency break-even point for each model-hardware setup, providing practical guidance on when prompt compression delivers real-world benefits.
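The break-even logic the abstract describes can be illustrated with a small back-of-the-envelope model: compression pays off only when its fixed preprocessing overhead is smaller than the prefill time saved on the shortened prompt. The sketch below is not the paper's released profiler; all function names and timing parameters are illustrative assumptions.

```python
# Hypothetical sketch of the latency break-even reasoning.
# Assumption: end-to-end latency ≈ compression overhead + prompt prefill
# + output decoding, with per-token costs treated as constants for a
# given model-hardware pair. Real profiling (as in the paper) measures
# these terms empirically.

def end_to_end_latency_ms(prompt_tokens, output_tokens,
                          prefill_ms_per_token, decode_ms_per_token,
                          compress_ms=0.0, compression_ratio=1.0):
    """Total latency: optional compression step, then prefill, then decode."""
    effective_prompt = prompt_tokens / compression_ratio
    return (compress_ms
            + effective_prompt * prefill_ms_per_token
            + output_tokens * decode_ms_per_token)

def compression_wins(prompt_tokens, output_tokens,
                     prefill_ms_per_token, decode_ms_per_token,
                     compress_ms, compression_ratio):
    """True when compressing the prompt lowers end-to-end latency."""
    baseline = end_to_end_latency_ms(
        prompt_tokens, output_tokens,
        prefill_ms_per_token, decode_ms_per_token)
    compressed = end_to_end_latency_ms(
        prompt_tokens, output_tokens,
        prefill_ms_per_token, decode_ms_per_token,
        compress_ms, compression_ratio)
    return compressed < baseline
```

Under this model, a long RAG prompt (e.g. 4,000 tokens at 3x compression) easily absorbs a few hundred milliseconds of compression overhead, while a short prompt falls outside the "operating window" and the same overhead cancels the gains, which is the qualitative finding the study quantifies.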