Compression Method Matters: Benchmark-Dependent Output Dynamics in LLM Prompt Compression

arXiv cs.CL / 3/26/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The study argues that prompt compression should not be judged only by input-token reduction, because compression can change output length and total inference cost in benchmark-dependent ways.
  • Using 5,400 API calls across three benchmarks and multiple providers under aggressive compression (r=0.3), it finds that DeepSeek shows extreme output expansion on MBPP (56x, low instruction survival probability) but much less on HumanEval (5x, higher survival probability), while GPT-4o-mini is comparatively stable.
  • The authors introduce instruction survival probability (Ψ) as a structural metric to explain conflicting prior findings, showing that prompt structure and truncation effects matter more than provider identity alone.
  • They propose the Compression Robustness Index (CRI) to enable safer cross-benchmark evaluation, warning that single-benchmark tests can mislead conclusions about “compression safety” and efficiency.
  • Companion NVML-based energy measurements suggest that input-token savings may overstate actual energy (joule) savings, motivating benchmark-diverse and structure-aware compression policies for deployment.
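The cost argument in the first point can be made concrete with simple per-call token arithmetic. The prices, token counts, and 10x expansion factor below are illustrative assumptions, not figures from the paper:

```python
# Illustrative arithmetic (prices and token counts are made up, not taken
# from the paper): aggressive compression cuts input tokens, but if output
# length expands, total per-call cost can rise rather than fall.

def call_cost(in_tokens, out_tokens, in_price, out_price):
    """Cost of one API call at per-token prices (dollars)."""
    return in_tokens * in_price + out_tokens * out_price

IN_PRICE, OUT_PRICE = 0.15e-6, 0.60e-6  # $/token, hypothetical rates

# Uncompressed prompt: 1000 input tokens, a normal 200-token answer.
baseline = call_cost(1000, 200, IN_PRICE, OUT_PRICE)

# r=0.3 keeps 30% of the input, but suppose output expands 10x.
compressed = call_cost(300, 2000, IN_PRICE, OUT_PRICE)

print(compressed > baseline)  # True: output expansion swamps input savings
```

Since output tokens are typically priced several times higher than input tokens, even modest output expansion can erase the savings from dropping most of the prompt.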

Abstract

Prompt compression is often evaluated by input-token reduction, but its real deployment impact depends on how compression changes output length and total inference cost. We present a controlled replication and extension study of benchmark-dependent output dynamics under aggressive compression, covering 5,400 API calls across three benchmarks and multiple providers. To explain conflicting prior observations, we formalize instruction survival probability (Ψ), a structural metric that captures whether task-critical prompt segments remain after truncation. Results show a strong benchmark effect: under r=0.3, DeepSeek exhibits severe output expansion on MBPP (56x, Ψ ≈ 0.15) but substantially lower expansion on HumanEval (5x, Ψ ≈ 0.72), while GPT-4o-mini is comparatively stable across benchmarks. This reconciles the apparent discrepancy between the extreme output explosion reported previously and the smaller effects seen in replication, identifying prompt structure, not provider identity alone, as the primary moderator. We introduce the Compression Robustness Index (CRI) for cross-benchmark evaluation and show that single-benchmark assessments can produce misleading conclusions about compression safety and efficiency. To contextualize energy claims, we incorporate companion direct NVML measurements from rented RunPod GPUs and show that token savings can overstate joule savings. These findings motivate benchmark-diverse testing and structure-aware compression policies for reliable, energy-conscious LLM deployment.
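The two metrics in the abstract can be sketched in code. The text does not give either formula, so the definitions below are plausible assumptions for illustration only: Ψ as the fraction of task-critical prompt segments that survive compression verbatim, and CRI as the worst-case (minimum) inverse output-expansion ratio across benchmarks, so that a single fragile benchmark dominates the score.

```python
# Hedged sketch of the paper's two metrics; neither formula appears in the
# text above, so both definitions here are assumptions, not the authors'.

def instruction_survival(critical_segments, compressed_prompt):
    """Psi: fraction of task-critical segments still present verbatim in
    the compressed prompt (1.0 when there is nothing critical to lose)."""
    if not critical_segments:
        return 1.0
    kept = sum(seg in compressed_prompt for seg in critical_segments)
    return kept / len(critical_segments)

def compression_robustness_index(expansion_by_benchmark):
    """CRI: worst-case robustness across benchmarks, taken here as the
    minimum inverse output-expansion ratio (1.0 = no expansion anywhere)."""
    return min(1.0 / e for e in expansion_by_benchmark.values())

# Expansion ratios from the abstract: DeepSeek's fragility on MBPP (56x)
# dominates its cross-benchmark score even though HumanEval (5x) is milder.
cri = compression_robustness_index({"MBPP": 56.0, "HumanEval": 5.0})
print(round(cri, 3))  # 0.018
```

Taking the minimum rather than the mean reflects the paper's warning that single-benchmark tests mislead: a model is only as compression-safe as its worst benchmark.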