Haiku to Opus in Just 10 Bits: LLMs Unlock Massive Compression Gains

arXiv cs.AI / 4/6/2026


Key Points

  • The paper analyzes how LLM-generated text can be compressed under both lossless and lossy settings, identifying a “compression-compute frontier” where higher compression requires more compute.
  • In the lossless regime, domain-adapted LoRA adapters can roughly double the effectiveness of LLM-based arithmetic coding versus using the base model alone.
  • For lossy compression, the authors propose a workflow where an LLM produces a succinct rewrite followed by arithmetic coding, reaching compression ratios of roughly 0.03, about a 2x gain over compressing the original response.
  • The study introduces “Question-Asking” (QA), an interactive protocol in which a small model asks yes/no questions of a stronger model, transmitting one bit per answer. It recovers a large share of the small-vs-large capability gap at extremely small representation sizes (compression ratios of roughly 0.0006 to 0.004).
  • Across 8 benchmarks (math, science, code), 10 binary questions recover roughly 23%–72% of the small-vs-large capability gap on standard tasks and 7%–38% on harder tasks, outperforming prior LLM compression by over 100x in size efficiency.
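The lossless results above rest on a standard idea: arithmetic coding driven by a language model's next-token probabilities, so better-calibrated models (e.g. with domain-adapted LoRA adapters) yield shorter codes. The sketch below is a minimal, self-contained illustration of that coupling, using a toy fixed-probability `toy_model` in place of an LLM and exact `Fraction` arithmetic; all names are illustrative, not from the paper.

```python
from fractions import Fraction

def toy_model(context):
    # Hypothetical stand-in for an LLM's next-token distribution;
    # the real pipeline would query the (LoRA-adapted) model here.
    if context and context[-1] == "a":
        return {"a": Fraction(1, 2), "b": Fraction(1, 4), ".": Fraction(1, 4)}
    return {"a": Fraction(1, 4), "b": Fraction(1, 2), ".": Fraction(1, 4)}

def encode(text, model):
    # Narrow [low, high) by each symbol's conditional probability mass.
    low, high, context = Fraction(0), Fraction(1), []
    for s in text:
        probs, span, cum = model(context), high - low, Fraction(0)
        for sym, p in probs.items():
            if sym == s:
                low, high = low + span * cum, low + span * (cum + p)
                break
            cum += p
        context.append(s)
    # Emit bits until the dyadic interval [point, point+step) fits
    # inside [low, high); high-probability text needs fewer bits.
    target, point, step, bits = (low + high) / 2, Fraction(0), Fraction(1), []
    while not (low <= point and point + step <= high):
        step /= 2
        if target >= point + step:
            point += step
            bits.append(1)
        else:
            bits.append(0)
    return bits

def decode(bits, model, n):
    # Replay the same model to locate each symbol's subinterval.
    value = sum(Fraction(b, 2 ** (i + 1)) for i, b in enumerate(bits))
    low, high, out = Fraction(0), Fraction(1), []
    for _ in range(n):
        probs, span, cum = model(out), high - low, Fraction(0)
        for sym, p in probs.items():
            sym_low = low + span * cum
            sym_high = low + span * (cum + p)
            if sym_low <= value < sym_high:
                out.append(sym)
                low, high = sym_low, sym_high
                break
            cum += p
    return "".join(out)
```

Because encoder and decoder query the identical model, only the bit string needs to be transmitted; the better the model predicts the text, the wider each interval and the fewer bits emitted.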

Abstract

We study the compression of LLM-generated text across lossless and lossy regimes, characterizing a compression-compute frontier where more compression is possible at the cost of more compute. For lossless compression, domain-adapted LoRA adapters can improve LLM-based arithmetic coding by 2x over compression with the base LLM alone. For lossy compression, prompting a model for a succinct rewrite then applying arithmetic coding can achieve compression ratios of approximately 0.03, a 2x improvement over compressing the original response. We further introduce Question-Asking compression (QA), an interactive lossy protocol inspired by the game 'Twenty Questions'. A small model iteratively refines its response by asking yes/no questions to a stronger model, transferring exactly one bit per answer. On 8 benchmarks spanning math, science, and code, 10 binary questions recover 23% to 72% of the capability gap between a small and large model on standard benchmarks and 7% to 38% on harder benchmarks, achieving compression ratios of 0.0006 to 0.004. This is over 100x smaller than prior LLM-based compression (Delétang et al., 2024), suggesting that interactive protocols can transfer knowledge far more efficiently than transmitting full responses.
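The information-theoretic core of the QA protocol is that each yes/no answer carries at most one bit, so k questions can distinguish at most 2^k candidate refinements. A toy sketch of that budget, with a binary search over a candidate set and a stub `oracle` standing in for the strong model (all names here are illustrative, not the paper's implementation):

```python
from math import ceil, log2

def qa_protocol(candidates, oracle):
    # The small model narrows its candidate set with yes/no questions;
    # oracle(subset) stands in for the strong model answering
    # "is the correct answer in this subset?" -- one bit per call.
    lo, hi, bits_used = 0, len(candidates), 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if oracle(candidates[mid:hi]):
            lo = mid   # answer "yes": keep the upper half
        else:
            hi = mid   # answer "no": keep the lower half
        bits_used += 1
    return candidates[lo], bits_used

# With N candidates, ceil(log2(N)) answers suffice, so a 10-question
# budget can distinguish up to 1024 candidate refinements.
answers = [f"answer_{i}" for i in range(1024)]
truth = "answer_613"
guess, bits_used = qa_protocol(answers, lambda subset: truth in subset)
```

The reported compression ratios then follow directly: a 10-bit interaction is orders of magnitude smaller than transmitting the strong model's full response.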