Haiku to Opus in Just 10 bits: LLMs Unlock Massive Compression Gains
arXiv cs.AI / 4/6/2026
Key Points
- The paper analyzes how LLM-generated text can be compressed under both lossless and lossy settings, identifying a “compression-compute frontier” where higher compression requires more compute.
- In the lossless regime, domain-adapted LoRA adapters can roughly halve the compressed size achieved by LLM-based arithmetic coding compared with using the base model alone.
- For lossy compression, the authors propose a workflow in which an LLM first produces a succinct rewrite of the text and then applies arithmetic coding, reaching compression ratios around 0.03, roughly a 2x gain over compressing the original response directly.
- The study introduces “Question-Asking” (QA), an interactive protocol in which a small model asks yes/no questions of a stronger model, transmitting one bit per answer, and recovers much of the capability gap between the two models at extremely small representation sizes (compression ratios of roughly 0.0006 to 0.004).
- Across 8 benchmarks (math, science, code), 10 binary questions recover roughly 23%–72% of the small-vs-large capability gap on standard tasks and 7%–38% on harder tasks, outperforming prior LLM compression methods by over 100x in size efficiency.
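The lossless results above rest on arithmetic coding driven by a language model's next-token probabilities: a token costs about -log2 p(token | context) bits, so a better-adapted model (e.g. via LoRA) yields a shorter code. A minimal sketch of that bound, using made-up toy probabilities rather than the paper's models:

```python
import math

def ideal_code_length_bits(token_probs):
    """Shannon-optimal code length for a token sequence.

    Arithmetic coding with a language model as its probability
    source approaches this bound to within ~2 bits in total:
    each token costs -log2 p(token | context) bits.
    """
    return sum(-math.log2(p) for p in token_probs)

# Hypothetical per-token probabilities for the same 4-token text.
base_model_probs = [0.5, 0.25, 0.5, 0.125]   # base LM: less certain
adapted_model_probs = [0.9, 0.8, 0.9, 0.7]   # LoRA-adapted LM: more certain

print(ideal_code_length_bits(base_model_probs))     # 7.0 bits
print(ideal_code_length_bits(adapted_model_probs))  # ~1.14 bits
```

The adapted model assigns higher probability to the actual tokens, so the same text encodes to far fewer bits, which is the mechanism behind the roughly 2x lossless gain reported.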
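The QA protocol transmits exactly one bit per yes/no answer, so 10 questions carry 10 bits. A toy analogue, binary search against an answering oracle, shows how 10 answers suffice to pin down one of 2^10 possibilities; `ask_oracle` and `recover_secret` are illustrative names, not the paper's protocol or API:

```python
def ask_oracle(secret, question):
    """The 'strong model': answers a yes/no question with one bit."""
    return question(secret)

def recover_secret(num_bits, oracle):
    """The 'small model': binary-search a value in [0, 2**num_bits)
    using one yes/no question per bit of the transmitted message."""
    lo, hi = 0, 2 ** num_bits - 1
    transcript = []  # the bits actually transmitted
    while lo < hi:
        mid = (lo + hi) // 2
        bit = oracle(lambda s, m=mid: s > m)  # "is the secret above mid?"
        transcript.append(bit)
        if bit:
            lo = mid + 1
        else:
            hi = mid
    return lo, transcript

secret = 777
value, bits = recover_secret(10, lambda q: ask_oracle(secret, q))
print(value, len(bits))  # 777 10
```

Ten bits against an answer that would otherwise occupy kilobytes gives ratios on the order of 10^-3, consistent with the ~0.0006–0.004 range reported.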