BEAVER: A Training-Free Hierarchical Prompt Compression Method via Structure-Aware Page Selection

arXiv cs.CL / 3/23/2026


Key Points

  • BEAVER introduces a training-free hierarchical prompt compression method for long-context LLMs that shifts from token pruning to structure-aware page-level selection, reducing inference latency while preserving information fidelity.
  • The approach maps variable-length contexts into dense page-level tensors using dual-path pooling to maximize hardware parallelism during inference.
  • A hybrid planner combines semantic and lexical dual-branch selection with sentence smoothing to maintain discourse integrity across long documents.
  • Empirical evaluations on four long-context benchmarks show BEAVER matching the performance of state-of-the-art methods such as LongLLMLingua while cutting latency by 26.4x at 128k context sizes, and maintaining strong fidelity in multi-needle retrieval on the RULER benchmark.
  • The authors provide their code at https://cslikai.cn/BEAVER/, enabling practical adoption of the method.
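The page-level pooling described above can be pictured with a small sketch. The function below is not the paper's implementation: the name `pool_pages`, the page size, and the use of mean and max pooling as the two paths are all assumptions, meant only to illustrate how a variable-length context can be mapped to a dense, fixed-width page tensor that parallelizes well on accelerators.

```python
import numpy as np

def pool_pages(token_embs, page_size=64):
    """Illustrative sketch: split a (n_tokens, dim) embedding sequence into
    fixed-size pages and pool each page along two paths (mean and max),
    producing one dense (n_pages, 2 * dim) tensor regardless of input length.
    Mean/max as the two paths is an assumption, not the paper's definition."""
    n_tokens, dim = token_embs.shape
    n_pages = -(-n_tokens // page_size)          # ceiling division
    pad = n_pages * page_size - n_tokens
    # Zero-padding the last partial page is a simplification; it is harmless
    # for the mean path because we divide by the true token count below.
    padded = np.pad(token_embs, ((0, pad), (0, 0)))
    pages = padded.reshape(n_pages, page_size, dim)
    counts = np.full(n_pages, page_size, dtype=float)
    if pad:
        counts[-1] = page_size - pad
    mean_path = pages.sum(axis=1) / counts[:, None]  # per-page average "gist"
    max_path = pages.max(axis=1)                     # per-page salient features
    return np.concatenate([mean_path, max_path], axis=1)
```

Because every page collapses to the same shape, page relevance scoring becomes one batched matrix operation instead of a loop over variable-length spans, which is the hardware-parallelism point the summary makes.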

Abstract

The exponential expansion of context windows in LLMs has unlocked capabilities for long-document understanding but introduced severe bottlenecks in inference latency and information utilization. Existing compression methods often suffer from high training costs or semantic fragmentation due to aggressive token pruning. In this paper, we propose BEAVER, a novel training-free framework that shifts compression from linear token removal to structure-aware hierarchical selection. BEAVER maximizes hardware parallelism by mapping variable-length contexts into dense page-level tensors via dual-path pooling, and preserves discourse integrity through a hybrid planner combining semantic and lexical dual-branch selection with sentence smoothing. Extensive evaluations on four long-context benchmarks demonstrate that BEAVER achieves comparable performance to state-of-the-art (SOTA) methods like LongLLMLingua. Notably, on the RULER benchmark, BEAVER maintains high fidelity in multi-needle retrieval where baselines deteriorate. Regarding efficiency, BEAVER reduces latency by 26.4x on 128k contexts, offering a scalable solution for high-throughput applications. Our code is available at https://cslikai.cn/BEAVER/.
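The hybrid planner in the abstract combines a semantic branch and a lexical branch and then applies smoothing so that selected content keeps its local discourse context. The sketch below is a hedged illustration of that idea, not the paper's algorithm: the function name, the min-max normalization, the blending weight `alpha`, and the moving-average window are all assumptions standing in for whatever scoring and smoothing BEAVER actually uses.

```python
import numpy as np

def hybrid_select(sem_scores, lex_scores, k=4, alpha=0.5, window=3):
    """Illustrative sketch of dual-branch page selection: blend normalized
    semantic and lexical relevance scores, smooth over neighboring pages
    (a stand-in for the paper's sentence smoothing), and return the top-k
    page indices in original document order."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span else np.zeros_like(x)

    blended = alpha * norm(sem_scores) + (1 - alpha) * norm(lex_scores)
    # Moving-average smoothing pulls up pages adjacent to high-scoring ones,
    # so selections form coherent runs rather than isolated fragments.
    kernel = np.ones(window) / window
    smoothed = np.convolve(blended, kernel, mode="same")
    top = np.argsort(smoothed)[-k:]
    return np.sort(top)   # emit pages in document order to preserve discourse
```

Returning indices in document order (rather than score order) matters for the fidelity claim: the compressed prompt is reassembled as a subsequence of the original, so sentence flow across the kept pages is preserved.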