LayerBoost: Layer-Aware Attention Reduction for Efficient LLMs

arXiv cs.LG · April 27, 2026

Key Points

  • The paper introduces LayerBoost, a layer-aware method to reduce attention compute in transformers by selectively changing attention mechanisms per layer rather than applying one replacement uniformly.
  • It uses a sensitivity analysis on a pretrained model to classify layers as highly sensitive (keep standard softmax attention), moderately sensitive (switch to linear sliding-window attention), or minimally sensitive (remove attention entirely); minimal sketches of the assignment step and the windowed linear attention follow this list.
  • After modifying the architecture, the authors recover quality using a lightweight distillation “healing” phase that needs only 10M additional training tokens.
  • LayerBoost reduces inference latency and improves throughput by up to 68% under high concurrency while maintaining competitive benchmark performance and outperforming prior attention-linearization approaches.
  • The approach is positioned as especially useful for high-concurrency inference serving and deployments constrained by cost and memory footprint.
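
A minimal sketch of the assignment step, assuming a hypothetical scalar sensitivity score per layer (for example, the quality drop when that layer's attention is ablated on a held-out set). The score definition, thresholds, and names below are illustrative, not taken from the paper:

```python
from enum import Enum

class AttnKind(Enum):
    SOFTMAX = "softmax"        # highly sensitive: keep standard attention
    LINEAR_SWA = "linear_swa"  # moderately sensitive: linear sliding-window
    NONE = "none"              # low sensitivity: drop attention entirely

def assign_attention(sensitivity, hi=0.5, lo=0.05):
    """Map per-layer sensitivity scores to one of three attention variants.

    `sensitivity` is a hypothetical per-layer score; the `hi`/`lo`
    thresholds are illustrative, not taken from the paper.
    """
    return [
        AttnKind.SOFTMAX if s >= hi
        else AttnKind.LINEAR_SWA if s >= lo
        else AttnKind.NONE
        for s in sensitivity
    ]

# Example with made-up scores for an 8-layer model.
print(assign_attention([0.9, 0.7, 0.2, 0.1, 0.03, 0.6, 0.15, 0.01]))
```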

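For the moderately sensitive layers, one plausible reading of "linear sliding-window attention" is kernelized (softmax-free) attention restricted to a fixed window of recent positions. The sketch below uses the common ELU+1 feature map and a naive per-position loop for clarity; the paper's exact formulation may differ:

```python
import torch
import torch.nn.functional as F

def linear_sliding_window_attention(q, k, v, window=64):
    """Softmax-free attention over a causal sliding window.

    q, k, v: (batch, seq, dim). Uses the ELU+1 feature map as the kernel;
    cost is O(seq * window * dim) rather than O(seq^2 * dim). The naive
    loop is for clarity -- a sketch, not the paper's implementation.
    """
    phi_q = F.elu(q) + 1.0  # positive feature map replacing softmax
    phi_k = F.elu(k) + 1.0
    out = torch.empty_like(v)
    for i in range(q.shape[1]):
        lo = max(0, i - window + 1)
        kk = phi_k[:, lo:i + 1]                      # (batch, w, dim)
        vv = v[:, lo:i + 1]                          # (batch, w, dim)
        scores = torch.einsum("bd,bwd->bw", phi_q[:, i], kk)
        denom = scores.sum(-1, keepdim=True).clamp_min(1e-6)
        out[:, i] = torch.einsum("bw,bwd->bd", scores / denom, vv)
    return out
```
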
Abstract

Transformers rely predominantly on softmax attention, which has quadratic complexity in sequence length and remains a major bottleneck for efficient inference. Prior work on linear or hybrid attention typically replaces softmax attention uniformly across all layers, often leading to significant performance degradation or requiring extensive retraining to recover model quality. This work proposes LayerBoost, a layer-aware attention reduction method that selectively modifies the attention mechanism based on the sensitivity of individual transformer layers. It first performs a systematic sensitivity analysis on a pretrained model to identify the layers that are critical for maintaining performance. Guided by this analysis, LayerBoost applies one of three strategies per layer: retaining standard softmax attention in highly sensitive layers, replacing it with linear sliding-window attention in moderately sensitive layers, and removing attention entirely in layers that exhibit low sensitivity. To recover performance after these architectural modifications, we introduce a lightweight distillation-based healing phase requiring only 10M additional training tokens. LayerBoost reduces inference latency and improves throughput by up to 68% at high concurrency while maintaining competitive model quality: it matches base-model performance on several benchmarks, exhibits only minor degradation on others, and significantly outperforms state-of-the-art attention linearization methods. These efficiency gains make our method particularly well-suited for high-concurrency serving and hardware-constrained deployments, where inference cost and memory footprint are critical bottlenecks.
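
The healing phase is described only at a high level; one standard way to realize it is logit distillation from the unmodified base model (teacher) into the layer-modified model (student). The sketch below assumes Hugging Face-style causal LMs whose forward pass returns `.logits`; the temperature, loss, and optimizer are illustrative choices, not confirmed details of the paper:

```python
import torch
import torch.nn.functional as F

def healing_step(student, teacher, input_ids, optimizer, temperature=2.0):
    """One distillation update: match student logits to the frozen teacher.

    Assumes HF-style causal LMs returning `.logits` of shape
    (batch, seq, vocab). LayerBoost's exact loss and schedule are not
    specified in this summary, so treat this as a generic recipe.
    """
    with torch.no_grad():
        t_logits = teacher(input_ids=input_ids).logits
    s_logits = student(input_ids=input_ids).logits

    vocab = s_logits.size(-1)
    loss = F.kl_div(
        F.log_softmax(s_logits.reshape(-1, vocab) / temperature, dim=-1),
        F.softmax(t_logits.reshape(-1, vocab) / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under the stated 10M-token budget, batches of, say, 8 sequences of 2,048 tokens (about 16K tokens per step) would allow roughly 600 such updates, which is what makes the healing phase lightweight relative to pretraining.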