Attention Editing: A Versatile Framework for Cross-Architecture Attention Conversion

arXiv cs.CL / 4/8/2026


Key Points

  • The paper introduces “Attention Editing,” a framework to convert already-trained LLMs to use newer attention architectures (e.g., MLA and gated hybrid SWA) without full re-pretraining from scratch.
  • It addresses deployment constraints by avoiding overly strict structural matching between the source and target attention modules, using learnable target replacements instead.
  • Training relies on progressive distillation: layer-wise teacher-forced optimization with intermediate activation supervision to reduce cold-start error accumulation, followed by model-level distillation on next-token distributions.
  • The framework can optionally add weak feature matching regularization to improve stability and preserve performance while achieving inference efficiency gains in long-context/long-generation settings.
  • Experiments apply the method to Qwen3-8B and Qwen3-30B-A3B and include a practical training case study on Ascend 910B cluster hardware, reporting competitive performance alongside substantial efficiency improvements.
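The two-stage distillation recipe in the points above can be sketched with its two loss terms. This is a minimal, framework-free illustration under my own assumptions, not the paper's implementation; all function and variable names are hypothetical:

```python
import math

def layerwise_activation_loss(teacher_acts, student_acts):
    """Stage 1: mean-squared error between teacher and student hidden
    states, averaged over layers. Supervising each edited layer on
    teacher-forced activations keeps early-layer errors from cascading
    into later layers (the "cold-start error accumulation" problem)."""
    total = 0.0
    for t_layer, s_layer in zip(teacher_acts, student_acts):
        total += sum((t - s) ** 2 for t, s in zip(t_layer, s_layer)) / len(t_layer)
    return total / len(teacher_acts)

def next_token_kl(teacher_probs, student_probs, eps=1e-12):
    """Stage 2: KL(teacher || student) over the next-token distribution,
    i.e. model-level distillation after the per-layer warm-up."""
    return sum(p * math.log((p + eps) / (q + eps))
               for p, q in zip(teacher_probs, student_probs))
```

In a real conversion run these terms would be computed on tensors from the frozen source model and the edited target model; the optional weak feature-matching regularizer mentioned above would add a small-weight activation term to the stage-2 objective.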

Abstract

Key-Value (KV) cache memory and bandwidth increasingly dominate large language model inference cost in long-context and long-generation regimes. Architectures such as multi-head latent attention (MLA) and hybrid sliding-window attention (SWA) can alleviate this bottleneck, but integrating them into existing models remains difficult. Prior methods impose fine-grained structural requirements on both the source and target attention modules, which is rarely feasible in practical deployment. We present Attention Editing, a practical framework for converting already-trained large language models (LLMs) to new attention architectures without re-pretraining from scratch. Attention Editing replaces the original attention with a learnable target module and trains it using progressive distillation, consisting of (1) layer-wise teacher-forced optimization with intermediate activation supervision to prevent cold-start error accumulation, and (2) model-level distillation on next-token distributions, optionally regularized by weak feature matching. We instantiate the framework on two different targets, MLA and GateSWA (a gated hybrid SWA design), and apply it to Qwen3-8B and Qwen3-30B-A3B. The resulting models maintain competitive performance while delivering substantial efficiency improvements, demonstrating that large-scale attention conversion is both feasible and robust. Notably, experiments are conducted on Ascend 910B clusters, offering a practical training case study on domestic hardware.
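The KV-cache bottleneck the abstract opens with is easy to see with back-of-the-envelope arithmetic. The sketch below uses an illustrative grouped-query-attention configuration (32 layers, 8 KV heads, head dimension 128, fp16), not the actual Qwen3-8B numbers:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, dtype_bytes=2):
    """Size of the KV cache: one key and one value vector per token,
    per layer, per KV head, at dtype_bytes per element."""
    return 2 * n_layers * n_kv_heads * head_dim * dtype_bytes * seq_len

# Per-token cost for the illustrative config: 128 KiB per cached token.
per_token = kv_cache_bytes(32, 8, 128, seq_len=1)

# At a 128K-token context the cache alone reaches 16 GiB, which is why
# MLA (compressing KV into a low-rank latent) and SWA (bounding the
# attended window) target exactly this term.
at_128k = kv_cache_bytes(32, 8, 128, seq_len=131072)
```

Both target architectures in the paper shrink this quantity: MLA by caching a compressed latent instead of full per-head keys and values, and GateSWA by capping the effective `seq_len` in most layers.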