From Interpretability to Performance: Optimizing Retrieval Heads for Long-Context Language Models

arXiv cs.CL / 4/27/2026

💬 Opinion / Models & Research

Key Points

  • Mechanistic interpretability studies have highlighted retrieval heads as key to pulling information from context, but their impact on end-to-end long-context performance was previously unclear.
  • The paper proposes RetMask, which creates training signals by contrasting the normal model's outputs with those of an ablated variant in which the retrieval heads are masked (see the sketch after this list).
  • RetMask delivers sizable improvements for long-context LLMs, including a +2.28-point gain on HELMET at 128K for Llama-3.1 and relative gains of +70% on generation with citations and +32% on passage re-ranking, while maintaining general-task performance.
  • Experiments across four models in three families show consistent long-context gains, with the size of the gains tracking how sparse the retrieval-score distribution is across heads.
  • The results support the functional importance of retrieval heads and demonstrate that mechanistic interpretability can be converted into practical performance optimization.
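
A minimal sketch of the masking mechanism described above, not the paper's code: run the model once normally and once with the retrieval heads' attention outputs zeroed, then compare the two output distributions. The toy model, the head indices, and the log-probability gap used as the contrastive signal are all illustrative assumptions; the paper's exact loss and masking details may differ.

```python
import torch

torch.manual_seed(0)

D_MODEL, N_HEADS, SEQ_LEN, VOCAB = 64, 8, 16, 100
HEAD_DIM = D_MODEL // N_HEADS
RETRIEVAL_HEADS = {1, 5}  # hypothetical: identified beforehand via retrieval-score probing

# Toy single-layer "model": embedding -> one multi-head attention block -> LM head.
emb = torch.nn.Embedding(VOCAB, D_MODEL)
wq, wk, wv, wo = (torch.nn.Linear(D_MODEL, D_MODEL, bias=False) for _ in range(4))
lm_head = torch.nn.Linear(D_MODEL, VOCAB, bias=False)

def forward(tokens, masked_heads=frozenset()):
    """Next-token logits; the attention output of each head in masked_heads is zeroed."""
    x = emb(tokens)  # (T, D)
    q = wq(x).view(SEQ_LEN, N_HEADS, HEAD_DIM)
    k = wk(x).view(SEQ_LEN, N_HEADS, HEAD_DIM)
    v = wv(x).view(SEQ_LEN, N_HEADS, HEAD_DIM)
    att = torch.einsum("qhd,khd->hqk", q, k) / HEAD_DIM ** 0.5
    causal = torch.triu(torch.ones(SEQ_LEN, SEQ_LEN, dtype=torch.bool), 1)
    att = att.masked_fill(causal, float("-inf")).softmax(dim=-1)
    out = torch.einsum("hqk,khd->qhd", att, v)  # (T, H, Dh) per-head outputs
    for h in masked_heads:
        out[:, h, :] = 0.0  # ablate the retrieval heads
    return lm_head(x + wo(out.reshape(SEQ_LEN, D_MODEL)))

tokens = torch.randint(VOCAB, (SEQ_LEN,))
with torch.no_grad():
    logits_full = forward(tokens)                      # normal model
    logits_ablated = forward(tokens, RETRIEVAL_HEADS)  # retrieval heads masked

# Contrast the two runs: tokens the full model rates far higher than the
# ablated model are the ones that depend on retrieval heads, and are one
# plausible place to attach a training signal.
gap = logits_full.log_softmax(-1) - logits_ablated.log_softmax(-1)
print("most retrieval-dependent next tokens:", gap[-1].topk(3).indices.tolist())
```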

Abstract

Advances in mechanistic interpretability have identified special attention heads, known as retrieval heads, that are responsible for retrieving information from the context. However, the role of these retrieval heads in improving model performance remains unexplored. This work investigates whether retrieval heads can be leveraged to enhance the long-context capabilities of LLMs. Specifically, we propose RetMask, a method that generates training signals by contrasting normal model outputs with those from an ablated variant in which the retrieval heads are masked. This mechanism-based approach achieves substantial improvements: +2.28 points on HELMET at 128K for Llama-3.1, with a +70% gain on generation with citations and +32% on passage re-ranking, while preserving performance on general tasks. Experiments across four models in three families demonstrate that RetMask consistently improves long-context performance, with gains that correlate with the sparsity of the retrieval-score distribution: models with sparser distributions, where retrieval capabilities are concentrated in a small set of heads, respond more strongly, while those with less sparse distributions show more modest gains. These results validate the functional role of retrieval heads and show that mechanistic insights can be transformed into performance enhancements.
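
The correlation reported in the abstract, between gain size and the sparsity of the retrieval-score distribution, presupposes some measure of sparsity. This summary does not name one, so the Gini coefficient in the sketch below, together with the synthetic per-head retrieval scores, is an illustrative assumption rather than the paper's metric.

```python
import numpy as np

def gini(scores: np.ndarray) -> float:
    """Gini coefficient in [0, 1]; higher means scores concentrated in few heads."""
    s = np.sort(scores.astype(float))  # ascending
    lorenz = np.cumsum(s) / s.sum()    # cumulative share of total retrieval score
    return (s.size + 1 - 2 * lorenz.sum()) / s.size

rng = np.random.default_rng(0)
n_heads = 1024  # e.g., 32 layers x 32 heads

# Hypothetical "sparse" model: ~2% of heads carry almost all retrieval ability.
sparse = np.zeros(n_heads)
sparse[:20] = rng.uniform(0.5, 1.0, 20)
rng.shuffle(sparse)

# Hypothetical "diffuse" model: every head retrieves a little.
diffuse = rng.uniform(0.4, 0.6, n_heads)

print(f"sparse model Gini:  {gini(sparse):.2f}")   # close to 1
print(f"diffuse model Gini: {gini(diffuse):.2f}")  # close to 0
```

Under the paper's finding, a model resembling the first profile would be expected to benefit more from RetMask than one resembling the second.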