Temporal Dependencies in In-Context Learning: The Role of Induction Heads

arXiv cs.CL / 4/3/2026


Key Points

  • The paper studies how LLMs retrieve information from context, showing that several open-source models exhibit a serial-recall-like bias: they assign the highest probability to the token that immediately follows a repeated token in the input sequence (+1 lag behavior).
  • Through ablation experiments, it identifies “induction heads”—attention heads that attend to the token after a previous occurrence of the current token—as a key mechanistic driver of this temporal dependence pattern.
  • Removing attention heads with high induction scores substantially reduces the +1 lag bias, while ablating randomly selected heads does not produce the same effect.
  • The study further finds that high-induction-head ablation more strongly degrades few-shot prompted serial-recall performance than random-head ablation.
  • Overall, the results provide a mechanistically specific link between induction heads and ordered temporal context retrieval in transformer-based in-context learning.
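The induction score mentioned above can be made concrete. The sketch below is illustrative only (the function name `induction_score` and the synthetic attention matrix are assumptions, not the paper's implementation): for each query position, it sums the attention placed on positions one step after earlier occurrences of the query's own token, averaged over queries that have such a match. On a repeated random sequence, the standard induction probe, an idealized induction head puts all of its attention on those +1 positions and scores 1.0.

```python
import numpy as np

def induction_score(attn, tokens):
    """Mean attention a query places on the token that follows a
    previous occurrence of its own token (the +1 offset)."""
    T = len(tokens)
    score, count = 0.0, 0
    for q in range(T):
        # earlier positions holding the same token as position q
        prev = [p for p in range(q) if tokens[p] == tokens[q] and p + 1 < q]
        if prev:
            score += sum(attn[q, p + 1] for p in prev)
            count += 1
    return score / count if count else 0.0

rng = np.random.default_rng(0)
seq = list(rng.integers(0, 50, size=20))
tokens = seq + seq  # repeated random sequence: the usual induction probe

# Synthetic "perfect" induction head: all attention on (last match + 1).
T = len(tokens)
attn = np.zeros((T, T))
for q in range(T):
    prev = [p for p in range(q) if tokens[p] == tokens[q] and p + 1 < q]
    if prev:
        attn[q, prev[-1] + 1] = 1.0
    else:
        attn[q, q] = 1.0  # no earlier match: attend to self

print(round(induction_score(attn, tokens), 2))  # prints 1.0
```

In practice the attention matrix would come from a real model's forward pass, and heads would be ranked by this score before ablation; the synthetic matrix here just verifies the metric behaves as intended.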

Abstract

Large language models (LLMs) exhibit strong in-context learning capabilities, but how they track and retrieve information from context remains underexplored. Drawing on the free recall paradigm in cognitive science (where participants recall list items in any order), we show that several open-source LLMs consistently display a serial-recall-like pattern, assigning peak probability to tokens that immediately follow a repeated token in the input sequence. Through systematic ablation experiments, we show that induction heads, specialized attention heads that attend to the token following a previous occurrence of the current token, play an important role in this phenomenon. Removing heads with a high induction score substantially reduces the +1 lag bias, whereas ablating random heads does not reproduce the same reduction. We also show that removing heads with high induction scores impairs the performance of models prompted to do serial recall using few-shot learning to a larger extent than removing random heads. Our findings highlight a mechanistically specific connection between induction heads and temporal context processing in transformers, suggesting that these heads are especially important for ordered retrieval and serial-recall-like behavior during in-context learning.
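The ablation comparison in the abstract can be sketched in a few lines. This is a hedged illustration, not the paper's code: `ablate_heads`, the per-head output tensor, and the stand-in induction scores are all hypothetical. It contrasts zero-ablating the top-k heads by induction score against zero-ablating k randomly chosen heads, one common convention for head ablation (zeroing a head's contribution before it is summed into the residual stream).

```python
import numpy as np

def ablate_heads(head_outputs, head_ids):
    """Zero-ablate the listed heads, then sum the remaining
    head outputs as they would enter the residual stream."""
    out = head_outputs.copy()
    out[list(head_ids)] = 0.0
    return out.sum(axis=0)

rng = np.random.default_rng(1)
n_heads, T, d = 8, 5, 4
head_outputs = rng.normal(size=(n_heads, T, d))   # toy per-head outputs
induction_scores = rng.uniform(size=n_heads)      # stand-in scores per head

k = 2
top_k = np.argsort(induction_scores)[-k:]             # targeted ablation
rand_k = rng.choice(n_heads, size=k, replace=False)   # random-head baseline

targeted = ablate_heads(head_outputs, top_k)
baseline = ablate_heads(head_outputs, rand_k)
print(targeted.shape)  # (5, 4)
```

In the paper's setting, the downstream quantity compared under the two ablations is the +1 lag bias (and few-shot serial-recall accuracy), not the raw outputs shown here.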