Characterizing the Expressivity of Local Attention in Transformers

arXiv cs.CL / 5/4/2026

Key Points

  • The paper studies why adding local attention to transformers can improve quality, even though local attention was introduced mainly for efficiency.
  • It provides a formal explanation using “recognizer expressivity,” linking fixed-precision global-attention transformers to a fragment of linear temporal logic with a single past operator.
  • The authors prove that introducing local attention adds a second temporal operator, which strictly expands the class of regular languages the model can recognize.
  • They show that global and local attention are expressively complementary, in that neither alone subsumes the other, and that combining both gives the richest expressivity (restated set-theoretically in the sketch after this list).
  • Experiments in formal language recognition and natural language modeling confirm that hybrid global-local transformers outperform global-only models.
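
As a reading aid, the expressivity claims above can be restated as relations between language classes. The notation below is ours, not the paper's: assume each class denotes the regular languages recognizable by fixed-precision transformers with the corresponding attention type (global only, local only, or both combined).

```latex
% Schematic restatement of the key points (notation ours, not the paper's):
%   \mathcal{L}_{\mathrm{glob}}      -- recognizable with global attention only
%   \mathcal{L}_{\mathrm{loc}}       -- recognizable with local attention only
%   \mathcal{L}_{\mathrm{glob+loc}}  -- recognizable with both combined
\[
  \mathcal{L}_{\mathrm{glob}} \not\subseteq \mathcal{L}_{\mathrm{loc}},
  \qquad
  \mathcal{L}_{\mathrm{loc}} \not\subseteq \mathcal{L}_{\mathrm{glob}},
  \qquad
  \mathcal{L}_{\mathrm{glob}} \subsetneq \mathcal{L}_{\mathrm{glob+loc}},
  \qquad
  \mathcal{L}_{\mathrm{loc}} \subsetneq \mathcal{L}_{\mathrm{glob+loc}}.
\]
```

The last two strict inclusions follow from the first two together with the claim that the combined model recognizes everything either attention type can: since neither class contains the other, the combination is strictly richer than each alone.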

Abstract

The transformer is the most popular neural architecture for language modeling. The cornerstone of the transformer is its global attention mechanism, which lets the model aggregate information from all preceding tokens before generating the next token. One common variant of attention is called local attention, which restricts each token to aggregating information from a bounded window of predecessors, reducing the quadratic cost of global attention to linear. Although this restriction is usually motivated by efficiency, it has also been found to improve model quality, a phenomenon that has so far lacked a satisfactory explanation. We provide a formal account of this phenomenon in terms of recognizer expressivity. It has been shown that fixed-precision transformers with global attention correspond to a fragment of linear temporal logic containing a single past operator. We additionally prove that adding local attention introduces a second temporal operator, strictly enlarging the class of recognizable regular languages. Moreover, global and local attention are expressively complementary: neither subsumes the other, and combining them yields the richest fragment. Experiments on formal language recognition and natural language modeling corroborate the theory, showing that hybrid global-local transformers outperform their global-only counterparts.
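
For concreteness, here is a minimal NumPy sketch (not the paper's code) of the masking difference the abstract describes: a global causal mask lets every token attend to all of its predecessors, while a local mask limits each token to a fixed-size window of predecessors, which is what reduces the attention cost from quadratic to linear in sequence length. The window size, dimensions, and function names are illustrative choices.

```python
# Minimal sketch (not the paper's code): single-head attention with a global
# causal mask vs. a local sliding-window mask. All sizes are arbitrary.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v, mask):
    # q, k, v: (seq_len, d); mask: (seq_len, seq_len), True = may attend
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    return softmax(scores, axis=-1) @ v

def global_causal_mask(n):
    # Each token attends to all preceding tokens (and itself): O(n^2) pairs.
    return np.tril(np.ones((n, n), dtype=bool))

def local_causal_mask(n, window):
    # Each token attends only to the previous `window` tokens (and itself),
    # so the number of attended pairs grows linearly with n.
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    return (j <= i) & (j >= i - window + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, window = 8, 4, 3
    q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
    out_global = attention(q, k, v, global_causal_mask(n))
    out_local = attention(q, k, v, local_causal_mask(n, window))
    print(out_global.shape, out_local.shape)  # (8, 4) (8, 4)
```

A hybrid global-local transformer in the paper's sense would use both masking patterns in one model, presumably assigning them to different layers or heads, rather than relying on either pattern exclusively.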