Taking a Deep Breath: Enhancing Language Modeling of Large Language Models with Sentinel Tokens

arXiv cs.CL / 3/23/2026


Key Points

  • The paper proposes inserting a special token <SR> at the end of each text chunk and adjusting the attention mask to propagate chunk-level information through the <SR> token.
  • The <SR> token enables the model to summarize and integrate semantic information from each chunk, helping it reason over long contexts.
  • The approach targets Transformer-based LLMs' degradation on long-term contexts and shows improvements in language modeling and out-of-domain downstream tasks.
  • Experiments validate the effectiveness of sentinel tokens compared with baselines.

Abstract

Large language models (LLMs) have shown promising efficacy across various tasks, becoming powerful tools in numerous aspects of human life. However, Transformer-based LLMs suffer performance degradation when modeling long-term contexts because they discard some information to reduce computational overhead. In this work, we propose a simple yet effective method that enables LLMs to take a deep breath, encouraging them to summarize the information contained within discrete text chunks. Specifically, we segment the text into multiple chunks and insert a special token <SR> at the end of each chunk. We then modify the attention mask to integrate each chunk's information into its corresponding <SR> token. This allows LLMs to draw information not only from historical individual tokens but also from the <SR> tokens, which aggregate each chunk's semantic information. Experiments on language modeling and out-of-domain downstream tasks validate the superiority of our approach.
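The mechanism described above can be sketched in code. Below is a minimal, hypothetical illustration of one plausible masking scheme: a `<SR>` token is appended after each fixed-size chunk, and the attention mask lets a token see its own chunk causally plus all earlier `<SR>` summaries, while ordinary tokens in past chunks are hidden. The paper's exact mask rules and chunking strategy may differ; the function names and chunking-by-fixed-size choice here are assumptions for illustration only.

```python
import numpy as np

SR = "<SR>"  # sentinel token name, as used in the paper

def insert_sentinels(tokens, chunk_size):
    """Segment tokens into fixed-size chunks (an assumed strategy)
    and append the <SR> sentinel after each chunk."""
    out = []
    for i in range(0, len(tokens), chunk_size):
        out.extend(tokens[i:i + chunk_size])
        out.append(SR)
    return out

def build_attention_mask(seq):
    """Build a boolean causal mask where past chunks are visible
    only through their <SR> summary tokens (one plausible scheme):
      - a token attends causally within its own chunk,
      - a token attends to every earlier <SR> token,
      - ordinary tokens in earlier chunks are masked out.
    mask[i, j] = True means position i may attend to position j.
    """
    n = len(seq)
    # Assign each position a chunk id; the id increments after each <SR>.
    chunk_id, cid = [], 0
    for tok in seq:
        chunk_id.append(cid)
        if tok == SR:
            cid += 1
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1):  # causal: only j <= i
            same_chunk = chunk_id[i] == chunk_id[j]
            is_summary = seq[j] == SR
            mask[i, j] = same_chunk or is_summary
    return mask

# Example: two chunks of size 3; 'd' (index 4) sees its own chunk and
# the first chunk's <SR> (index 3), but not 'a', 'b', or 'c' directly.
seq = insert_sentinels(list("abcdef"), chunk_size=3)
mask = build_attention_mask(seq)
```

In this sketch, chunk-level information can only propagate forward through the `<SR>` positions, which is the intended effect: the sentinel is forced to act as a summary bottleneck for its chunk.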