AI Navigate

BLooP: Zero-Shot Abstractive Summarization using Large Language Models with Bigram Lookahead Promotion

arXiv cs.CL / 3/13/2026

💬 Opinion · Tools & Practical Usage · Models & Research

Key Points

  • BLooP is a training-free decoding intervention for large language models that promotes generation of source bigrams to improve abstractive summarization fidelity.
  • It uses a hash-table lookup at each decoding step to encourage bigram lookahead without fine-tuning or model modification.
  • Evaluations on CNN/DM, CCSum, Multi-News, and SciTLDR across models (e.g., Llama-3.1-8B-Instruct, Mistral-Nemo-Instruct-2407, Gemma-2-9b-it) show improvements in ROUGE and BARTScore, and human evaluation finds higher faithfulness with no loss in readability.
  • The authors release their code on GitHub for easy adoption and replication.
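The hash-table lookup behind BLooP can be sketched roughly as follows. This is an illustrative assumption of the mechanism, not the paper's exact formulation: the table maps each source token to the set of tokens that follow it, and at each decoding step the logits of tokens that would complete a source bigram receive an additive bonus. Function names and the bonus value are hypothetical.

```python
def build_bigram_table(source_tokens):
    """Map each source token id to the set of token ids that follow it
    somewhere in the source document (the bigram hash table)."""
    table = {}
    for prev, nxt in zip(source_tokens, source_tokens[1:]):
        table.setdefault(prev, set()).add(nxt)
    return table

def promote_bigrams(logits, prev_token, table, bonus=2.0):
    """At one decoding step, boost the logits of tokens that would
    complete a bigram from the source; leave all others unchanged."""
    boosted = list(logits)
    for tok in table.get(prev_token, ()):
        boosted[tok] += bonus
    return boosted

# Example: token ids 0..4, source sequence 1 2 3 2 4.
table = build_bigram_table([1, 2, 3, 2, 4])
# After generating token 2, tokens 3 and 4 both continue a source bigram.
new_logits = promote_bigrams([0.0, 0.0, 0.0, 0.0, 0.0], 2, table, bonus=1.5)
```

Because the lookup touches only the logits at each step, the intervention requires no training, fine-tuning, or change to model weights, consistent with the training-free framing above.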

Abstract

Abstractive summarization requires models to generate summaries that convey information in the source document. While large language models can generate summaries without fine-tuning, they often miss key details and include extraneous information. We propose BLooP (Bigram Lookahead Promotion), a simple training-free decoding intervention that encourages large language models (LLMs) to generate tokens that form bigrams from the source document. BLooP operates through a hash table lookup at each decoding step, requiring no training, fine-tuning, or model modification. We demonstrate improvements in ROUGE and BARTScore for Llama-3.1-8B-Instruct, Mistral-Nemo-Instruct-2407, and Gemma-2-9b-it on CNN/DM, CCSum, Multi-News, and SciTLDR. Human evaluation shows that BLooP significantly improves faithfulness without reducing readability. We make the code available at https://github.com/varuniyer/BLooP