BLooP: Zero-Shot Abstractive Summarization using Large Language Models with Bigram Lookahead Promotion
arXiv cs.CL / 3/13/2026
Key Points
- BLooP is a training-free decoding intervention for large language models that promotes generation of source bigrams to improve abstractive summarization fidelity.
- At each decoding step, it performs a hash-table lookup of source bigrams and promotes tokens that would complete one, requiring no fine-tuning or model modification.
- Evaluations on CNN/DM, CCSum, Multi-News, and SciTLDR across models (e.g., Llama-3.1-8B-Instruct, Mistral-Nemo-Instruct-2407, Gemma-2-9b-it) show improvements in ROUGE and BARTScore, with human evaluation noting higher faithfulness without harming readability.
- The authors release the code on GitHub for easy adoption and replication.
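The bigram-promotion mechanism described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the table maps each source token to the tokens that follow it, and at decode time a constant bonus (the value `2.0` is an assumption) is added to the logits of tokens that would complete a source bigram.

```python
# Illustrative sketch of bigram lookahead promotion (not the official BLooP code).
# Token IDs, the bonus value, and all helper names are assumptions.

from collections import defaultdict

def build_bigram_table(source_ids):
    """Map each source token ID to the set of token IDs that follow it."""
    table = defaultdict(set)
    for first, second in zip(source_ids, source_ids[1:]):
        table[first].add(second)
    return dict(table)

def promote_bigrams(logits, last_token, table, bonus=2.0):
    """Add a constant bonus to logits of tokens completing a source bigram."""
    boosted = list(logits)
    for tok in table.get(last_token, ()):
        boosted[tok] += bonus
    return boosted

# Toy example: a 6-token vocabulary; the "source" is a token-ID sequence.
source = [0, 3, 5, 3, 1]
table = build_bigram_table(source)          # {0: {3}, 3: {5, 1}, 5: {3}}
logits = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
out = promote_bigrams(logits, last_token=3, table=table)
print(out)  # tokens 1 and 5 (which follow 3 in the source) get the bonus
```

In an actual decoding loop, the boosted logits would replace the model's raw logits before sampling or beam search, which is what makes the intervention training-free.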