Large Language Models and Book Summarization: Reading or Remembering, Which Is Better?
arXiv cs.CL / 3/12/2026
Key Points
- Large-context LLMs with windows reaching millions of tokens can process entire books in a single prompt.
- For well-known books, LLMs can generate summaries from internal knowledge acquired during training, without reading the full text.
- The study experimentally compares summaries generated from internal memory with summaries generated from the book's full text (a minimal sketch of this setup appears after the list).
- In general, having the full text yields more detailed summaries, but for some books internal-knowledge summaries perform better.
- These results complicate claims about long-text summarization capability, since information memorized during training can, for some books, outperform summarization of the provided full text.