Can Structural Cues Save LLMs? Evaluating Language Models in Massive Document Streams

arXiv cs.CL · March 23, 2026


Key Points

  • StreamBench is a benchmark for evaluating language models in streaming environments, built from major news stories and comprising 605 events and 15,354 documents across three tasks: Topic Clustering, Temporal Question Answering, and Summarization.
  • The study compares model performance with and without structural cues that organize key facts by event, showing gains in clustering (up to +4.37%) and temporal QA (up to +9.63%).
  • Structural cues help models locate relevant information and separate distinct events, addressing the conflicts that arise when multiple concurrent events are mixed within a single stream.
  • Despite these gains, temporal reasoning remains a core challenge for current LLMs, indicating an ongoing need for better reasoning and structure-aware methods in massive document streams.
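The idea of "organizing key facts by event" can be sketched as a simple preprocessing step: group facts from an interleaved document stream by event and lay them out chronologically before handing them to a model. The field names (`event_id`, `timestamp`, `key_fact`) and the digest layout below are illustrative assumptions, not the paper's actual cue format.

```python
from collections import defaultdict

def build_structural_cues(docs):
    """Group key facts by event into a structured digest.

    Each doc is a dict with hypothetical fields 'event_id',
    'timestamp' (ISO date string), and 'key_fact'. The exact cue
    format used by StreamBench is not specified here; this is an
    illustrative layout only.
    """
    events = defaultdict(list)
    for doc in docs:
        events[doc["event_id"]].append((doc["timestamp"], doc["key_fact"]))
    lines = []
    for event_id in sorted(events):          # one block per event
        lines.append(f"[Event: {event_id}]")
        for ts, fact in sorted(events[event_id]):  # chronological order
            lines.append(f"  {ts}: {fact}")
    return "\n".join(lines)

# A toy interleaved stream mixing two concurrent events.
stream = [
    {"event_id": "election", "timestamp": "2016-11-08", "key_fact": "Polls open nationwide."},
    {"event_id": "storm",    "timestamp": "2016-10-04", "key_fact": "Hurricane makes landfall."},
    {"event_id": "election", "timestamp": "2016-11-09", "key_fact": "Winner declared."},
]
print(build_structural_cues(stream))
```

Prepending such a digest to the raw stream is one plausible way to give a model the event boundaries and timelines it would otherwise have to infer.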

Abstract

Evaluating language models in streaming environments is critical, yet underexplored. Existing benchmarks either focus on single complex events or provide curated inputs for each query, and do not evaluate models under the conflicts that arise when multiple concurrent events are mixed within the same document stream. We introduce StreamBench, a benchmark built from major news stories in 2016 and 2025, comprising 605 events and 15,354 documents across three tasks: Topic Clustering, Temporal Question Answering, and Summarization. To diagnose how models fail, we compare performance with and without structural cues, which organize key facts by event. We find that structural cues improve performance on clustering (up to +4.37%) and temporal QA (up to +9.63%), helping models locate relevant information and separate distinct events. While temporal reasoning remains an open challenge inherent to current LLMs, consistent gains across tasks show that structural cues are a promising direction for future work in massive document streams.