Beyond Facts: Benchmarking Distributional Reading Comprehension in Large Language Models

arXiv cs.CL / 4/9/2026


Key Points

  • The paper introduces Text2DistBench, a benchmark designed to test large language models’ ability to answer distributional reading comprehension questions rather than only factual, evidence-localized queries.
  • Text2DistBench is built from real-world YouTube comments about movie and music entities, providing models with entity metadata plus associated comments and asking them to infer population-level trends (e.g., sentiment proportions, most/second-most frequent topics).
  • The benchmark’s data construction pipeline is fully automated and continuously updated to add newly emerging entities over time, supporting reliable and longitudinal evaluation.
  • Experiments across multiple LLMs show that models substantially outperform random baselines, but accuracy varies widely across distribution types and characteristics, exposing both strengths and limitations of current models.
  • The authors position Text2DistBench as a scalable testbed for future research focused on distributional knowledge inference in LLMs.

Abstract

While most reading comprehension benchmarks for LLMs focus on factual information that can be answered by localizing specific textual evidence, many real-world tasks require understanding distributional information, such as population-level trends and preferences expressed across collections of text. We introduce Text2DistBench, a reading comprehension benchmark for evaluating LLMs' ability to infer distributional knowledge from natural language. Built from real-world YouTube comments about movie and music entities, the benchmark provides models with entity metadata and associated comments, and requires them to answer distributional questions, such as estimating the proportions of positive and negative comments, or identifying the most and second most frequent topics discussed among viewers. To support reliable and long-term evaluation, the construction pipeline of Text2DistBench is fully automated and continuously updated to incorporate newly emerging entities over time. Experiments across multiple LLMs show that while models substantially outperform random baselines, performance varies widely across different distribution types and characteristics. These findings highlight both the capabilities and limitations of current LLMs in distributional reading comprehension and demonstrate the value of Text2DistBench as a practical and scalable testbed for future research.
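To make the task concrete, here is a minimal sketch of how a distributional question might be built and scored. This is an illustration only: the sentiment labels, the question template, and the mean-absolute-error metric are all assumptions for this example, not the paper's actual automated pipeline or evaluation protocol.

```python
from collections import Counter

def build_distributional_question(comments):
    """Build a toy distributional question from labeled comments.

    `comments` is a list of (text, sentiment) pairs; the gold labels
    here are hand-assigned for illustration, whereas Text2DistBench
    derives its ground truth automatically from real YouTube comments.
    """
    counts = Counter(label for _, label in comments)
    total = sum(counts.values())
    gold = {label: n / total for label, n in counts.items()}
    question = ("Given the comments below, estimate the proportion "
                "of positive and negative comments.")
    return question, gold

def score_prediction(gold, predicted):
    """Score predicted proportions by mean absolute error
    (a plausible metric; the paper's metric may differ)."""
    labels = set(gold) | set(predicted)
    return sum(abs(gold.get(l, 0.0) - predicted.get(l, 0.0))
               for l in labels) / len(labels)

# Toy example: four comments about a hypothetical movie entity.
comments = [
    ("Loved the soundtrack!", "positive"),
    ("Best film of the year.", "positive"),
    ("Overrated and dull.", "negative"),
    ("Amazing cinematography.", "positive"),
]
question, gold = build_distributional_question(comments)
# gold -> {"positive": 0.75, "negative": 0.25}
error = score_prediction(gold, {"positive": 0.7, "negative": 0.3})
# error -> 0.05
```

The key contrast with factual QA is that no single comment contains the answer; the model must aggregate over the whole collection, which is exactly what the benchmark's population-level ground truth tests.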