
How often do Answers Change? Estimating Recency Requirements in Question Answering

arXiv cs.CL / 3/18/2026


Key Points

  • Large language models often rely on outdated knowledge for time-sensitive questions, leading to confident yet incorrect responses when external evidence isn’t retrieved.
  • The paper introduces a recency-stationarity taxonomy that categorizes questions by how often their answers change and whether this change frequency is context-dependent.
  • It presents RecencyQA, a dataset of 4,031 open-domain questions annotated with recency and stationarity labels, enabling fine-grained benchmarking of temporal reasoning.
  • Findings show that non-stationary questions, where context changes the recency requirement, are harder for LLMs, with difficulty increasing as update frequency rises, highlighting the need for recency-aware retrieval and ranking.

Abstract

Large language models (LLMs) often rely on outdated knowledge when answering time-sensitive questions, leading to confident yet incorrect responses. Without explicit signals indicating whether up-to-date information is required, models struggle to decide when to retrieve external evidence, how to reason about stale facts, and how to rank answers by their validity. Existing benchmarks either periodically refresh answers or rely on fixed templates, but they do not capture how frequently answers change or whether a question inherently requires up-to-date information. To address this gap, we introduce a recency-stationarity taxonomy that categorizes questions by how often their answers change and whether this change frequency is time-invariant or context-dependent. Building on this taxonomy, we present RecencyQA, a dataset of 4,031 open-domain questions annotated with recency and stationarity labels. Through human evaluation and empirical analysis, we show that non-stationary questions, i.e., those where context changes the recency requirement, are significantly more challenging for LLMs, with difficulty increasing as update frequency rises. By explicitly modeling recency and context dependence, RecencyQA enables fine-grained benchmarking and analysis of temporal reasoning beyond binary notions of freshness, and provides a foundation for developing recency-aware and context-sensitive question answering systems.
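To make the taxonomy concrete, here is a minimal sketch of how recency and stationarity labels might be attached to questions and used to drive a simple retrieval decision. The label names (`STATIC`, `SLOW`, `FAST`) and the `needs_fresh_retrieval` policy are illustrative assumptions, not the paper's actual annotation scheme or method:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical recency buckets: how often the answer changes.
class Recency(Enum):
    STATIC = "answer never changes"
    SLOW = "answer changes over months or years"
    FAST = "answer changes within days or hours"

# Stationary: the update frequency itself is time-invariant.
# Non-stationary: the update frequency depends on context
# (e.g., a question about a match score only updates during the match).
class Stationarity(Enum):
    STATIONARY = "update frequency is time-invariant"
    NON_STATIONARY = "update frequency depends on context"

@dataclass
class LabeledQuestion:
    question: str
    recency: Recency
    stationarity: Stationarity

    def needs_fresh_retrieval(self) -> bool:
        """Sketch of a recency-aware policy: fetch external evidence
        whenever the answer may have drifted since training, or when
        context-dependence makes staleness hard to predict."""
        return (self.recency is not Recency.STATIC
                or self.stationarity is Stationarity.NON_STATIONARY)

items = [
    LabeledQuestion("What is the capital of France?",
                    Recency.STATIC, Stationarity.STATIONARY),
    LabeledQuestion("Who is the current UK prime minister?",
                    Recency.SLOW, Stationarity.STATIONARY),
    LabeledQuestion("What is the score in the ongoing match?",
                    Recency.FAST, Stationarity.NON_STATIONARY),
]
print([q.needs_fresh_retrieval() for q in items])  # → [False, True, True]
```

A benchmark item carrying both labels lets an evaluator slice results by update frequency and by context dependence, which is exactly the fine-grained analysis the abstract describes.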