Evaluating Temporal Consistency in Multi-Turn Language Models

arXiv cs.CL · April 28, 2026

📰 News · Models & Research

Key Points

  • The paper studies how multi-turn language models preserve, update, or transfer implicit time-related assumptions across dialogue turns rather than only answering single questions.
  • It introduces ChronoScope, a large diagnostic benchmark with over one million deterministically generated multi-turn question chains grounded in Wikidata to test temporal scope stability.
  • Evaluations on state-of-the-art models show frequent failures in temporal scope stability, where models drift toward present-day assumptions even when their underlying factual knowledge is correct.
  • Violations worsen as conversation length increases and persist even when models are given oracle context, indicating a gap between single-turn factual accuracy and consistent temporal reasoning.
  • The authors release the dataset and evaluation suite on GitHub for public use and further research.
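The key failure mode above, implicit temporal carryover, can be illustrated with a minimal sketch. The code below is not from the ChronoScope release; it is a hypothetical reconstruction of the kind of deterministic two-turn chain the benchmark is described as generating: turn 1 fixes a temporal scope explicitly, and turn 2 omits the time reference, so a scope-stable model must keep reasoning about the earlier year rather than defaulting to today. The `TimeScopedFact` record and question templates are assumptions, loosely mirroring a time-scoped Wikidata statement.

```python
from dataclasses import dataclass

# Hypothetical record mimicking a time-scoped Wikidata statement
# (entity, relation, value, and the validity interval of the statement).
@dataclass
class TimeScopedFact:
    entity: str
    relation: str
    value: str
    start_year: int
    end_year: int

def build_chain(fact: TimeScopedFact) -> list[str]:
    """Deterministically build a two-turn probe for implicit carryover.

    Turn 1 establishes the temporal scope with an explicit year.
    Turn 2 deliberately omits any time reference: a consistent model
    should keep answering within the scope set in turn 1 instead of
    drifting to the present-day holder of the relation.
    """
    anchor_year = fact.start_year  # scope established in turn 1
    return [
        f"Who was the {fact.relation} of {fact.entity} in {anchor_year}?",
        "Which political party did they belong to at that time?",  # no explicit year
    ]

fact = TimeScopedFact("Germany", "chancellor", "Helmut Kohl", 1982, 1998)
chain = build_chain(fact)
```

Because the chains are template-driven and seeded from structured facts, generation is fully deterministic, which is what lets a benchmark of this kind scale past a million chains without manual annotation.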

Abstract

Language models are increasingly deployed in interactive settings where users reason about facts over time rather than in isolation. In such scenarios, correct behavior requires models to maintain and update implicit temporal assumptions established earlier in a conversation. We study this challenge through the lens of temporal scope stability: the ability to preserve, override, or transfer time-scoped factual context across dialogue turns. We introduce ChronoScope, a large-scale diagnostic benchmark designed to isolate temporal scope behavior in controlled multi-turn interactions, comprising over one million deterministically generated question chains grounded in Wikidata. ChronoScope evaluates whether models can correctly retain inferred temporal scope when follow-up questions omit explicit time references, spanning implicit carryover, explicit scope switching, cross-entity transfer, and longer temporal trajectories. Through extensive evaluation of state-of-the-art language models, we find that temporal scope stability is frequently violated in controlled multi-turn settings, with models often drifting toward present-day assumptions despite correct underlying knowledge. These failures intensify with interaction length and persist even under oracle context conditions, revealing a gap between single-turn factual accuracy and coherent temporal reasoning under sequential interaction. We make our dataset and evaluation suite publicly available at https://github.com/yashkumaratri/ChronoScope.
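The "drift toward present-day assumptions" failure described in the abstract suggests a simple scoring rule. The sketch below is an assumption about how such an evaluation could be implemented, not the paper's actual scorer: it labels a follow-up answer as scope-consistent when it matches the time-scoped gold value, and as present-drift when it instead matches the current-day value of the same relation.

```python
def classify_answer(answer: str, scoped_gold: str, present_day: str) -> str:
    """Classify a model's answer to an implicitly time-scoped follow-up.

    - 'scope-consistent': the answer matches the value valid at the
      temporal scope established earlier in the conversation.
    - 'present-drift': the answer matches today's value, i.e. the model
      silently abandoned the earlier scope (the failure mode reported).
    - 'other': neither value appears (hallucination or refusal).
    String containment is a simplifying assumption; a real evaluator
    would normalize entity mentions and aliases.
    """
    normalized = answer.lower()
    if scoped_gold.lower() in normalized:
        return "scope-consistent"
    if present_day.lower() in normalized:
        return "present-drift"
    return "other"

# Turn 1 fixed the scope to 1990; the follow-up omitted the year.
label = classify_answer(
    answer="That was Helmut Kohl.",
    scoped_gold="Helmut Kohl",
    present_day="Friedrich Merz",
)
# label == "scope-consistent"
```

Aggregating such labels across chains of increasing length would reproduce the kind of curve the abstract reports, with the present-drift rate rising as the conversation grows.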