AdapTime: Enabling Adaptive Temporal Reasoning in Large Language Models

arXiv cs.CL / 4/28/2026


Key Points

  • The paper argues that while large language models excel at general reasoning, their ability to reason over temporal information is still limited.
  • It criticizes existing temporal-reasoning approaches for relying on external tools, manual checks, or fixed pipelines that hurt generalization and waste computation.
  • It proposes AdapTime, an adaptive method that chooses reasoning steps dynamically based on the question’s temporal context rather than using a one-size-fits-all workflow.
  • AdapTime defines three temporal reasoning actions (reformulate, rewrite, and review), with an LLM planner deciding which to apply, and is designed to work without external support.
  • The authors report that extensive experiments show the approach substantially improves temporal reasoning when integrated with state-of-the-art LLMs.

Abstract

Large language models have demonstrated strong reasoning capabilities in general knowledge question answering. However, their ability to handle temporal information remains limited. To address this limitation, existing approaches often involve external tools or manual verification and are tailored to specific scenarios, leading to poor generalizability. Moreover, these methods apply a fixed pipeline to all questions, overlooking the fact that different types of temporal questions require distinct reasoning strategies, which leads to unnecessary processing for simple cases and inadequate reasoning for complex ones. To this end, we propose AdapTime, an adaptive temporal reasoning method that dynamically executes reasoning steps based on the input context. Specifically, it involves three temporal reasoning actions: reformulate, rewrite and review, with an LLM planner guiding the reasoning process. AdapTime integrates seamlessly with state-of-the-art LLMs and significantly enhances their temporal reasoning capabilities without relying on external support. Extensive experiments demonstrate the effectiveness of our approach.
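To make the adaptive idea concrete, the planner-driven loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the action names come from the paper, but the `toy_planner` heuristic (a stand-in for the LLM planner) and all function names are assumptions for illustration.

```python
from enum import Enum

class Action(Enum):
    REFORMULATE = "reformulate"  # restate the question's temporal constraints explicitly
    REWRITE = "rewrite"          # e.g. normalize relative time expressions to absolute dates
    REVIEW = "review"            # check a draft answer against the temporal constraints
    ANSWER = "answer"            # stop planning and emit the final answer

def toy_planner(question: str, history: list) -> Action:
    """Stand-in for the LLM planner: picks the next action from the question
    and the actions taken so far. In AdapTime, an LLM makes this choice."""
    taken = {a for a, _ in history}
    if ("before" in question or "after" in question) and Action.REFORMULATE not in taken:
        return Action.REFORMULATE
    if any(w in question for w in ("last year", "ago", "next")) and Action.REWRITE not in taken:
        return Action.REWRITE
    if history and Action.REVIEW not in taken:
        return Action.REVIEW
    return Action.ANSWER

def adaptive_reason(question: str) -> list:
    """Run the planner loop; return the sequence of actions actually executed."""
    history = []
    while True:
        action = toy_planner(question, history)
        if action is Action.ANSWER:
            return [a.value for a, _ in history]
        history.append((action, f"applied {action.value}"))

# A question with no temporal cues gets no extra steps; a temporally
# complex one triggers a longer chain -- this is the claimed adaptivity.
print(adaptive_reason("Who wrote Hamlet?"))                  # []
print(adaptive_reason("Who was president before Lincoln?"))  # ['reformulate', 'review']
```

The key point the sketch conveys is that the number and kind of reasoning steps vary per question, avoiding both wasted computation on simple cases and under-reasoning on complex ones.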