TimeSeriesExamAgent: Creating Time Series Reasoning Benchmarks at Scale

arXiv cs.AI / 4/14/2026


Key Points

  • The paper questions whether LLMs genuinely understand time series data beyond superficial pattern matching, noting that existing benchmarks are often manually curated and narrowly scoped.
  • It introduces TimeSeriesExam, a multiple-choice benchmark built on synthetic time series and organized into five reasoning categories: pattern recognition, noise understanding, similarity analysis, anomaly detection, and causality.
  • It proposes TimeSeriesExamAgent to scale benchmark creation by automatically generating exam-like tasks from real-world datasets across healthcare, finance, and weather.
  • The authors report that the automatically generated benchmarks achieve diversity comparable to manually curated ones based on multi-dimensional quality evaluation.
  • Experimental results suggest LLM performance is still limited for both abstract time-series reasoning and domain-specific applications, indicating continuing gaps in time series understanding.
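To make the task format concrete, here is a minimal, illustrative sketch of how one synthetic multiple-choice item (here, anomaly detection, one of the five categories) might be generated. This is not the paper's actual pipeline: TimeSeriesExamAgent combines templates with LLM agents over real-world datasets, whereas this sketch only shows the general shape of a synthetic MCQ task; the function name and parameters are hypothetical.

```python
import math
import random

def make_anomaly_mcq(n=200, seed=0):
    """Build one synthetic anomaly-detection multiple-choice item.

    Illustrative only: generates a smooth seasonal series, injects a
    point anomaly, and asks the model to locate it among four choices.
    """
    rng = random.Random(seed)
    # Smooth seasonal signal with mild Gaussian noise.
    series = [math.sin(2 * math.pi * t / 50) + rng.gauss(0, 0.05)
              for t in range(n)]
    # Inject a point anomaly (a large spike) at a random position.
    anomaly_idx = rng.randrange(10, n - 10)
    series[anomaly_idx] += 5.0
    # Distractor choices: indices well away from the anomaly.
    distractors = rng.sample(
        [i for i in range(n) if abs(i - anomaly_idx) > 5], 3)
    choices = distractors + [anomaly_idx]
    rng.shuffle(choices)
    return {
        "question": "At which time step does the series contain a point anomaly?",
        "series": series,
        "choices": choices,
        "answer": anomaly_idx,
    }

item = make_anomaly_mcq()
print(item["question"], item["choices"], item["answer"])
```

Scaling this idea means swapping the hand-written template for agent-generated questions grounded in real healthcare, finance, or weather data, which is the gap TimeSeriesExamAgent targets.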

Abstract

Large Language Models (LLMs) have shown promising performance in time series modeling tasks, but do they truly understand time series data? While multiple benchmarks have been proposed to answer this fundamental question, most are manually curated and focus on narrow domains or specific skill sets. To address this limitation, we propose scalable methods for creating comprehensive time series reasoning benchmarks that combine the flexibility of templates with the creativity of LLM agents. We first develop TimeSeriesExam, a multiple-choice benchmark using synthetic time series to evaluate LLMs across five core reasoning categories: pattern recognition, noise understanding, similarity analysis, anomaly detection, and causality. Then, with TimeSeriesExamAgent, we scale our approach by automatically generating benchmarks from real-world datasets spanning healthcare, finance, and weather domains. Through multi-dimensional quality evaluation, we demonstrate that our automatically generated benchmarks achieve diversity comparable to manually curated alternatives. However, our experiments reveal that LLM performance remains limited in both abstract time series reasoning and domain-specific applications, highlighting ongoing challenges in enabling effective time series understanding in these models. TimeSeriesExamAgent is available at https://github.com/magwiazda/TimeSeriesExamAgent.