BEDTime: A Unified Benchmark for Automatically Describing Time Series

arXiv cs.CL / 4/13/2026


Key Points

  • The paper introduces BEDTime, a unified benchmark that evaluates how well models can recognize, differentiate, and generate structural descriptions of univariate time series.
  • BEDTime includes five datasets reformatted across three modalities to support cross-modal evaluation of time series understanding.
  • Experiments on 17 state-of-the-art models show that dedicated time-series-language models underperform, vision-language models perform comparatively well, and language-only methods perform worst.
  • The study finds all evaluated approaches are fragile under real-world robustness tests, highlighting gaps in current multi-modal time-series modeling and directions for future research.

Abstract

Recent works propose complex multi-modal models that handle both time series and language, ultimately claiming high performance on complex tasks like time series reasoning and cross-modal question answering. However, they skip foundational evaluations that such complex models should have mastered. So we ask a simple question: *How well can recent models describe structural properties of time series?* To answer this, we propose that successful models should be able to *recognize*, *differentiate*, and *generate* descriptions of univariate time series. We then create **BEDTime**, a benchmark to assess these novel tasks, comprising **five datasets** reformatted across **three modalities**. In evaluating **17 state-of-the-art models**, we find that (1) surprisingly, dedicated time series–language models fall short, despite being designed for similar tasks, (2) vision–language models are quite capable, (3) language-only methods perform worst, despite many lauding their potential, and (4) all approaches are clearly fragile under a range of real-world robustness tests, indicating directions for future work. Together, our findings critique prior works' claims and provide avenues for advancing multi-modal time series modeling.
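To make the recognition task concrete, here is a minimal sketch of how one might construct a BEDTime-style item: generate a synthetic univariate series with a known structural property, then ask a model to pick the matching description from a set of options. The series generators, item schema, and field names here are illustrative assumptions, not the paper's actual datasets or format.

```python
import numpy as np

# Hypothetical sketch of a recognition-style item (schema assumed,
# not taken from the paper's actual data files).
rng = np.random.default_rng(0)

def make_series(kind, n=64):
    """Generate a univariate series exhibiting a known structural property."""
    t = np.linspace(0, 1, n)
    noise = 0.05 * rng.standard_normal(n)
    if kind == "upward trend":
        return t + noise
    if kind == "single peak":
        return np.exp(-((t - 0.5) ** 2) / 0.02) + noise
    if kind == "periodic":
        return np.sin(6 * np.pi * t) + noise
    raise ValueError(f"unknown kind: {kind}")

def recognition_item(true_kind, distractors):
    """Recognition: given a series, choose the description that matches it."""
    series = make_series(true_kind)
    options = [true_kind] + list(distractors)
    return {"series": series.tolist(), "options": options, "answer": 0}

item = recognition_item("upward trend", ["single peak", "periodic"])
```

The differentiation and generation tasks described in the abstract could reuse the same generators: differentiation would pair two series and ask which one a description refers to, while generation would present the series alone and score a free-form description against the known property.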