ARFBench: Benchmarking Time Series Question Answering Ability for Software Incident Response

arXiv cs.LG / 4/24/2026


Key Points

  • The paper introduces ARFBench, a new benchmark for time series question answering (TSQA) focused on detecting and reasoning about anomalies in software incident data.
  • ARFBench includes 750 questions spanning 142 time series and 5.38M data points from 63 production incidents, sourced exclusively from Datadog’s internal telemetry.
  • Evaluations across proprietary and open-source LLMs/VLMs and time series foundation models show that frontier VLMs outperform prior baselines, with GPT-5 leading at 62.7% accuracy and 51.9% F1.
  • The authors propose a specialized TSFM+VLM hybrid approach and demonstrate that a prototype post-trained on a small set of synthetic and real data can match the overall performance of frontier models.
  • They also find that models and human experts have complementary strengths: a best-of-2 oracle selector over model and expert answers reaches 82.8% F1 and 87.2% accuracy, setting a "superhuman" frontier for future TSQA systems.
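The model-expert oracle in the last point is a best-of-2 upper bound: a question counts as correct if either the model or the human expert answers it correctly. A minimal sketch of that accuracy computation (the function name and all data are illustrative, not from ARFBench):

```python
# Best-of-2 "model-expert oracle" accuracy, as described in the paper's
# evaluation. The data below is a toy example, not ARFBench data.

def oracle_accuracy(gold, model_answers, expert_answers):
    """A question is counted correct if EITHER the model or the
    human expert matches the gold answer (best-of-2 upper bound)."""
    hits = sum(
        g == m or g == e
        for g, m, e in zip(gold, model_answers, expert_answers)
    )
    return hits / len(gold)

# Model and expert err on different questions, so the oracle
# beats either one alone.
gold   = ["A", "B", "C", "D"]
model  = ["A", "B", "X", "X"]   # 50% accuracy alone
expert = ["X", "B", "C", "D"]   # 75% accuracy alone
print(oracle_accuracy(gold, model, expert))  # → 1.0
```

Because the oracle picks the better of the two answers per question with perfect hindsight, it measures complementarity rather than a deployable system; the paper frames it as a target for future TSQA models.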

Abstract

Time series question-answering (TSQA), in which we ask natural language questions to infer and reason about properties of time series, is a promising yet underexplored capability of foundation models. In this work, we present ARFBench, a TSQA benchmark that evaluates the understanding of multimodal foundation models (FMs) on time series anomalies prevalent in software incident data. ARFBench consists of 750 questions across 142 time series and 5.38M data points from 63 production incidents sourced exclusively from internal telemetry at Datadog. We evaluate leading proprietary and open-source LLMs, VLMs, and time series FMs and observe that frontier VLMs perform markedly better than existing baselines; the leading model (GPT-5) achieves 62.7% accuracy and 51.9% F1. We next demonstrate the promise of specialized multimodal approaches. We develop a novel TSFM + VLM hybrid prototype which, post-trained on a small set of synthetic and real data, yields overall F1 and accuracy comparable to frontier models. Lastly, we find models and human domain experts exhibit complementary strengths. We define a model-expert oracle, a best-of-2 oracle selector over model and expert answers, yielding 82.8% F1 and 87.2% accuracy and establishing a new superhuman frontier for future TSQA models. The benchmark is available at https://huggingface.co/datasets/Datadog/ARFBench.
