Time Series Augmented Generation for Financial Applications

arXiv cs.AI / 4/22/2026


Key Points

  • The paper tackles a long-standing problem in evaluating LLM reasoning for quantitative finance, noting that many benchmarks fail to isolate an agent’s core skills of parsing queries and orchestrating computations.
  • It proposes a new evaluation methodology and benchmark specifically for financial time-series analysis, using tool-augmented LLM agents that delegate computations to verifiable external tools.
  • Using its Time Series Augmented Generation (TSAG) framework, the authors run a large empirical study across multiple state-of-the-art agents (e.g., GPT-4o, Llama 3, and Qwen2).
  • The benchmark includes 100 financial questions and measures tool-selection accuracy, faithfulness, and hallucination, finding that strong agents can reach near-perfect tool-use accuracy with minimal hallucinations.
  • The authors’ main deliverables are the public evaluation framework and empirical insights aimed at enabling more standardized research on reliable financial AI.
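To make the tool-augmented pattern concrete: the agent's job reduces to mapping a natural-language query onto a call to a verifiable external tool, rather than computing the number itself. A minimal sketch of that delegation loop (the tool names and the keyword-based router below are illustrative stand-ins, not the paper's actual tool set or agent):

```python
# Sketch of tool-augmented delegation: the agent selects a verifiable tool
# and its arguments; the quantitative work happens outside the model.
# Tool names and the keyword router are hypothetical, for illustration only.
from statistics import mean, pstdev

def moving_average(series, window):
    """Simple moving average over a fixed window."""
    return [mean(series[i - window + 1:i + 1])
            for i in range(window - 1, len(series))]

def volatility(series):
    """Population std. dev. of period-over-period returns."""
    returns = [(b - a) / a for a, b in zip(series, series[1:])]
    return pstdev(returns)

TOOLS = {"moving_average": moving_average, "volatility": volatility}

def route(query, series):
    # Stand-in for the LLM's tool-selection step.
    if "average" in query.lower():
        return "moving_average", TOOLS["moving_average"](series, window=3)
    if "volatil" in query.lower():
        return "volatility", TOOLS["volatility"](series)
    raise ValueError("no matching tool")

prices = [100.0, 102.0, 101.0, 105.0, 107.0]
tool, answer = route("What is the 3-day moving average?", prices)
print(tool, answer)
```

Because the tool's output is deterministic and checkable, an evaluator can verify both that the right tool was chosen and that the final answer is faithful to the tool's result.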

Abstract

Evaluating the reasoning capabilities of Large Language Models (LLMs) for complex, quantitative financial tasks is a critical and unsolved challenge. Standard benchmarks often fail to isolate an agent's core ability to parse queries and orchestrate computations. To address this, we introduce a novel evaluation methodology and benchmark designed to rigorously measure an LLM agent's reasoning for financial time-series analysis. We apply this methodology in a large-scale empirical study using our framework, Time Series Augmented Generation (TSAG), where an LLM agent delegates quantitative tasks to verifiable, external tools. Our benchmark, consisting of 100 financial questions, is used to compare multiple SOTA agents (e.g., GPT-4o, Llama 3, Qwen2) on metrics assessing tool selection accuracy, faithfulness, and hallucination. The results demonstrate that capable agents can achieve near-perfect tool-use accuracy with minimal hallucination, validating the tool-augmented paradigm. Our primary contribution is this evaluation framework and the corresponding empirical insights into agent performance, which we release publicly to foster standardized research on reliable financial AI.
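The metrics named in the abstract can be illustrated with a toy scoring pass over per-question evaluation records (the record schema and labels below are hypothetical; the paper's exact scoring rubric is not reproduced here):

```python
# Toy benchmark scoring: tool-selection accuracy is the fraction of questions
# where the agent called the expected tool; hallucination rate is the fraction
# of answers flagged as unsupported by any tool output.
# The record schema is illustrative only.
records = [
    {"expected_tool": "moving_average", "called_tool": "moving_average", "hallucinated": False},
    {"expected_tool": "volatility",     "called_tool": "volatility",     "hallucinated": False},
    {"expected_tool": "max_drawdown",   "called_tool": "volatility",     "hallucinated": True},
    {"expected_tool": "sharpe_ratio",   "called_tool": "sharpe_ratio",   "hallucinated": False},
]

tool_accuracy = sum(r["called_tool"] == r["expected_tool"] for r in records) / len(records)
hallucination_rate = sum(r["hallucinated"] for r in records) / len(records)
print(f"tool accuracy: {tool_accuracy:.2f}, hallucination rate: {hallucination_rate:.2f}")
```

A faithfulness check would additionally compare the agent's final stated answer against the verified tool output for each question.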