Can Large Language Models Adequately Perform Symbolic Reasoning Over Time Series?

arXiv cs.AI / 4/27/2026

💬 Opinion / Ideas & Deep Analysis / Models & Research

Key Points

  • The paper argues that extracting interpretable, context-aligned symbolic laws from time-series data is still a largely open problem for large language models despite their strengths in structured reasoning.
  • It introduces SymbolBench, a benchmark for symbolic reasoning over real-world time series spanning multivariate symbolic regression, Boolean network inference, and causal discovery, covering more diverse and complex symbolic forms than prior work.
  • The authors propose a closed-loop framework combining LLMs with genetic programming, where LLMs serve both as predictors and evaluators to refine symbolic hypotheses over iterations.
  • Experiments show both strengths and limitations of current approaches, emphasizing the need for domain knowledge, context alignment, and explicit reasoning structure to better support automated scientific discovery.
  • A publicly available implementation is provided via the project’s GitHub repository.
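The closed-loop idea above can be sketched in miniature. The toy below is an assumption-laden illustration, not the paper's implementation: `llm_propose` and `llm_evaluate` are hypothetical stand-ins for the LLM predictor and evaluator roles (here replaced by random symbolic mutations and a simple error threshold), wrapped in a plain genetic-programming loop that refines candidate expressions against a toy time series.

```python
import math
import random

# Toy time series generated by a hidden law y = 2*sin(t) (the target to recover).
ts = [(t / 10, 2 * math.sin(t / 10)) for t in range(100)]

def fitness(expr):
    """Mean squared error of a candidate expression (a string in variable t)."""
    try:
        err = 0.0
        for t, y in ts:
            err += (eval(expr, {"math": math, "t": t}) - y) ** 2
        return err / len(ts)
    except Exception:
        return float("inf")  # malformed candidates are heavily penalized

def llm_propose(parents):
    """Hypothetical stand-in for the LLM 'predictor': here, a random symbolic
    mutation of a parent. In the paper's framework this would be a model call
    returning refined symbolic hypotheses."""
    base = random.choice(parents)
    edits = [
        lambda e: f"({e}) + math.sin(t)",
        lambda e: f"({e}) * 2",
        lambda e: "math.sin(t)",
        lambda e: f"({e}) - 1",
    ]
    return random.choice(edits)(base)

def llm_evaluate(expr, mse):
    """Hypothetical stand-in for the LLM 'evaluator': accept hypotheses below
    an error threshold. A real evaluator could also judge plausibility and
    context alignment, not just numeric fit."""
    return mse < 1.0

def closed_loop(generations=30, pop_size=8):
    """One closed loop: propose candidates, score them, let the evaluator
    filter the survivors, and iterate."""
    population = ["t", "1.0"]
    for _ in range(generations):
        candidates = population + [llm_propose(population) for _ in range(pop_size)]
        scored = sorted((fitness(e), e) for e in candidates)
        kept = [e for mse, e in scored[:pop_size] if llm_evaluate(e, mse)]
        population = kept or [scored[0][1]]  # always keep at least the fittest
    best_mse, best_expr = min((fitness(e), e) for e in population)
    return best_expr, best_mse

random.seed(0)
expr, mse = closed_loop()
print(expr, mse)
```

The division of labor mirrors the benchmark's framing: the proposer explores the symbolic search space while the evaluator prunes it, and iterating the loop substitutes for the single-shot prompting that prior evaluations relied on.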

Abstract

Uncovering hidden symbolic laws from time series data, an aspiration dating back to Kepler's discovery of the laws of planetary motion, remains a core challenge in scientific discovery and artificial intelligence. While Large Language Models show promise in structured reasoning tasks, their ability to infer interpretable, context-aligned symbolic structures from time series data is still underexplored. To systematically evaluate this capability, we introduce SymbolBench, a comprehensive benchmark designed to assess symbolic reasoning over real-world time series across three tasks: multivariate symbolic regression, Boolean network inference, and causal discovery. Unlike prior efforts limited to simple algebraic equations, SymbolBench spans a diverse set of symbolic forms with varying complexity. We further propose a unified framework that integrates LLMs with genetic programming to form a closed-loop symbolic reasoning system, where LLMs act both as predictors and evaluators. Our empirical results reveal key strengths and limitations of current models, highlighting the importance of combining domain knowledge, context alignment, and reasoning structure to improve LLMs in automated scientific discovery. Code: https://github.com/nuuuh/SymbolBench.