FinTradeBench: A Financial Reasoning Benchmark for LLMs

arXiv cs.CL / 3/20/2026

📰 News · Signals & Early Trends · Models & Research

Key Points

  • FinTradeBench introduces a benchmark for financial reasoning in LLMs that integrates company fundamentals and trading signals across 1,400 questions grounded in NASDAQ-100 companies over a ten-year window.
  • It groups questions into fundamentals-focused, trading-signal-focused, and hybrid categories, the last requiring cross-signal reasoning (see the schema sketch after this list).
  • The authors adopt a calibration-then-scaling framework with seed questions, multi-model responses, self-filtering, numerical auditing, and human-LLM judge alignment to ensure reliable evaluation.
  • Evaluation of 14 LLMs shows that retrieval-augmented setups improve arithmetic and textual reasoning over company fundamentals but offer limited gains for trading-signal reasoning, revealing current limits in numerical and time-series understanding.
  • The work highlights open directions for future research in financial intelligence and for improving LLMs in finance.
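
To make the benchmark's organization concrete, here is a minimal sketch of how one of the 1,400 questions might be represented. The class and field names are illustrative assumptions, not the authors' actual schema.

```python
from dataclasses import dataclass
from enum import Enum


class ReasoningCategory(Enum):
    """The three reasoning categories described in the paper."""
    FUNDAMENTALS = "fundamentals-focused"      # filings-derived quantities
    TRADING_SIGNAL = "trading-signal-focused"  # price-dynamics quantities
    HYBRID = "hybrid"                          # cross-signal reasoning


@dataclass
class BenchmarkQuestion:
    """One benchmark item (hypothetical fields for illustration)."""
    ticker: str              # NASDAQ-100 constituent, e.g. "MSFT"
    window: tuple[int, int]  # ten-year historical window, e.g. (2014, 2023)
    category: ReasoningCategory
    question: str
    reference_answer: str


# A hybrid item ties a fundamental to a trading signal:
example = BenchmarkQuestion(
    ticker="MSFT",
    window=(2014, 2023),
    category=ReasoningCategory.HYBRID,
    question="Did FY2021 revenue growth coincide with positive 50-day momentum?",
    reference_answer="Yes",
)
```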

Abstract

Real-world financial decision-making is a challenging problem that requires reasoning over heterogeneous signals, including company fundamentals derived from regulatory filings and trading signals computed from price dynamics. Recently, with the advancement of Large Language Models (LLMs), financial analysts have begun to use them for financial decision-making tasks. However, existing financial question-answering benchmarks for testing these models primarily focus on company balance sheet data and rarely evaluate reasoning over how company stocks trade in the market or how those trading dynamics interact with fundamentals. To capture the strengths of both signal types, we introduce FinTradeBench, a benchmark for evaluating financial reasoning that integrates company fundamentals and trading signals. FinTradeBench contains 1,400 questions grounded in NASDAQ-100 companies over a ten-year historical window. The benchmark is organized into three reasoning categories: fundamentals-focused, trading-signal-focused, and hybrid questions requiring cross-signal reasoning. To ensure reliability at scale, we adopt a calibration-then-scaling framework that combines expert seed questions, multi-model response generation, intra-model self-filtering, numerical auditing, and human-LLM judge alignment. We evaluate 14 LLMs under zero-shot prompting and retrieval-augmented settings and observe a clear performance gap: retrieval substantially improves reasoning over textual fundamentals but provides limited benefit for trading-signal reasoning. These findings highlight fundamental challenges in numerical and time-series reasoning for current LLMs and motivate future research in financial intelligence.
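
As a rough illustration of the calibration-then-scaling framework the abstract describes, the sketch below chains its stages: expert seeds, multi-model generation, intra-model self-filtering, numerical auditing, and a human-LLM judge-alignment gate. Every signature, the callable interfaces, and the 0.9 agreement threshold are assumptions made for illustration, not details from the paper.

```python
from typing import Callable


def calibration_then_scaling(
    seed_questions: list[str],
    generators: list[Callable[[str], list[str]]],  # one per generator model
    self_consistent: Callable[[str], bool],        # intra-model self-filtering
    audit_numbers: Callable[[str], bool],          # recompute answers from raw data
    judge_human_agreement: float,                  # measured on a calibration subset
    min_agreement: float = 0.9,                    # assumed threshold
) -> list[str]:
    """Hypothetical pipeline: calibrate the LLM judge first, then scale."""
    # Calibration gate: only trust the LLM judge if its verdicts agree
    # with human annotators on the expert-labeled seed set.
    if judge_human_agreement < min_agreement:
        raise ValueError("LLM judge disagrees with human raters; recalibrate")

    # Scaling: expand expert seed questions with multiple generator models.
    candidates: list[str] = []
    for generate in generators:
        for seed in seed_questions:
            candidates.extend(generate(seed))

    # Self-filtering, then numerical auditing against source data.
    candidates = [q for q in candidates if self_consistent(q)]
    return [q for q in candidates if audit_numbers(q)]
```

A toy invocation with stub callables shows the intended flow:

```python
kept = calibration_then_scaling(
    seed_questions=["What was the company's FY2020 revenue?"],
    generators=[lambda s: [s, s + " (year-over-year)"]],
    self_consistent=lambda q: True,
    audit_numbers=lambda q: "year-over-year" not in q,
    judge_human_agreement=0.93,
)
# kept == ["What was the company's FY2020 revenue?"]
```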
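Similarly, the abstract's comparison of zero-shot and retrieval-augmented settings can be pictured as a single evaluation loop that optionally prepends retrieved context. The prompt template and the retriever interface are assumptions, and `BenchmarkQuestion` refers to the illustrative schema sketched after the key points.

```python
from typing import Callable, Optional


def evaluate(
    model: Callable[[str], str],  # the LLM as a prompt-to-answer callable
    questions: list[BenchmarkQuestion],
    retriever: Optional[Callable[[BenchmarkQuestion], list[str]]] = None,
) -> float:
    """Exact-match accuracy under zero-shot or retrieval-augmented prompting."""
    correct = 0
    for q in questions:
        prompt = q.question  # zero-shot: the bare question
        if retriever is not None:
            # Retrieval-augmented: prepend filing excerpts and/or
            # precomputed trading-signal tables as context.
            context = "\n".join(retriever(q))
            prompt = f"Context:\n{context}\n\nQuestion: {q.question}"
        correct += int(model(prompt).strip() == q.reference_answer)
    return correct / len(questions)
```

Under this framing, the paper's headline finding is that supplying the retriever lifts accuracy mainly on fundamentals-focused items, while trading-signal items remain difficult either way.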