From Comprehension to Reasoning: A Hierarchical Benchmark for Automated Financial Research Reporting

arXiv cs.CL / 3/23/2026


Key Points

  • FinReasoning introduces a hierarchical benchmark for automated financial research report generation, aligning with real analyst workflows to assess semantic consistency, data alignment, and deep insight.
  • It highlights current LLMs' failures in factual accuracy, numerical consistency, and structured data formatting, creating risks in financial evaluations.
  • The evaluation framework includes a fine-grained 12-indicator rubric and stronger hallucination-correction metrics to diagnose analytical bottlenecks.
  • Results show an understanding-execution gap among models, and no model dominates across all tracks, with Doubao-Seed-1.8, GPT-5, and Kimi-K2 leading overall.
  • The FinReasoning resource is available on GitHub, enabling researchers to use and extend the benchmark.

Abstract

Large language models (LLMs) are increasingly used to generate financial research reports, shifting from auxiliary analytic tools to primary content producers. Yet recent real-world deployments reveal persistent failures--factual errors, numerical inconsistencies, fabricated references, and shallow analysis--that can distort assessments of corporate fundamentals and ultimately trigger severe economic losses. However, existing financial benchmarks focus on comprehension of completed reports rather than evaluating whether a model can produce reliable analysis. Moreover, current evaluation frameworks merely flag hallucinations and lack structured measures of deeper analytical skills, leaving key analytical bottlenecks undiscovered. To address these gaps, we introduce FinReasoning, a benchmark that decomposes Chinese research-report generation into three stages aligned with real analyst workflows, assessing semantic consistency, data alignment, and deep insight. We further propose a fine-grained evaluation framework that strengthens hallucination-correction assessment and incorporates a 12-indicator rubric for core analytical skills. Based on the evaluation results, FinReasoning reveals that most models exhibit an understanding-execution gap: they can identify errors but struggle to generate accurate corrections, and they can retrieve data but have difficulty returning it in the correct format. Furthermore, no model achieves overwhelming superiority across all three tracks; Doubao-Seed-1.8, GPT-5, and Kimi-K2 rank as the top three in overall performance, yet each exhibits a distinct capability distribution. The evaluation resource is available at https://github.com/TongjiFinLab/FinReasoning.