Document-Level Numerical Reasoning across Single and Multiple Tables in Financial Reports

arXiv cs.CL / 4/7/2026


Key Points

  • The paper argues that while LLMs are strong at language understanding, they still struggle with reliable numerical QA over long, structured financial documents, especially when evidence must be combined across multiple tables and text.
  • It introduces FinLongDocQA, a new dataset covering both single-table and cross-table document-level numerical reasoning in long-context financial annual reports.
  • The evaluation finds two key bottlenecks for current LLMs: many annual reports exceed 129k tokens (making relevant table retrieval harder due to context rot) and multi-step arithmetic remains error-prone even after evidence is found.
  • To improve reliability, the authors propose FinLongDocAgent, a Multi-Agent, Multi-Round RAG system that iteratively retrieves evidence, executes intermediate calculations, and verifies results across multiple rounds.
  • Experiments emphasize that iterative retrieval combined with verification can substantially improve accuracy for numerical QA in long financial documents.
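The iterative retrieve–calculate–verify loop described above can be sketched in miniature. This is a hedged illustration only: the function names, the toy evidence store, and the gross-margin question are all assumptions for demonstration, not the paper's actual FinLongDocAgent implementation (which uses LLM-backed agents over real report chunks).

```python
# Hypothetical sketch of a multi-round retrieve -> calculate -> verify loop
# in the spirit of FinLongDocAgent. All names and the toy "document" are
# illustrative assumptions, not the paper's implementation.

MAX_ROUNDS = 3

# Toy evidence store standing in for chunked annual-report tables.
DOC_CHUNKS = {
    "income_statement": {"revenue": 500.0, "cost_of_sales": 300.0},
    "balance_sheet": {"total_assets": 2000.0},
}

def retrieve(query, seen):
    """Retriever agent: return the next unseen chunk whose fields overlap the query."""
    for name, table in DOC_CHUNKS.items():
        if name not in seen and any(field in query for field in table):
            return name, table
    return None, {}

def calculate(evidence):
    """Calculator agent: derive gross margin once both operands are available."""
    if "revenue" in evidence and "cost_of_sales" in evidence:
        return (evidence["revenue"] - evidence["cost_of_sales"]) / evidence["revenue"]
    return None  # evidence still incomplete; trigger another retrieval round

def verify(answer, evidence):
    """Verifier agent: recompute independently and sanity-check the range."""
    if answer is None:
        return False
    expected = 1.0 - evidence["cost_of_sales"] / evidence["revenue"]
    return abs(answer - expected) < 1e-9 and 0.0 <= answer <= 1.0

def answer_question(query):
    evidence, seen = {}, set()
    for _ in range(MAX_ROUNDS):                # multi-round loop
        name, table = retrieve(query, seen)    # gather more evidence each round
        if name:
            seen.add(name)
            evidence.update(table)
        answer = calculate(evidence)           # intermediate calculation
        if verify(answer, evidence):           # stop only once verification passes
            return answer
    return None                                # abstain if rounds are exhausted

print(answer_question("gross margin from revenue and cost_of_sales"))
```

Here the loop terminates in one round because a single table supplies both operands; cross-table questions would exercise multiple retrieval rounds before the calculator and verifier agree, which is the regime where the paper reports iterative retrieval plus verification paying off.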

Abstract

Despite the strong language understanding abilities of large language models (LLMs), they still struggle with reliable question answering (QA) over long, structured documents, particularly for numerical reasoning. Financial annual reports exemplify this difficulty: financial statement analysis often hinges on accurate arithmetic, and analysts derive key indicators by integrating evidence scattered across multiple tables and narrative text. However, existing benchmarks focus largely on single-table settings, leaving cross-table document-level numerical reasoning underexplored. To address this gap, we introduce FinLongDocQA, a dataset for both single-table and cross-table financial numerical reasoning in long-context reports. Evaluating both closed-source and open-source LLMs on FinLongDocQA reveals two bottlenecks: (1) annual reports often exceed 129k tokens, exacerbating the context rot problem for locating relevant tables; and (2) even when relevant evidence is located, LLMs remain prone to errors in multi-step numerical reasoning. We propose FinLongDocAgent, a Multi-Agent Multi-Round Retrieval-Augmented Generation (RAG) approach that iteratively retrieves evidence, performs intermediate calculations, and verifies results across rounds. Experiments highlight the importance of iterative retrieval and verification for reliable numerical QA in long financial documents.