FinGround: Detecting and Grounding Financial Hallucinations via Atomic Claim Verification

arXiv cs.AI / April 28, 2026


Key Points

  • The paper argues that financial LLMs can produce hallucinations with direct regulatory consequences, such as fabricated metrics, invented citations, and incorrect derived calculations, a risk that grows as the EU AI Act's high-risk enforcement deadline (August 2026) approaches.
  • It presents FinGround, a three-stage verify-then-ground pipeline for financial document QA that uses hybrid retrieval over text and tables, decomposes answers into atomic claims, and verifies them with finance-aware, type-routed methods including formula reconstruction.
  • FinGround rewrites unsupported claims with precise citations at the paragraph and table-cell level, improving grounding rather than merely detecting errors.
  • The authors introduce retrieval-equalized evaluation to isolate the contribution of verification from retrieval quality, reporting that FinGround still cuts hallucination rates by 68% under identical retrieval conditions.
  • Results include a 78% reduction in hallucinations versus GPT-4o with the full pipeline, and an 8B distilled detector that retains 91.4% F1 at 18x lower per-claim latency, enabling low-cost deployment; a four-week analyst pilot provides supporting qualitative evidence.
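The three-stage verify-then-ground flow described above can be sketched as a minimal loop. Everything here is illustrative: the `Claim` type, the routing, and the citation placeholders are assumptions, since the paper only describes the stages at a high level and retrieval is stubbed out entirely.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    claim_type: str  # e.g. "metric", "citation", "computation" (hypothetical labels)

def verify(claim: Claim, evidence: set) -> bool:
    # Stand-in for type-routed verification (Stage 2): computational
    # claims would be re-derived from tables; here we only check
    # whether the claim text appears in the retrieved evidence.
    return claim.text in evidence

def ground(claims: list, evidence: set) -> list:
    # Stage 3 analogue: keep supported claims with a citation marker,
    # flag unsupported ones instead of silently emitting them.
    out = []
    for c in claims:
        if verify(c, evidence):
            out.append(f"{c.text} [cited]")
        else:
            out.append(f"[UNSUPPORTED: {c.text}]")
    return out

claims = [Claim("Revenue was $10M", "metric"),
          Claim("Net margin was 99%", "computation")]
evidence = {"Revenue was $10M"}  # pretend output of Stage 1 retrieval
print(ground(claims, evidence))
```

The point of the sketch is the control flow: every atomic claim is verified before it can reach the final answer, so an unsupported claim is rewritten or flagged rather than passed through.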

Abstract

Financial AI systems must produce answers grounded in specific regulatory filings, yet current LLMs fabricate metrics, invent citations, and miscalculate derived quantities. These errors carry direct regulatory consequences as the EU AI Act's high-risk enforcement deadline approaches (August 2026). Existing hallucination detectors treat all claims uniformly, missing 43% of computational errors that require arithmetic re-verification against structured tables. We present FinGround, a three-stage verify-then-ground pipeline for financial document QA. Stage 1 performs finance-aware hybrid retrieval over text and tables. Stage 2 decomposes answers into atomic claims classified by a six-type financial taxonomy and verified with type-routed strategies including formula reconstruction. Stage 3 rewrites unsupported claims with paragraph- and table-cell-level citations. To cleanly isolate verification value from retrieval quality, we propose retrieval-equalized evaluation as standard methodology for RAG verification research: when all systems receive identical retrieval, FinGround still reduces hallucination rates by 68% over the strongest baseline (p < 0.01). The full pipeline achieves a 78% reduction relative to GPT-4o. An 8B distilled detector retains 91.4% F1 at 18x lower per-claim latency, enabling $0.003/query deployment, supported by qualitative signals from a four-week analyst pilot.
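The abstract notes that many computational errors require arithmetic re-verification against structured tables (the "formula reconstruction" strategy). A minimal illustration of that idea, under assumptions of my own (the table values, the net-margin formula, and the tolerance are invented; the paper only names the technique):

```python
def reconstruct_and_check(table: dict, claimed_margin: float, tol: float = 0.005) -> bool:
    """Recompute net margin = net_income / revenue from the source
    table cells and compare against the model's claimed value."""
    derived = table["net_income"] / table["revenue"]
    return abs(derived - claimed_margin) <= tol

# Hypothetical extracted table cells, values in $M.
table = {"revenue": 4_000.0, "net_income": 520.0}

print(reconstruct_and_check(table, 0.13))  # 520 / 4000 = 0.13 → True
print(reconstruct_and_check(table, 0.21))  # claim contradicts table → False
```

A detector that only checks textual entailment would miss the second case when the answer's wording looks plausible; re-deriving the number from table cells catches it directly.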