IndiaFinBench: An Evaluation Benchmark for Large Language Model Performance on Indian Financial Regulatory Text

arXiv cs.CL · April 22, 2026


Key Points

  • The paper introduces IndiaFinBench, a new public evaluation benchmark designed to measure large language model (LLM) performance on Indian financial regulatory text, a gap left by prior Western-only benchmarks.
  • The benchmark includes 406 expert-annotated question-answer pairs drawn from 192 SEBI and RBI documents, covering four task types: regulatory interpretation, numerical reasoning, contradiction detection, and temporal reasoning.
  • Annotation quality is supported by both model-based validation (kappa=0.918 for contradiction detection) and a human inter-annotator agreement study (kappa=0.611; 76.7% overall agreement).
  • In zero-shot evaluations of twelve models, accuracy ranges from 70.4% (Gemma 4 E4B) to 89.7% (Gemini 2.5 Flash), with all models outperforming a non-specialist human baseline of 60.0%.
  • Numerical reasoning shows the strongest differentiation across models, and bootstrap significance testing identifies three statistically distinct performance tiers; the dataset, evaluation code, and outputs are released on GitHub.
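The bootstrap significance test behind the three performance tiers can be sketched as a paired resampling procedure over per-item correctness. The sketch below is illustrative, not the paper's released code: the function name and the synthetic 0/1 correctness vectors are assumptions, chosen only to mirror the reported 89.7% vs. 70.4% accuracies on 406 items.

```python
import random

def bootstrap_accuracy_diff(correct_a, correct_b, n_resamples=10_000, seed=0):
    """Paired bootstrap test for the accuracy gap between two models.

    correct_a, correct_b: per-item 0/1 correctness over the same benchmark items.
    Returns a one-sided bootstrap p-value: the fraction of resamples in which
    model A does NOT outscore model B.
    """
    rng = random.Random(seed)
    n = len(correct_a)
    not_better = 0
    for _ in range(n_resamples):
        # Resample item indices with replacement (paired: same items for both models).
        idx = [rng.randrange(n) for _ in range(n)]
        diff = sum(correct_a[i] - correct_b[i] for i in idx)
        if diff <= 0:
            not_better += 1
    return not_better / n_resamples

# Hypothetical per-item results over a 406-item benchmark,
# roughly matching the reported top (89.7%) and bottom (70.4%) accuracies.
a = [1] * 364 + [0] * 42
b = [1] * 286 + [0] * 120
p = bootstrap_accuracy_diff(a, b)
```

Two models land in the same tier when this p-value fails to clear the significance threshold; pairwise tests across all twelve models then partition them into statistically distinct groups.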

Abstract

We introduce IndiaFinBench, to our knowledge the first publicly available evaluation benchmark for assessing large language model (LLM) performance on Indian financial regulatory text. Existing financial NLP benchmarks draw exclusively from Western financial corpora (SEC filings, US earnings reports, and English-language financial news), leaving a significant gap in coverage of non-Western regulatory frameworks. IndiaFinBench addresses this gap with 406 expert-annotated question-answer pairs drawn from 192 documents sourced from the Securities and Exchange Board of India (SEBI) and the Reserve Bank of India (RBI), spanning four task types: regulatory interpretation (174 items), numerical reasoning (92 items), contradiction detection (62 items), and temporal reasoning (78 items). Annotation quality is validated through a model-based secondary pass (kappa=0.918 on contradiction detection) and a 60-item human inter-annotator agreement evaluation (kappa=0.611; 76.7% overall agreement). We evaluate twelve models under zero-shot conditions, with accuracy ranging from 70.4% (Gemma 4 E4B) to 89.7% (Gemini 2.5 Flash). All models substantially outperform a non-specialist human baseline of 60.0%. Numerical reasoning is the most discriminative task, with a 35.9 percentage-point spread across models. Bootstrap significance testing (10,000 resamples) reveals three statistically distinct performance tiers. The dataset, evaluation code, and all model outputs are available at https://github.com/rajveerpall/IndiaFinBench.
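The kappa figures quoted above are Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch of the computation, with a tiny made-up annotation example (the labels below are illustrative, not drawn from the benchmark):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items where the annotators match.
    observed = sum(x == y for x, y in zip(labels_a, labels_b)) / n
    # Chance agreement: probability both pick the same label independently,
    # given each annotator's marginal label frequencies.
    counts_a = Counter(labels_a)
    counts_b = Counter(labels_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy example: 5 items, two annotators, 80% raw agreement.
ann1 = ["yes", "yes", "yes", "no", "no"]
ann2 = ["yes", "yes", "no", "no", "no"]
k = cohens_kappa(ann1, ann2)  # ~0.615
```

On this toy data the raw agreement is 0.8 but kappa is only about 0.615, which illustrates why the paper reports both numbers (kappa=0.611 alongside 76.7% overall agreement) for the human study.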