ESG-Bench: Benchmarking Long-Context ESG Reports for Hallucination Mitigation

arXiv cs.CL / March 16, 2026


Key Points

  • ESG-Bench introduces a benchmark dataset for understanding ESG reports and mitigating hallucinations in large language models (LLMs).
  • The dataset provides human-annotated question-answer pairs grounded in real ESG report contexts, with fine-grained labels indicating whether outputs are factually supported or hallucinated (see the sketch after this list).
  • The work frames ESG analysis as a verifiable QA task, developing task-specific Chain-of-Thought (CoT) prompting strategies and fine-tuning LLMs on CoT-annotated rationales.
  • Experiments show CoT-based methods substantially reduce hallucinations and outperform standard prompting and direct fine-tuning, with gains transferring to QA benchmarks beyond ESG.
  • This benchmark enables scalable, trustworthy analysis in compliance-critical settings and advances evaluation of LLMs’ ability to extract and reason over ESG content.
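
To make the QA framing concrete, here is a minimal sketch of how one benchmark record and a hallucination-rate metric could be represented. The field names and the binary supported/hallucinated judgment below are illustrative assumptions; this summary does not specify ESG-Bench's actual schema or label taxonomy.

```python
from dataclasses import dataclass

@dataclass
class ESGQARecord:
    """Hypothetical record layout; ESG-Bench's real schema is not given here."""
    question: str          # analyst-style question about a disclosure
    context: str           # ESG report excerpt the QA pair is grounded in
    reference_answer: str  # human-annotated answer
    label: str             # fine-grained label, e.g. "supported" or "hallucinated"

def hallucination_rate(judgments: list[bool]) -> float:
    """Fraction of model outputs judged unsupported by the report context.

    `judgments[i]` is True when output i is factually supported (assumed
    binary interface; the paper's fine-grained labels may be richer).
    """
    if not judgments:
        return 0.0
    return sum(not supported for supported in judgments) / len(judgments)
```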

Abstract

As corporate responsibility increasingly incorporates environmental, social, and governance (ESG) criteria, ESG reporting is becoming a legal requirement in many regions and a key channel for documenting sustainability practices and assessing firms' long-term and ethical performance. However, the length and complexity of ESG disclosures make them difficult to interpret and their analysis hard to automate reliably. To support scalable and trustworthy analysis, this paper introduces ESG-Bench, a benchmark dataset for ESG report understanding and hallucination mitigation in large language models (LLMs). ESG-Bench contains human-annotated question-answer (QA) pairs grounded in real-world ESG report contexts, with fine-grained labels indicating whether model outputs are factually supported or hallucinated. Framing ESG report analysis as a QA task with verifiability constraints enables systematic evaluation of LLMs' ability to extract and reason over ESG content and provides a new use case: mitigating hallucinations in socially sensitive, compliance-critical settings. We design task-specific Chain-of-Thought (CoT) prompting strategies and fine-tune multiple state-of-the-art LLMs on ESG-Bench using CoT-annotated rationales. Our experiments show that these CoT-based methods substantially outperform standard prompting and direct fine-tuning in reducing hallucinations, and that the gains transfer to existing QA benchmarks beyond the ESG domain.
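
The abstract does not reproduce the actual prompting templates or training format, so the sketch below is only a plausible rendering of the two ingredients it names: a task-specific CoT prompt that forces the model to cite evidence before answering, and a fine-tuning example whose target includes the CoT rationale. The template wording, step structure, and `prompt`/`completion` field names are assumptions, not the authors' method.

```python
COT_TEMPLATE = """You are auditing an ESG report. Answer strictly from the excerpt.

Excerpt:
{context}

Question: {question}

Reason step by step:
1. Quote the sentence(s) from the excerpt that bear on the question.
2. State what they do and do not establish.
3. Answer only if step 1 supports it; otherwise reply "not stated in the report".

Final answer:"""

def build_cot_prompt(question: str, context: str) -> str:
    # Task-specific CoT: ground every step in quoted evidence so that
    # unsupported claims are easy to flag during verification.
    return COT_TEMPLATE.format(question=question, context=context)

def to_sft_example(question: str, context: str,
                   rationale: str, answer: str) -> dict:
    # Supervised fine-tuning pair: the CoT rationale is part of the target,
    # so the model learns to produce grounded reasoning before answering.
    return {
        "prompt": build_cot_prompt(question, context),
        "completion": f"{rationale}\nFinal answer: {answer}",
    }
```

One design note: keeping the rationale inside the fine-tuning target, rather than training directly on question-to-answer pairs, is what separates the CoT fine-tuning the abstract describes from the "direct fine-tuning" baseline it compares against.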