EvidenceRL: Reinforcing Evidence Consistency for Trustworthy Language Models

arXiv cs.CL / 3/23/2026


Key Points

  • EvidenceRL introduces a reinforcement learning framework that enforces evidence adherence during training to reduce hallucinations in LLMs, targeting high-stakes domains.
  • The framework scores candidate responses for grounding (entailment with retrieved evidence and context) and correctness (agreement with reference answers) and optimizes the generator using Group Relative Policy Optimization (GRPO).
  • In cardiac diagnosis, F1@3 rose from 37.0 to 54.5 on Llama-3.2-3B, grounding (G_max@3) improved from 47.6 to 78.2, hallucinations dropped nearly 5×, and evidence-supported diagnoses increased from 31.8% to 61.6%.
  • In legal reasoning, Faithfulness increased from 32.8% to 67.6% on Llama-3.1-8B, showing consistent improvements across domains.
  • The authors have open-sourced the code on GitHub.

Abstract

Large Language Models (LLMs) are fluent but prone to hallucinations, producing answers that appear plausible yet are unsupported by available evidence. This failure is especially problematic in high-stakes domains where decisions must be justified by verifiable information. We introduce **EvidenceRL**, a reinforcement learning framework that enforces evidence adherence during training. EvidenceRL scores candidate responses for grounding (entailment with retrieved evidence and context) and correctness (agreement with reference answers) and optimizes the generator using Group Relative Policy Optimization (GRPO). We evaluate across two high-stakes domains, cardiac diagnosis and legal reasoning, where EvidenceRL consistently improves evidence grounding and faithfulness without sacrificing task accuracy. On cardiac diagnosis, F1@3 increases from 37.0 to 54.5 on Llama-3.2-3B while grounding (G_max@3) rises from 47.6 to 78.2; hallucinations drop nearly 5× and evidence-supported diagnoses increase from 31.8% to 61.6%. On legal reasoning, EvidenceRL raises Faithfulness from 32.8% to 67.6% on Llama-3.1-8B, demonstrating consistent behavioral change across domains. Our code is open-sourced at https://github.com/Wizaaard/EvidenceRL.git.
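To make the training signal concrete: GRPO samples a group of candidate answers per prompt, assigns each a scalar reward, and normalizes rewards within the group to obtain critic-free advantages. The sketch below illustrates that group-relative step with a reward that mixes a grounding score and a correctness score, as the abstract describes. All function names, the score ranges, and the equal weighting are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch of a GRPO-style advantage computation combining grounding and
# correctness rewards. Names and weights here are illustrative assumptions.
from statistics import mean, pstdev

def group_relative_advantages(grounding, correctness, w_ground=0.5, w_correct=0.5):
    """Combine per-candidate grounding/correctness scores (assumed in [0, 1])
    into scalar rewards, then normalize within the sampled group -- the core
    GRPO idea: advantages are relative to the group, with no value critic."""
    rewards = [w_ground * g + w_correct * c for g, c in zip(grounding, correctness)]
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # guard against identical rewards
    return [(r - mu) / sigma for r in rewards]

# Example: four sampled answers for one prompt, scored by hypothetical
# entailment (grounding) and answer-agreement (correctness) judges.
adv = group_relative_advantages(
    grounding=[0.9, 0.2, 0.7, 0.4],
    correctness=[1.0, 0.0, 1.0, 0.5],
)
# Candidates with above-average combined reward receive positive advantages,
# which a policy-gradient update would then reinforce.
```

In the actual framework the grounding score would come from an entailment check against the retrieved evidence and the correctness score from agreement with the reference answer; this sketch only shows how such scores plug into a group-relative update.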