KARL: Mitigating Hallucinations in LLMs via Knowledge-Boundary-Aware Reinforcement Learning

arXiv cs.LG · 28 Apr 2026


Key Points

  • The paper argues that hallucination mitigation for LLMs requires not only learning to abstain, but doing so in a way that respects the model’s actual knowledge boundary.
  • It introduces KARL, which estimates the LLM’s knowledge boundary online using within-group response statistics and uses a Knowledge-Boundary-Aware Reward to encourage accurate answers or appropriate abstentions.
  • KARL also includes a two-stage RL training strategy that first explores the knowledge boundary to avoid an “abstention trap,” and then converts incorrect answers outside the boundary into abstentions without sacrificing accuracy.
  • Experiments across multiple benchmarks show KARL improves the accuracy–hallucination trade-off and suppresses hallucinations without degrading performance for both in-distribution and out-of-distribution cases.
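To make the core idea concrete, here is a minimal sketch of boundary-aware reward assignment using within-group response statistics. All specifics (the `ABSTAIN` token, the threshold `tau`, and the reward values) are illustrative assumptions, not the paper's exact formulation:

```python
# Illustrative sketch (assumed details, not KARL's exact reward): sample a
# group of responses per question, estimate whether the question lies inside
# the model's knowledge boundary from the group's empirical accuracy, then
# reward answers and abstentions accordingly.

def group_accuracy(responses, is_correct):
    """Fraction of the sampled group that answered correctly (abstentions count as not correct)."""
    graded = [is_correct(r) for r in responses if r != "ABSTAIN"]
    return sum(graded) / len(responses) if responses else 0.0

def boundary_aware_reward(response, responses, is_correct, tau=0.3):
    """Reward one response using the group's empirical accuracy.

    If group accuracy >= tau, the question is treated as inside the knowledge
    boundary: correct answers are rewarded and abstention is discouraged.
    Otherwise, abstention is rewarded and confident wrong answers are
    penalized most heavily.
    """
    inside = group_accuracy(responses, is_correct) >= tau
    if response == "ABSTAIN":
        return -0.5 if inside else 1.0   # abstain only outside the boundary
    if is_correct(response):
        return 1.0
    return -0.5 if inside else -1.0      # hallucination penalty

# Toy usage: a group of four sampled answers to one question
group = ["42", "41", "ABSTAIN", "42"]
correct = lambda r: r == "42"
print(boundary_aware_reward("ABSTAIN", group, correct))  # abstained inside the boundary
```

Because the boundary estimate is recomputed from each group of on-policy samples, it tracks the policy as it improves during training, which is the sense in which the estimation is "online."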

Abstract

Enabling large language models (LLMs) to appropriately abstain from answering questions beyond their knowledge is crucial for mitigating hallucinations. While existing reinforcement learning methods foster autonomous abstention, they often compromise answer accuracy because their static reward mechanisms, agnostic to models' knowledge boundaries, drive models toward excessive caution. In this work, we propose KARL, a novel framework that continuously aligns an LLM's abstention behavior with its evolving knowledge boundary. KARL introduces two core innovations: a Knowledge-Boundary-Aware Reward that performs online knowledge boundary estimation using within-group response statistics, dynamically rewarding correct answers or guided abstention; and a Two-Stage RL Training Strategy that first explores the knowledge boundary and bypasses the "abstention trap", and subsequently converts incorrect answers beyond the knowledge boundary into abstentions without sacrificing accuracy. Extensive experiments on multiple benchmarks demonstrate that KARL achieves a superior accuracy-hallucination trade-off, effectively suppressing hallucinations while maintaining high accuracy across both in-distribution and out-of-distribution scenarios.
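The two-stage strategy in the abstract can be sketched as a simple reward schedule. The stage split, the neutral stage-1 abstention reward, and the specific reward values below are assumptions for illustration only:

```python
# Toy two-stage reward schedule (assumed details, not the paper's recipe).
# Stage 1 rewards only correctness and keeps abstention neutral, so the
# policy keeps answering while group statistics map out the knowledge
# boundary; Stage 2 switches to a boundary-aware reward that also credits
# abstention on questions the statistics place outside the boundary.

def stage1_reward(response, is_correct):
    if response == "ABSTAIN":
        return 0.0                       # neutral: avoid the "abstention trap"
    return 1.0 if is_correct(response) else -1.0

def stage2_reward(response, group_acc, is_correct, tau=0.3):
    inside = group_acc >= tau            # within-group boundary estimate
    if response == "ABSTAIN":
        return -0.5 if inside else 1.0
    if is_correct(response):
        return 1.0
    return -0.5 if inside else -1.0

def reward(step, total_steps, response, group_acc, is_correct):
    """Switch reward functions halfway through training (assumed split)."""
    if step < total_steps // 2:
        return stage1_reward(response, is_correct)
    return stage2_reward(response, group_acc, is_correct)
```

The point of the staging is ordering: only after the policy has been pushed to answer (and its boundary has been mapped) does the reward start converting the remaining wrong answers outside that boundary into abstentions, so accuracy inside the boundary is not traded away for caution.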