Purging the Gray Zone: Latent-Geometric Denoising for Precise Knowledge Boundary Awareness

arXiv cs.CL / 4/17/2026


Key Points

  • The paper addresses LLM hallucinations by focusing on the model’s difficulty in recognizing its own knowledge boundaries, especially near decision boundaries.
  • It argues that a “gray zone” exists close to the decision hyperplane where internal belief ambiguity, not merely label noise, is the core performance bottleneck, leading to excessive abstention or hallucination.
  • The authors propose GeoDe (Geometric Denoising) for abstention fine-tuning, building a truth hyperplane via linear probes and using geometric distance to estimate confidence for abstention.
  • Their experiments on models such as Llama3 and Qwen3 across TriviaQA, NQ, SciQ, and SimpleQA show improved truthfulness and strong out-of-distribution generalization.
  • The project provides an implementation at the linked GitHub repository, enabling others to reproduce and build on the method.

Abstract

Large language models (LLMs) often exhibit hallucinations due to their inability to accurately perceive their own knowledge boundaries. Existing abstention fine-tuning methods typically partition datasets directly based on response accuracy, causing models to suffer from severe label noise near the decision boundaries and consequently exhibit high rates of abstention or hallucination. This paper adopts a latent space representation perspective, revealing a "gray zone" near the decision hyperplane where internal belief ambiguity constitutes the core performance bottleneck. Based on this insight, we propose the **GeoDe** (**Geo**metric **De**noising) framework for abstention fine-tuning. This method constructs a truth hyperplane using linear probes and performs "geometric denoising" by employing geometric distance as a confidence signal for abstention decisions. This approach filters out ambiguous boundary samples while retaining high-fidelity signals for fine-tuning. Experiments across multiple models (Llama3, Qwen3) and benchmark datasets (TriviaQA, NQ, SciQ, SimpleQA) demonstrate that GeoDe significantly enhances model truthfulness and generalizes strongly in out-of-distribution (OOD) scenarios. Code is available at https://github.com/Notbesidemoon/GeoDe.
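The core geometric idea can be sketched in a few lines: fit a linear probe that separates "known" from "unknown" representations, use each sample's distance to the resulting hyperplane as a confidence signal, and drop samples inside a gray-zone margin before fine-tuning. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the Gaussian features stand in for real LLM hidden states, and the margin `tau` is an arbitrary hypothetical threshold, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for hidden-state features of "known" (y=1) vs
# "unknown" (y=0) answers. Real GeoDe-style probing would use
# LLM activations; these Gaussians are only for illustration.
X = np.vstack([rng.normal(+1.0, 1.5, size=(200, 8)),
               rng.normal(-1.0, 1.5, size=(200, 8))])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Fit a linear probe (logistic regression via plain gradient descent).
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30)))
    w -= 0.1 * (X.T @ (p - y) / len(y))
    b -= 0.1 * np.mean(p - y)

# Signed distance of each sample to the probe's decision hyperplane,
# used here as a geometric confidence signal.
dist = (X @ w + b) / np.linalg.norm(w)

# "Geometric denoising": discard gray-zone samples within margin tau
# of the hyperplane; keep only confident samples for fine-tuning.
tau = 0.5  # hypothetical margin, chosen for this toy example
keep = np.abs(dist) > tau
X_clean, y_clean = X[keep], y[keep]
print(f"kept {keep.sum()} of {len(y)} samples")
```

In this framing, confident samples far from the hyperplane supply clean supervision (answer vs. abstain), while samples near it are treated as ambiguous belief states and filtered out rather than forced into either label.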