Self-Debias: Self-correcting for Debiasing Large Language Models

arXiv cs.CL / 4/10/2026


Key Points

  • The paper identifies a “Bias Propagation” problem in LLM chain-of-thought reasoning, where social biases, once triggered, cascade through subsequent reasoning steps.
  • It proposes Self-Debias, a progressive, intrinsic self-correction framework that reallocates probability mass from biased heuristics toward unbiased reasoning paths.
  • Unlike broad penalty-based preference optimization, Self-Debias uses a fine-grained trajectory-level objective with dynamic debiasing constraints to revise biased reasoning suffixes while keeping correct context prefixes.
  • The method includes an online self-improvement loop via consistency filtering to automatically generate supervision signals, enabling stronger performance with only ~20k annotated samples and without continuous external oversight.
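The consistency-filtering idea in the last point can be illustrated with a minimal sketch. The paper does not publish this code; the function name, the 75% agreement threshold, and the majority-vote criterion are all illustrative assumptions about how traces sampled for the same prompt might be kept or discarded as self-supervision signals.

```python
from collections import Counter

def consistency_filter(samples, min_agreement=0.75):
    """Keep sampled (reasoning_trace, final_answer) pairs only when a
    clear majority of answers agree; the agreeing traces then serve as
    synthesized supervision signals (hypothetical sketch, not the
    paper's implementation)."""
    answers = Counter(ans for _, ans in samples)
    top_answer, count = answers.most_common(1)[0]
    if count / len(samples) < min_agreement:
        return []  # no consensus: discard rather than self-train on noise
    # Retain only the traces consistent with the majority answer.
    return [(trace, ans) for trace, ans in samples if ans == top_answer]

# Four traces sampled for one prompt; one biased shortcut disagrees.
samples = [
    ("step A -> step B", "yes"),
    ("step A -> step C", "yes"),
    ("step A -> step B", "yes"),
    ("biased shortcut", "no"),
]
kept = consistency_filter(samples)  # the three agreeing traces survive
```

Filtering by answer agreement lets the loop generate training pairs without external labels, which is consistent with the claim of avoiding continuous external oversight.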

Abstract

Although Large Language Models (LLMs) demonstrate remarkable reasoning capabilities, inherent social biases often cascade throughout the Chain-of-Thought (CoT) process, leading to continuous "Bias Propagation". Existing debiasing methods primarily focus on static constraints or external interventions, failing to identify and interrupt this propagation once triggered. To address this limitation, we introduce Self-Debias, a progressive framework designed to instill intrinsic self-correction capabilities. Specifically, we reformulate the debiasing process as a strategic resource redistribution problem, treating the model's output probability mass as a limited resource to be reallocated from biased heuristics to unbiased reasoning paths. Unlike standard preference optimization which applies broad penalties, Self-Debias employs a fine-grained trajectory-level objective subject to dynamic debiasing constraints. This enables the model to selectively revise biased reasoning suffixes while preserving valid contextual prefixes. Furthermore, we integrate an online self-improvement mechanism utilizing consistency filtering to autonomously synthesize supervision signals. With merely 20k annotated samples, Self-Debias activates efficient self-correction, achieving superior debiasing performance while preserving general reasoning capabilities without continuous external oversight.
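The abstract's "revise biased reasoning suffixes while preserving valid contextual prefixes" can be sketched as a simple split-and-regenerate step. Everything here is an assumption for illustration: `is_biased` stands in for whatever bias signal the method learns, and `regenerate` stands in for conditional resampling from the model given the clean prefix.

```python
def revise_suffix(steps, is_biased, regenerate):
    """Split a chain-of-thought at the first biased step: the clean
    prefix is kept verbatim and only the suffix is regenerated.
    `is_biased` and `regenerate` are hypothetical stand-ins for a
    learned bias detector and the model's conditional resampling."""
    for i, step in enumerate(steps):
        if is_biased(step):
            return steps[:i] + regenerate(steps[:i])
    return steps  # no bias detected: trajectory unchanged

# Toy trajectory where the third step introduces a biased heuristic.
trace = ["parse question", "note group X", "assume group X is worse", "answer: X"]
flagged = {"assume group X is worse", "answer: X"}
revised = revise_suffix(
    trace,
    is_biased=lambda s: s in flagged,
    regenerate=lambda prefix: ["use stated evidence only",
                               "answer: insufficient info"],
)
```

Operating at this trajectory level, rather than penalizing the whole output, is what distinguishes the fine-grained objective from broad penalty-based preference optimization: probability mass is moved away from the biased suffix alone, leaving the valid prefix untouched.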