AI Navigate

UGID: Unified Graph Isomorphism for Debiasing Large Language Models

arXiv cs.CL / 3/20/2026


Key Points

  • UGID models the Transformer as a structured computational graph where attention routing defines edges and hidden states define nodes to debias large language models at the internal-representation level.
  • Debiasing is formulated as enforcing invariance of the graph structure across counterfactual inputs, allowing differences only on sensitive attributes to prevent bias migration across components.
  • The approach introduces a log-space constraint on sensitive logits and a selective anchor-based objective to preserve definitional semantics while aligning behavior.
  • Experiments on large language models show significant bias reduction in both in-distribution and out-of-distribution settings, with reduced internal structural discrepancies and preserved safety and utility.
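
The graph-invariance idea in the first two points can be sketched as a simple penalty over a counterfactual input pair. The function below is a minimal illustration under stated assumptions, not the paper's actual objective: it assumes attention maps (graph edges) and hidden states (graph nodes) are available for both inputs, and that a boolean mask marks the sensitive-attribute token positions that are allowed to differ.

```python
import numpy as np

def graph_invariance_loss(attn_a, attn_b, hid_a, hid_b, sensitive_mask):
    """Hypothetical sketch of a counterfactual graph-invariance penalty.

    attn_a, attn_b : (heads, seq, seq) attention maps (graph edges)
    hid_a, hid_b   : (seq, dim) hidden states (graph nodes)
    sensitive_mask : (seq,) bool; True marks sensitive-attribute tokens,
                     the only positions allowed to differ
    """
    keep = ~sensitive_mask  # positions whose graph structure must match
    # Edge term: attention routing restricted to non-sensitive positions.
    edge = np.mean((attn_a[:, keep][:, :, keep]
                    - attn_b[:, keep][:, :, keep]) ** 2)
    # Node term: hidden states at non-sensitive positions.
    node = np.mean((hid_a[keep] - hid_b[keep]) ** 2)
    return edge + node
```

In this toy form, differences confined to the masked (sensitive) rows and columns leave the penalty at zero, while any structural difference elsewhere in the graph is penalized.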

Abstract

Large language models (LLMs) exhibit pronounced social biases. Output-level and data-optimization-based debiasing methods cannot fully resolve these biases, and prior work has shown that biases are embedded in internal representations. We propose Unified Graph Isomorphism for Debiasing large language models (**UGID**), an internal-representation-level debiasing framework that models the Transformer as a structured computational graph, where attention mechanisms define the routing edges and hidden states define the nodes. Debiasing is formulated as enforcing invariance of this graph structure across counterfactual inputs, with differences allowed only on sensitive attributes. **UGID** jointly constrains attention routing and hidden representations in bias-sensitive regions, effectively preventing bias migration across architectural components. To achieve behavioral alignment without degrading general capabilities, we introduce a log-space constraint on sensitive logits and a selective anchor-based objective that preserves definitional semantics. Extensive experiments on large language models demonstrate that **UGID** reduces bias under both in-distribution and out-of-distribution settings, significantly narrows internal structural discrepancies, and preserves model safety and utility.
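
The "log-space constraint on sensitive logits" mentioned in the abstract can be illustrated with a small stand-in: compare the log-probabilities that two runs (an input and its counterfactual) assign to a set of sensitive tokens, penalizing mismatches in log space so that low-probability tokens are not ignored. The function name, the L1 penalty, and the averaging are assumptions for illustration, not the paper's definition.

```python
import math

def log_space_logit_penalty(logits_a, logits_b, sensitive_ids):
    """Hypothetical sketch: after log-softmax, the log-probabilities of
    sensitive tokens should match across a counterfactual pair; the
    average absolute log-space gap is the penalty (an assumed form)."""
    def log_softmax(logits):
        # Numerically stable log-softmax: subtract the max before exp.
        m = max(logits)
        lse = m + math.log(sum(math.exp(x - m) for x in logits))
        return [x - lse for x in logits]

    la, lb = log_softmax(logits_a), log_softmax(logits_b)
    return sum(abs(la[i] - lb[i]) for i in sensitive_ids) / len(sensitive_ids)
```

Because log-softmax is shift-invariant, uniformly shifting all logits leaves the penalty at zero; only genuine changes in the distribution over sensitive tokens are penalized.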