Disentangling Knowledge Representations for Large Language Model Editing

arXiv cs.CL · March 26, 2026


Key Points

  • The paper argues that current LLM knowledge editing methods can accidentally overwrite fine-grained “irrelevant” facts that share a subject with the target, due to entangled subject representations.
  • It proposes DiKE (Disentangling Knowledge representations for LLM Editing), which splits a subject representation into target-related and target-unrelated components and updates only the target-related part.
  • DiKE includes a Knowledge Representation Disentanglement (KRD) module plus a disentanglement-based Knowledge Edit (DKE) module explicitly designed to preserve unrelated knowledge.
  • The authors derive an efficient, minimally invasive closed-form rank-one parameter update using matrix theory.
  • They introduce the FINE-KED benchmark to rigorously test preservation of fine-grained irrelevant knowledge under varying relational similarity, and report improved preservation with competitive overall editing performance across multiple LLMs.
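The "closed-form rank-one parameter update" in the last two points can be illustrated with a minimal sketch. This is not the paper's actual derivation (DiKE's update is constrained to preserve the disentangled unrelated component); it only shows the general idea of a rank-one edit that remaps a single key vector `k` to a new value `v_star` while leaving all directions orthogonal to `k` untouched. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 8, 6
W = rng.normal(size=(d_out, d_in))  # stand-in for an edited MLP weight

k = rng.normal(size=d_in)           # key vector for the edited subject
v_star = rng.normal(size=d_out)     # desired output for the new fact

# Minimal-norm rank-one update mapping k -> v_star:
#   W' = W + (v_star - W k) k^T / (k^T k)
delta = np.outer(v_star - W @ k, k) / (k @ k)
W_edited = W + delta

# The edited key now produces the target value ...
assert np.allclose(W_edited @ k, v_star)

# ... while any direction orthogonal to k is completely unchanged.
k_perp = rng.normal(size=d_in)
k_perp -= (k_perp @ k) / (k @ k) * k
assert np.allclose(W @ k_perp, W_edited @ k_perp)
```

The catch the paper targets: facts sharing the same subject also share (parts of) the same key direction, so a naive rank-one edit like this can still disturb them; hence the disentanglement step.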

Abstract

Knowledge Editing has emerged as a promising solution for efficiently updating embedded knowledge in large language models (LLMs). While existing approaches demonstrate effectiveness in integrating new knowledge and preserving the original capabilities of LLMs, they fail to maintain fine-grained irrelevant knowledge, namely facts that share the same subject as the edited knowledge but differ in relation and object. This challenge arises because subject representations inherently encode multiple attributes, causing the target and fine-grained irrelevant knowledge to become entangled in the representation space and thus vulnerable to unintended alterations during editing. To address this, we propose DiKE, a novel approach that Disentangles Knowledge representations for LLM Editing. DiKE consists of two key components: a Knowledge Representation Disentanglement (KRD) module that decomposes the subject representation into target-knowledge-related and -unrelated components, and a Disentanglement-based Knowledge Edit (DKE) module that updates only the target-related component while explicitly preserving the unrelated one. We further derive a closed-form, rank-one parameter update based on matrix theory to enable efficient and minimally invasive edits. To rigorously evaluate fine-grained irrelevant knowledge preservation, we construct FINE-KED, a new benchmark comprising fine-grained irrelevant knowledge at different levels of relational similarity to the edited knowledge. Extensive experiments across multiple LLMs demonstrate that DiKE substantially improves fine-grained irrelevant knowledge preservation while maintaining competitive general editing performance.
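The decomposition the abstract describes can be sketched in a few lines. The paper's KRD module is learned; the sketch below substitutes a simple orthogonal projection onto a hypothetical "target-knowledge" subspace (the basis `B`, the edit vector `h_edit`, and all dimensions are assumptions for illustration), but it captures the core invariant: the edit moves only the target-related component, and the unrelated component is preserved exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16
h = rng.normal(size=d)  # subject representation

# Hypothetical target-knowledge subspace, spanned by 3 orthonormal rows of B
B = np.linalg.qr(rng.normal(size=(d, 3)))[0].T  # shape (3, d)

h_related = B.T @ (B @ h)    # projection onto the target subspace
h_unrelated = h - h_related  # orthogonal complement

# An edit replaces only the target-related component ...
h_edit = rng.normal(size=3)
h_new = B.T @ h_edit + h_unrelated

# ... so the unrelated component of the edited representation is unchanged.
assert np.allclose(h_new - B.T @ (B @ h_new), h_unrelated)
```

Updating only `h_related` is what lets facts that share the subject but live (mostly) in the complement survive the edit, which is exactly the preservation property FINE-KED is built to measure.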