Mechanistic Circuit-Based Knowledge Editing in Large Language Models

arXiv cs.CL / 4/8/2026

Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper introduces MCircKE, a mechanistic, circuit-based framework for knowledge editing in large language models, aiming to update pre-trained knowledge more reliably in dynamic settings.
  • It targets the “Reasoning Gap” by mapping the causal circuits behind a reasoning task, including both where the fact is stored and how its logical consequences are routed through multi-step chains.
  • MCircKE performs a “map-and-adapt” procedure by surgically updating only the parameters within the identified circuit rather than applying broad or isolated fact patches.
  • Experiments on the MQuAKE-3K benchmark show improved multi-hop reasoning accuracy after knowledge edits.
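
The "map-and-adapt" idea above can be sketched in a few lines: rank parameter groups by some causal attribution score, treat the top-scoring groups as the circuit, and apply the edit update only inside that circuit. The functions and scores below are hypothetical placeholders for illustration, not the paper's actual mapping or editing procedure.

```python
import numpy as np

def identify_circuit(param_names, attribution_scores, top_k=2):
    """Toy stand-in for circuit mapping: keep the top-k parameter
    groups ranked by a (hypothetical) causal attribution score."""
    ranked = sorted(param_names, key=lambda n: attribution_scores[n], reverse=True)
    return set(ranked[:top_k])

def map_and_adapt(params, grads, circuit, lr=0.1):
    """Apply the edit gradient only to parameters inside the mapped
    circuit, leaving all other parameters untouched."""
    return {
        name: (w - lr * grads[name]) if name in circuit else w
        for name, w in params.items()
    }

# Toy model: four parameter blocks, two of which form the "circuit".
rng = np.random.default_rng(0)
params = {f"layer{i}.mlp": rng.normal(size=(2, 2)) for i in range(4)}
grads = {n: np.ones((2, 2)) for n in params}  # pretend edit gradients
scores = {"layer0.mlp": 0.1, "layer1.mlp": 0.9,
          "layer2.mlp": 0.8, "layer3.mlp": 0.2}

circuit = identify_circuit(params.keys(), scores, top_k=2)
edited = map_and_adapt(params, grads, circuit)
# Circuit parameters are updated; the rest stay bit-identical.
```

The point of the sketch is the locality constraint: non-circuit weights are returned unchanged, which is what distinguishes circuit-based editing from broad fine-tuning or single-location fact patches.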

Abstract

Deploying Large Language Models (LLMs) in real-world dynamic environments raises the challenge of updating their pre-trained knowledge. While existing knowledge editing methods can reliably patch isolated facts, they frequently suffer from a "Reasoning Gap", where the model recalls the edited fact but fails to utilize it in multi-step reasoning chains. To bridge this gap, we introduce MCircKE (Mechanistic Circuit-based Knowledge Editing), a novel framework that enables a precise "map-and-adapt" editing procedure. MCircKE first identifies the causal circuits responsible for a specific reasoning task, capturing both the storage of the fact and the routing of its logical consequences. It then surgically updates parameters exclusively within this mapped circuit. Extensive experiments on the MQuAKE-3K benchmark demonstrate the effectiveness of the proposed method for multi-hop reasoning in knowledge editing.