AI Navigate

SCAN: Sparse Circuit Anchor Interpretable Neuron for Lifelong Knowledge Editing

arXiv cs.AI / 3/17/2026


Key Points

  • SCAN introduces a sparse editing framework to address catastrophic forgetting in lifelong knowledge editing of LLMs by using sparse circuit anchored neurons.
  • The approach uses mechanism-aware manipulation through Sparse Transcoders to construct a knowledge circuit, moving beyond coarse, dense parameter interventions.
  • Experiments on Gemma2, Qwen3, and Llama3.1 across CounterFact, ZsRE, and WikiFactDiff show SCAN achieving superior performance and maintaining model integrity after 3,000 sequential edits, unlike competing methods.
  • Results indicate SCAN mitigates model collapse during continual editing, preserving accuracy on benchmarks like MMLU and GSM8K while editing.
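The core idea in the points above — confining each edit to a small set of anchor neurons instead of densely updating parameters — can be illustrated with a toy sketch. This is not the paper's actual method: SCAN selects anchors via a knowledge circuit built from Sparse Transcoders, whereas here anchor selection is approximated by simple activation magnitude, and all function names and the `delta` update are illustrative assumptions.

```python
import numpy as np

def select_anchor_neurons(activations, top_k=3):
    """Pick the top-k most strongly activated neurons as sparse 'anchors'.

    (Simplified stand-in for SCAN's transcoder-based circuit analysis:
    we just rank neurons by absolute activation.)
    """
    idx = np.argsort(np.abs(activations))[-top_k:]
    return set(idx.tolist())

def sparse_edit(weights, anchors, delta):
    """Apply an update only to the rows of `weights` belonging to anchor
    neurons, leaving every other parameter untouched (in contrast to
    dense editing, which would perturb all rows)."""
    edited = weights.copy()
    for i in anchors:
        edited[i] += delta
    return edited

# Toy example: 8 neurons, edit confined to the 3 most active ones.
acts = np.array([0.1, 2.0, -0.05, 1.5, 0.0, -3.0, 0.2, 0.4])
anchors = select_anchor_neurons(acts, top_k=3)
W = np.zeros((8, 4))
W_new = sparse_edit(W, anchors, delta=0.01)
touched = {i for i in range(8) if not np.allclose(W_new[i], W[i])}
assert touched == anchors  # only anchor rows were modified
```

The hypothesized benefit, per the key points, is that because non-anchor parameters are never touched, unrelated knowledge (e.g. MMLU/GSM8K performance) is not eroded as thousands of edits accumulate.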

Abstract

Large Language Models (LLMs) often suffer from catastrophic forgetting and collapse during sequential knowledge editing. This vulnerability stems from the prevailing dense editing paradigm, which treats models as black boxes and relies on coarse-grained parameter interventions that inevitably disrupt preserved knowledge. To address this, we propose SCAN (a sparse editing framework based on Sparse Circuit Anchored Neurons), which transforms editing into a mechanism-aware manipulation by constructing a knowledge circuit via Sparse Transcoders. Experiments on Gemma2, Qwen3, and Llama3.1 across CounterFact, ZsRE, and WikiFactDiff demonstrate that SCAN achieves superior performance, maintaining model integrity on benchmarks like MMLU and GSM8K even after 3,000 sequential edits, whereas existing methods deteriorate progressively as edits accumulate, eventually resulting in model collapse.