SCAN: Sparse Circuit Anchor Interpretable Neuron for Lifelong Knowledge Editing
arXiv cs.AI / 3/17/2026
Key Points
- SCAN introduces a sparse editing framework that counters catastrophic forgetting in lifelong knowledge editing of LLMs by anchoring edits to sparse-circuit neurons.
- The approach uses mechanism-aware manipulation through Sparse Transcoders to construct a knowledge circuit, moving beyond coarse, dense parameter interventions.
- Experiments on Gemma2, Qwen3, and Llama3.1 across CounterFact, ZsRE, and WikiFactDiff show SCAN outperforming competing methods and maintaining model integrity after 3,000 sequential edits.
- Results indicate SCAN mitigates model collapse during continual editing, preserving general-capability accuracy on benchmarks such as MMLU and GSM8K throughout the edit sequence.
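The key points above contrast sparse, anchored edits with coarse dense parameter interventions. The sketch below illustrates that general idea only: it restricts a weight update to a small set of "anchor" neurons so untouched rows cannot interfere with unrelated knowledge. All names here are hypothetical, and this is not SCAN's actual Sparse Transcoder mechanism, which the summary does not detail.

```python
import numpy as np

# Illustrative sketch, not SCAN's method: apply an edit only to a sparse
# set of anchor neurons (rows of a layer's weight matrix), rather than a
# dense update over all parameters.
rng = np.random.default_rng(0)
d_out, d_in = 6, 8
W = rng.normal(size=(d_out, d_in))            # a layer's weight matrix
dense_delta = rng.normal(size=(d_out, d_in))  # a full (dense) edit update

anchor_neurons = [1, 4]                       # hypothetical sparse anchors

mask = np.zeros((d_out, 1))
mask[anchor_neurons] = 1.0

# Only anchor rows change; all other rows are preserved exactly.
W_edited = W + mask * dense_delta

untouched = [i for i in range(d_out) if i not in anchor_neurons]
assert np.allclose(W_edited[untouched], W[untouched])
```

Keeping non-anchor rows bit-identical across many sequential edits is the property that a sparse intervention buys over a dense one, and it is one plausible reason such approaches resist model collapse during continual editing.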