Constraining Sequential Model Editing with Editing Anchor Compression

arXiv cs.CL / 4/13/2026


Key Points

  • The paper finds that as sequential model edits accumulate, an LLM’s parameter matrix can deviate sharply from its prior state, breaking original knowledge links and degrading general abilities on downstream tasks.
  • It proposes Editing Anchor Compression (EAC), which limits parameter drift during sequential editing by compressing edit information into selected “editing anchors.”
  • EAC aims to capture new relations while constraining changes so the model retains its pre-edit capabilities more effectively.
  • Experiments applying EAC to two existing editing methods across three LLMs and four tasks show that it preserves over 70% of general abilities while retaining editing knowledge better than baseline approaches.
  • Overall, the work provides a targeted technique to improve the reliability of sequential model editing without requiring full retraining.
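To make the idea above concrete, here is a minimal sketch of the anchor-compression intuition: an edit produces an update matrix for a layer, and only the rows that matter most for encoding the new relation (the "anchors") are kept, bounding how far the parameters drift. This is an illustrative simplification, not the paper's actual algorithm; the row-norm anchor criterion, the `compress_edit` helper, and the matrix shapes are all assumptions for demonstration.

```python
import numpy as np

def compress_edit(delta_w: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k rows of the edit with the largest L2 norm
    (treated here as the 'editing anchors'); zero out the rest so the
    edited matrix deviates less from its pre-edit state.
    NOTE: a hypothetical stand-in for EAC's anchor selection."""
    row_norms = np.linalg.norm(delta_w, axis=1)
    anchors = np.argsort(row_norms)[-k:]      # indices of the k strongest rows
    compressed = np.zeros_like(delta_w)
    compressed[anchors] = delta_w[anchors]
    return compressed

rng = np.random.default_rng(0)
delta = rng.normal(size=(8, 4))       # stand-in for one edit's full update
delta_c = compress_edit(delta, k=2)   # keep only 2 anchor rows

# The compressed edit moves the parameter matrix less than the full edit.
assert np.linalg.norm(delta_c) < np.linalg.norm(delta)
```

Under this toy criterion, applying `delta_c` instead of `delta` at each step of a sequential editing run keeps the cumulative deviation of the layer smaller, which is the property the paper links to preserving general abilities.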

Abstract

Large language models (LLMs) are prone to hallucinations caused by false or outdated knowledge. Given the high resource demands of retraining these models, there is increasing focus on model editing. However, the general abilities of LLMs on downstream tasks can degrade significantly during sequential editing. Through statistical analysis, this paper observes that the edited parameter matrix deviates increasingly from its previous state as the number of edits grows. This deviation disrupts the original knowledge associations within LLMs and degrades their general abilities. To address this, a framework termed Editing Anchor Compression (EAC) is proposed to constrain the deviation of the parameter matrix during sequential editing. It compresses the editing information by selecting editing anchors that are important for encoding new relations without deviating too far from the original matrix, thereby preserving general abilities. Experiments applying EAC to two popular editing methods on three LLMs across four tasks show that EAC effectively minimizes unreasonable deviations caused by model editing, preserving over 70% of the general abilities while retaining editing knowledge better than the original methods.