The Model Agreed, But Didn't Learn: Diagnosing Surface Compliance in Large Language Models

arXiv cs.CL / 4/8/2026


Key Points

  • The paper argues that existing evaluations for LLM knowledge editing—often based on checking outputs under specific prompt conditions—may not truly verify that a model’s internal memory has been structurally modified.
  • It introduces a diagnostic framework using discriminative self-assessment under in-context learning (ICL) settings to better mirror real-world deployment behavior and detect subtle changes.
  • The study finds a widespread failure mode called “Surface Compliance,” where editors appear to succeed on benchmarks by mimicking target responses rather than overwriting underlying beliefs.
  • It reports that repeated/recursive memory modifications can leave “representational residues,” causing cognitive instability and reducing reversibility of the model’s memory state.
  • The authors conclude that current editing paradigms carry risks for long-term reliability, and emphasize the need for methods that achieve, and evaluations that verify, genuine memory modification.
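To make the second key point concrete, here is a minimal sketch of what a discriminative self-assessment probe could look like. This is an illustrative reconstruction, not the paper's actual implementation: the function names, the two-option MCQ format, and the outcome labels are all assumptions. The idea is to ask the model to *judge* between the original and the edited fact (rather than generate the edited answer), and compare its choices before and after editing.

```python
# Hypothetical sketch (not the paper's code): a discriminative
# multiple-choice probe for knowledge edits, plus a simple classifier
# of the edit outcome from pre-/post-edit answers.

def build_mcq_probe(subject: str, relation: str,
                    old_object: str, new_object: str) -> str:
    """Build a two-option MCQ that asks the model which statement it
    believes, instead of prompting it to reproduce the edited fact."""
    return (
        f"Question: {subject} {relation} which of the following?\n"
        f"A. {old_object}\n"
        f"B. {new_object}\n"
        "Answer with the single letter you believe is correct."
    )

def classify_edit(pre_answer: str, post_answer: str,
                  edited_letter: str = "B") -> str:
    """Classify the edit from the model's pre- and post-edit choices.
    An 'apparent_success' may still be Surface Compliance: the model
    picks the edited option yet has not overwritten its internal belief,
    which further ICL probing would be needed to expose."""
    if post_answer != edited_letter:
        return "edit_failed"
    if pre_answer == edited_letter:
        return "already_known"
    return "apparent_success"
```

In practice the probe string would be sent to the edited model (e.g. via a text-generation pipeline) under varying in-context demonstrations, and consistency of the discriminative choice across contexts would distinguish genuine modification from surface mimicry.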

Abstract

Large Language Models (LLMs) internalize vast world knowledge as parametric memory, yet inevitably inherit the staleness and errors of their source corpora. Consequently, ensuring the reliability and malleability of these internal representations is imperative for trustworthy real-world deployment. Knowledge editing offers a pivotal paradigm for surgically modifying memory without retraining. However, while recent editors demonstrate high success rates on standard benchmarks, it remains questionable whether current evaluation frameworks that rely on assessing output under specific prompting conditions can reliably authenticate genuine memory modification. In this work, we introduce a simple diagnostic framework that subjects models to discriminative self-assessment under in-context learning (ICL) settings that better reflect real-world application environments, specifically designed to scrutinize the subtle behavioral nuances induced by memory modifications. This probing reveals a pervasive phenomenon of Surface Compliance, where editors achieve high benchmark scores by merely mimicking target outputs without structurally overwriting internal beliefs. Moreover, we find that recursive modifications accumulate representational residues, triggering cognitive instability and permanently diminishing the reversibility of the model's memory state. These insights underscore the risks of current editing paradigms and highlight the pivotal role of robust memory modification in building trustworthy, long-term sustainable LLM systems. Code is available at https://github.com/XiaojieGu/SA-MCQ.