Weight Patching: Toward Source-Level Mechanistic Localization in LLMs

arXiv cs.AI / 4/16/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper introduces “Weight Patching,” a parameter-space intervention technique aimed at mechanistic interpretability that distinguishes true capability-encoding parameters from modules that only amplify upstream signals.
  • It operates on two same-architecture models—a base model and a behavior-specialized counterpart—by transplanting selected module weights from the specialized model into the base model and evaluating the patched model on a fixed input, probing which parameters causally source the behavior.
  • The authors instantiate the method for instruction following and propose a vector-anchor behavioral interface that acts as a shared internal criterion for detecting whether a task-relevant control state has formed or been recovered during open-ended generation.
  • Using this framework, the analysis identifies a multi-stage hierarchy of causal components, ranging from shallow “carrier” candidates through aggregation/routing modules to downstream execution circuits.
  • The paper also shows that per-component “recovered scores” can support mechanism-aware model merging, enabling more selective fusion across the evaluated expert combinations and providing external validation of the localization results.
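The core intervention above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the paper's implementation): two tiny same-architecture "models" are plain dicts of numpy weight matrices, and `weight_patch` copies the named modules' weights from the specialized donor into a copy of the base model, after which behavior under the fixed probe input can be compared.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_model():
    # Tiny two-layer stand-in for a pair of same-architecture models:
    # y = relu(x @ layer1) @ layer2
    return {"layer1": rng.normal(size=(4, 8)), "layer2": rng.normal(size=(8, 4))}

def forward(model, x):
    h = np.maximum(x @ model["layer1"], 0.0)  # ReLU
    return h @ model["layer2"]

base = init_model()
specialized = init_model()  # imagine: fine-tuned for instruction following

def weight_patch(base_model, donor_model, module_names):
    """Parameter-space intervention: copy the named modules' weights
    from the donor (specialized) model into a copy of the base model."""
    patched = {name: w.copy() for name, w in base_model.items()}
    for name in module_names:
        patched[name] = donor_model[name].copy()
    return patched

x = rng.normal(size=(1, 4))  # the fixed probe input
patched = weight_patch(base, specialized, ["layer1"])

# Only the selected module changes; the rest of the base model is untouched.
assert np.array_equal(patched["layer1"], specialized["layer1"])
assert np.array_equal(patched["layer2"], base["layer2"])

# The behavioral shift under the fixed input is what a downstream
# behavioral criterion would then score.
delta = float(np.abs(forward(patched, x) - forward(base, x)).max())
```

Unlike activation patching, nothing is spliced into the forward pass at run time: the patched model recomputes all activations itself, so any behavioral change is attributable to the transplanted parameters.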

Abstract

Mechanistic interpretability seeks to localize model behavior to the internal components that causally realize it. Prior work has advanced activation-space localization and causal tracing, but modules that appear important in activation space may merely aggregate or amplify upstream signals rather than encode the target capability in their own parameters. To address this gap, we propose Weight Patching, a parameter-space intervention method for source-oriented analysis in paired same-architecture models that differ in how strongly they express a target capability under the inputs of interest. Given a base model and a behavior-specialized counterpart, Weight Patching transplants selected module weights from the specialized model into the base model and evaluates the patched model under a fixed input. We instantiate the method on instruction following and introduce a framework centered on a vector-anchor behavioral interface that provides a shared internal criterion for whether a task-relevant control state has been formed or recovered in open-ended generation. Under this framework, the analysis reveals a hierarchy from shallow candidate source-side carriers to aggregation and routing modules, and further to downstream execution circuits. The recovered component scores can also guide mechanism-aware model merging, improving selective fusion across the evaluated expert combinations and providing additional external validation of the localization results.
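The score-guided merging idea can also be sketched. This is one plausible reading under stated assumptions, not the paper's exact fusion rule: each module carries a recovered score in [0, 1] (values below are invented), and a hypothetical `mechanism_aware_merge` takes the expert's weights only for modules whose score clears a threshold, keeping the base weights elsewhere.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented per-module "recovered scores": how strongly each module is
# judged to causally carry the expert's capability.
scores = {"attn": 0.9, "mlp": 0.2}

base = {"attn": rng.normal(size=(4, 4)), "mlp": rng.normal(size=(4, 4))}
expert = {"attn": rng.normal(size=(4, 4)), "mlp": rng.normal(size=(4, 4))}

def mechanism_aware_merge(base_model, expert_model, module_scores, threshold=0.5):
    """Selective fusion sketch (hypothetical rule): adopt the expert's
    weights only for modules whose recovered score clears the threshold;
    keep the base model's weights everywhere else."""
    merged = {}
    for name, w in base_model.items():
        if module_scores[name] >= threshold:
            merged[name] = expert_model[name].copy()
        else:
            merged[name] = w.copy()
    return merged

merged = mechanism_aware_merge(base, expert, scores)
assert np.array_equal(merged["attn"], expert["attn"])  # high score -> expert
assert np.array_equal(merged["mlp"], base["mlp"])      # low score -> base
```

The contrast with uniform weight averaging is the point: fusion is gated per module by the causal evidence, so modules that merely amplify upstream signals are not pulled in wholesale from the expert.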