Differentiable Faithfulness Alignment for Cross-Model Circuit Transfer

arXiv cs.CL / April 28, 2026


Key Points

  • The paper introduces Differentiable Faithfulness Alignment (DFA), a framework for transferring mechanistic circuit information from a smaller source language model to a larger target model without fully discovering circuits in the target.
  • DFA learns a differentiable mapping by projecting source-model node importance scores into the target model and optimizing a soft “faithfulness” objective to avoid expensive, model-specific circuit search.
  • Experiments across Llama-3 and Qwen-2.5 on six tasks (factual retrieval, multiple-choice reasoning, and arithmetic) show the best performance for Llama-3 1B→3B, where transferred circuits can be competitive with direct node attribution.
  • The effectiveness declines as the source–target gap grows and is substantially lower on Qwen-2.5, indicating that cross-model transfer is harder under larger architectural and scaling differences.
  • Overall, DFA outperforms simple baselines and, in some cases, recovers target-model circuits with faithfulness comparable to or better than direct attribution, suggesting smaller models can provide useful mechanistic priors.

Abstract

Mechanistic interpretability has made it possible to localize circuits underlying specific behaviors in language models, but existing methods are expensive, model-specific, and difficult to scale to larger architectures. We introduce Differentiable Faithfulness Alignment (DFA), a framework that transfers circuit information from a smaller source model to a larger target model through a learned differentiable alignment. DFA projects source-model node importance scores into the target model and trains this mapping with a soft faithfulness objective, avoiding full circuit discovery on the target model. We evaluate DFA on Llama-3 and Qwen-2.5 across six tasks spanning factual retrieval, multiple-choice reasoning, and arithmetic. The strongest results occur on Llama-3 1B→3B, where aligned circuits are often competitive with direct node attribution and zero-shot transfer remains effective. Recovery weakens for larger source–target gaps and is substantially lower on Qwen-2.5, suggesting that transfer becomes harder as architectural and scaling differences increase. Overall, DFA consistently outperforms simple baselines and, in some settings, recovers target-model circuits with faithfulness comparable to or stronger than direct attribution. These results suggest that smaller models can provide useful mechanistic priors for larger ones, while highlighting both the promise and the limits of node-level cross-model circuit alignment. Code is available at https://github.com/jasonshaoshun/dfa-circuits.
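To make the alignment idea concrete, here is a minimal NumPy sketch of the core step the abstract describes: projecting source-model node importance scores into the target model's node space and optimizing a soft inclusion mask by gradient descent. The linear projection, the sigmoid masking, the proxy faithfulness signal, and the sparsity weight are all illustrative assumptions; the paper's actual objective evaluates faithfulness by running the masked target model, which this toy version replaces with a fixed proxy vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8 scored nodes in the source model, 16 in the target.
n_src, n_tgt = 8, 16
s = rng.random(n_src)                              # source node importance scores
proxy = (rng.random(n_tgt) > 0.5).astype(float)    # stand-in faithfulness signal

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Learnable projection from source scores to target-node mask logits.
W = rng.normal(scale=0.1, size=(n_tgt, n_src))
lam, lr = 0.05, 0.5                                # sparsity weight, step size

def loss_and_grad(W):
    z = W @ s                  # project source scores into target node space
    m = sigmoid(z)             # soft inclusion mask over target nodes
    diff = m - proxy
    # Proxy objective: match the faithfulness signal, prefer sparse masks.
    loss = np.mean(diff**2) + lam * np.mean(m)
    # Backprop by hand: d(loss)/dz through the sigmoid, then outer product with s.
    dz = (2.0 * diff / n_tgt + lam / n_tgt) * m * (1.0 - m)
    grad_W = np.outer(dz, s)
    return loss, grad_W

losses = []
for _ in range(200):
    loss, grad_W = loss_and_grad(W)
    losses.append(loss)
    W -= lr * grad_W
```

Because the mapping is differentiable end to end, the same loop structure would apply if the proxy loss were replaced with a true soft-faithfulness measurement on the target model; only `loss_and_grad` would change.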