Dual Path Attribution: Efficient Attribution for SwiGLU-Transformers through Layer-Wise Target Propagation
arXiv cs.LG · March 23, 2026
Key Points
- Dual Path Attribution (DPA) enables faithful attribution for SwiGLU-transformers using only one forward and one backward pass, without requiring counterfactuals.
- It analytically decomposes and linearizes the transformer’s computation into distinct propagation pathways for a targeted unembedding vector, yielding effective representations at each residual position.
- DPA achieves O(1) time complexity with respect to the number of model components, enabling efficient attribution on long input sequences and dense component analyses.
- Experiments on standard interpretability benchmarks show state-of-the-art faithfulness and substantially improved efficiency compared with existing baselines.
- The method targets the frozen-transformer setting and advances understanding of information flow in LLMs, potentially informing the development of interpretability tooling.
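The core idea behind the decomposition can be illustrated with a toy "frozen-gate" linearization of a SwiGLU MLP block. The sketch below is an assumption-laden illustration, not the paper's exact DPA algorithm: it treats the SiLU gate activations as constants, which makes the block linear in its input, so a target unembedding direction can be pulled back through the layer in a single step to yield an effective representation at the residual position. All names (`W_gate`, `W_up`, `W_down`, `target`) are hypothetical.

```python
import numpy as np

def silu(z):
    # SiLU / swish activation used in SwiGLU gating
    return z / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d, h = 8, 16  # residual width, hidden width (toy sizes)
W_gate = rng.normal(size=(d, h)) / np.sqrt(d)
W_up   = rng.normal(size=(d, h)) / np.sqrt(d)
W_down = rng.normal(size=(h, d)) / np.sqrt(h)

x = rng.normal(size=d)        # residual-stream input
target = rng.normal(size=d)   # targeted unembedding direction

# Forward pass through the SwiGLU MLP
gate = silu(x @ W_gate)
out = (gate * (x @ W_up)) @ W_down

# Frozen-gate linearization: holding `gate` fixed, the block is a
# linear map W_eff in x, so the target direction propagates back
# through it analytically (one effective backward step).
W_eff = W_up @ np.diag(gate) @ W_down   # d x d linear map
back = W_eff @ target                   # effective representation at input

# Under the frozen gate, the pullback is exact: x . back == target . out
assert np.allclose(x @ back, target @ out)

# Per-dimension attribution of this block's contribution to the target
attribution = x * back
```

Because the pullback is computed analytically per layer rather than via counterfactual reruns, the cost stays at one forward and one backward traversal regardless of how many components are attributed, consistent with the O(1)-per-component claim above.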