Improving Sparse Autoencoder with Dynamic Attention

arXiv cs.LG / 4/17/2026

📰 News · Models & Research

Key Points

  • The paper addresses a key practical limitation of sparse autoencoders (SAEs): choosing the right sparsity level is difficult because too much sparsity hurts reconstruction and too little degrades interpretability.
  • It proposes a new SAE formulation built on a cross-attention architecture, where latent features serve as queries and a learnable dictionary provides key and value matrices.
  • The method uses sparsemax-based dynamic sparse attention to infer activation counts in a data-dependent way, aiming to avoid the need for extra sparsity regularization or carefully tuned hyperparameters.
  • Experiments and visualizations indicate lower reconstruction loss and high-quality learned concepts, with particular strength on top-n classification tasks.
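The key ingredient in the points above is sparsemax, which projects a score vector onto the probability simplex and, unlike softmax, can output exact zeros, so the number of active entries adapts to each input. A minimal NumPy sketch of the standard sparsemax computation (Martins & Astudillo, 2016), independent of the paper's specific architecture:

```python
import numpy as np

def sparsemax(z: np.ndarray) -> np.ndarray:
    """Euclidean projection of z onto the probability simplex.

    Unlike softmax, the result can contain exact zeros, so the size of
    the active set is data-dependent rather than fixed in advance.
    """
    z_sorted = np.sort(z)[::-1]              # scores in descending order
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, z.size + 1)
    support = 1 + k * z_sorted > cumsum      # entries kept in the support
    k_max = k[support][-1]
    tau = (cumsum[k_max - 1] - 1) / k_max    # threshold subtracted from all scores
    return np.maximum(z - tau, 0.0)

out = sparsemax(np.array([1.0, 0.1, -2.0]))  # → [0.95, 0.05, 0.0]
```

Note that the weakest score is driven exactly to zero while the output still sums to one, which is what lets the method skip an explicit sparsity penalty or a hand-tuned TopK.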

Abstract

Recently, sparse autoencoders (SAEs) have emerged as a promising technique for interpreting activations in foundation models by disentangling features into a sparse set of concepts. However, identifying the optimal level of sparsity for each neuron remains challenging in practice: excessive sparsity can lead to poor reconstruction, whereas insufficient sparsity may harm interpretability. While existing activation functions such as ReLU and TopK provide certain sparsity guarantees, they typically require additional sparsity regularization or cherry-picked hyperparameters. We show in this paper that dynamically sparse attention mechanisms using sparsemax can bridge this trade-off, due to their ability to determine the number of activations in a data-dependent manner. Specifically, we first explore a new class of SAEs based on the cross-attention architecture with the latent features as queries and the learnable dictionary as the key and value matrices. To encourage sparse pattern learning, we employ a sparsemax-based attention strategy that automatically infers a sparse set of elements according to the complexity of each neuron, resulting in a more flexible and general activation function. Through comprehensive evaluation and visualization, we show that our approach successfully achieves lower reconstruction loss while producing high-quality concepts, particularly in top-n classification tasks.