Preconditioned DeltaNet: Curvature-aware Sequence Modeling for Linear Recurrences

arXiv cs.LG · April 24, 2026


Key Points

  • The paper introduces “Preconditioned DeltaNet,” a curvature-aware variant of delta-rule recurrent models aimed at alleviating the long-context compute cost of softmax attention.
  • It frames recurrences using the test-time regression (TTR) view as online least-squares updates that learn a linear mapping from keys to values, highlighting that prior delta-rule recurrences neglected curvature during optimization.
  • The authors derive theoretical equivalences between linear attention and the delta rule under an exactly preconditioned setting, then implement a practical diagonal preconditioning approximation.
  • They build preconditioned versions of DeltaNet, Gated DeltaNet (GDN), and Kimi Delta Attention (KDA), and provide efficient chunkwise parallel computation methods to make them scalable.
  • Experiments show consistent gains for preconditioned delta-rule recurrences on synthetic recall benchmarks and language modeling at 340M and 1B parameter scales.
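
To make the TTR view concrete, here is a minimal toy sketch of the idea the key points describe: a plain delta-rule update is one gradient step on a per-token least-squares loss over the state matrix, and a diagonally preconditioned variant rescales that step by a running second-moment estimate of the keys. The function names, the Adagrad-style accumulator, and all hyperparameters are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def delta_rule_step(S, k, v, beta):
    # Plain delta rule: one SGD step on the per-token loss
    # L(S) = 0.5 * ||S k - v||^2, whose gradient is (S k - v) k^T.
    return S - beta * np.outer(S @ k - v, k)

def preconditioned_delta_step(S, h, k, v, beta, eps=1e-6):
    # Toy diagonal preconditioning (our illustrative stand-in, not the
    # paper's derivation): accumulate a per-coordinate second moment of
    # the keys and divide the key direction by it, Adagrad-style.
    h = h + k * k
    return S - beta * np.outer(S @ k - v, k / (h + eps)), h
```

With `beta = 1` and a unit-norm key, the plain step writes the value exactly (`S' k = v`), which is the "erase then write" behavior usually attributed to the delta rule; the preconditioned step instead damps updates along key directions that have already been seen often.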

Abstract

To address the increasing long-context compute limitations of softmax attention, several subquadratic recurrent operators have been developed. This line of work includes models such as Mamba-2, DeltaNet, Gated DeltaNet (GDN), and Kimi Delta Attention (KDA). As the space of recurrences grows, a parallel line of work has arisen to taxonomize them. One compelling view is the test-time regression (TTR) framework, which interprets recurrences as performing online least-squares updates that learn a linear map from keys to values. Existing delta-rule recurrences can be seen as first-order approximations to this objective, but notably ignore the curvature of the least-squares loss during optimization. In this work, we address this by introducing preconditioning to these recurrences. Starting from the theory of online least squares, we derive equivalences between linear attention and the delta rule in the exactly preconditioned case. Next, we realize this theory in practice by proposing a diagonal approximation: this enables us to introduce preconditioned variants of DeltaNet, GDN, and KDA alongside efficient chunkwise parallel algorithms for computing them. Empirically, we find that our preconditioned delta-rule recurrences yield consistent performance improvements across synthetic recall benchmarks and language modeling at the 340M and 1B scales.
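
The chunkwise-parallel scheduling mentioned in the abstract can be illustrated on the simplest member of this family, plain causal linear attention: carry the running state across chunks and handle within-chunk causality with a masked matmul. This is a hedged sketch of the general scheduling idea only, written from the standard linear-attention formulation; the paper's algorithms for the delta-rule and preconditioned variants are more involved.

```python
import numpy as np

def chunkwise_linear_attention(Q, K, V, chunk=4):
    # Sketch of chunkwise computation for plain causal linear
    # attention: out_t = q_t @ sum_{s<=t} k_s v_s^T.
    T, d = Q.shape
    S = np.zeros((d, V.shape[1]))      # running state sum k_s v_s^T
    out = np.empty((T, V.shape[1]))
    for s in range(0, T, chunk):
        q, k, v = Q[s:s+chunk], K[s:s+chunk], V[s:s+chunk]
        inter = q @ S                  # contributions from earlier chunks
        mask = np.tril(np.ones((len(q), len(q))))
        intra = (q @ k.T * mask) @ v   # causal within-chunk contributions
        out[s:s+chunk] = inter + intra
        S += k.T @ v                   # fold this chunk into the state
    return out
```

Per chunk of size C, this replaces T sequential rank-1 state updates with a few dense matmuls of size C, which is what makes such recurrences hardware-efficient; the result is exactly equal to the step-by-step recurrence.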