AI Navigate

Compiler-First State Space Duality and Portable $O(1)$ Autoregressive Caching for Inference

arXiv cs.LG / 3/11/2026

Developer Stack & Infrastructure · Tools & Practical Usage · Models & Research

Key Points

  • The paper presents a new approach to implementing state-space model inference using XLA compiler optimizations, eliminating the need for custom CUDA or Triton kernels tied to NVIDIA hardware.
  • Mamba-2's state space duality (SSD) algorithm, with its diagonal state structure, chunkable recurrence, and einsum-dominated compute, maps cleanly onto the fusion and tiling passes XLA already performs.
  • The implementation supports full inference workflows including prefill and cached autoregressive decoding with O(1) state management and runs unmodified across CPU, NVIDIA GPU, and Google Cloud TPU.
  • Performance benchmarks on TPU v6e show high efficiency, reaching roughly 140 TFLOPS (15% MFU) on single-stream prefill and up to 64% bandwidth utilization on decode, with greedy decoding matching the PyTorch/CUDA reference token-for-token.
  • The approach is generalizable to other state-space model recurrences meeting structural conditions and is released as open-source within the Bonsai JAX model library.
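The $O(1)$ state management in the points above amounts to a constant-size recurrent state per layer that is updated in place for each generated token. A minimal NumPy sketch of one decode step, assuming a diagonal SSD recurrence of the form $h_t = a \odot h_{t-1} + x_t B_t^\top$ with readout $y_t = h_t C_t$ (the names `decode_step`, `a`, `B`, `C` are illustrative, not the paper's API; the JAX version would swap `np` for `jax.numpy`):

```python
import numpy as np

def decode_step(h, a, B, C, x):
    """One cached autoregressive step: the state h keeps a fixed shape,
    so per-token memory cost is O(1)."""
    h = a * h + np.outer(x, B)   # diagonal decay + rank-1 input write
    y = h @ C                    # readout for this token
    return h, y

d_head, d_state = 4, 8
h = np.zeros((d_head, d_state))          # on-device cache, allocated once
a = np.full((d_head, d_state), 0.9)      # diagonal state-transition term
B = np.ones(d_state)                     # input projection for this token
C = np.ones(d_state)                     # output projection for this token
x = np.ones(d_head)                      # current-token activations

for _ in range(3):                       # generate a few tokens
    h, y = decode_step(h, a, B, C, x)

print(h.shape, y.shape)                  # state never grows: (4, 8) (4,)
```

Unlike a Transformer KV cache, nothing here scales with the number of generated tokens, which is why the compiled cache needs no reallocation or host synchronisation during generation.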

Computer Science > Machine Learning

arXiv:2603.09555 (cs)
[Submitted on 10 Mar 2026]

Title: Compiler-First State Space Duality and Portable $O(1)$ Autoregressive Caching for Inference

Authors: Cosmo Santoni
Abstract: State-space model releases are typically coupled to fused CUDA and Triton kernels, inheriting a hard dependency on NVIDIA hardware. We show that Mamba-2's state space duality algorithm -- diagonal state structure, chunkable recurrence, and einsum-dominated compute with static control flow -- maps cleanly onto what XLA's fusion and tiling passes actually optimise, making custom kernels optional rather than required. We implement the full inference path (prefill, cached autoregressive decoding) as shaped standard primitives under XLA, without hand-written kernels, and realise the architecture's theoretical $O(1)$ state management as a compiled on-device cache requiring no host synchronisation during generation. The implementation runs unmodified on CPU, NVIDIA GPU, and Google Cloud TPU from a single JAX source. On TPU v6e across five model scales (130M--2.7B parameters), XLA-generated code reaches approximately 140 TFLOPS on single-stream prefill ($15\%$ MFU) and up to $64\%$ bandwidth utilisation on decode. Greedy decoding matches the PyTorch/CUDA reference token-for-token across 64 steps, with hidden-state agreement within float32 rounding tolerance. The pattern transfers to any SSM recurrence satisfying the same structural conditions, on any platform with a mature XLA backend. The implementation is publicly available at this https URL and merged into the Bonsai JAX model library.
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC); Performance (cs.PF)
Cite as: arXiv:2603.09555 [cs.LG]
  (or arXiv:2603.09555v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09555

Submission history

From: Cosmo Santoni
[v1] Tue, 10 Mar 2026 12:03:00 UTC (1,176 KB)