
InfoMamba: An Attention-Free Hybrid Mamba-Transformer Model

arXiv cs.AI / 3/20/2026

📰 News · Ideas & Deep Analysis · Models & Research

Key Points

  • InfoMamba introduces an attention-free hybrid architecture that replaces token-level self-attention with a linear filtering layer acting as a minimal-bandwidth global interface, paired with a selective recurrent stream.
  • A consistency boundary analysis is presented to characterize when diagonal short-memory SSMs can approximate causal attention and to identify remaining structural gaps.
  • The model uses information-maximizing fusion (IMF) to dynamically inject global context into SSM dynamics and employs a mutual-information-inspired objective to encourage complementary information usage.
  • Empirical results across classification, dense prediction, and non-vision tasks show InfoMamba outperforms strong Transformer and SSM baselines with near-linear scaling and competitive accuracy-efficiency trade-offs.
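To make the architecture described above concrete, here is a hedged, minimal sketch of an InfoMamba-style hybrid block in pure Python. This is not the authors' implementation: a diagonal, input-gated recurrence stands in for the Mamba-style selective SSM stream, a causal running mean stands in for the minimal-bandwidth linear filtering global interface, and a fixed convex combination stands in for the learned IMF fusion. All function names and the gate/decay choices are illustrative assumptions.

```python
import math

def selective_scan(x, a=0.9):
    """Diagonal short-memory SSM stand-in: h_t = g_t * a * h_{t-1} + (1 - g_t) * x_t,
    where the gate g_t depends on the input (the 'selective' part)."""
    h, out = 0.0, []
    for x_t in x:
        g_t = 1.0 / (1.0 + math.exp(-x_t))   # input-dependent gate (illustrative)
        h = g_t * a * h + (1.0 - g_t) * x_t
        out.append(h)
    return out

def global_interface(x):
    """Minimal-bandwidth global summary stand-in: causal running mean,
    giving every position O(1)-bandwidth access to global context."""
    s, out = 0.0, []
    for t, x_t in enumerate(x, start=1):
        s += x_t
        out.append(s / t)
    return out

def imf_fuse(local, global_, w=0.5):
    """IMF-like fusion stand-in: inject global context into the local stream.
    A fixed convex combination replaces the learned, dynamic fusion."""
    return [(1 - w) * l + w * g for l, g in zip(local, global_)]

def hybrid_block(x):
    """One attention-free hybrid block: local recurrence + global interface, fused."""
    return imf_fuse(selective_scan(x), global_interface(x))
```

Note the cost profile this structure implies: both streams are single left-to-right passes, so a block runs in O(T) time for sequence length T, which is the source of the near-linear scaling claimed above.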

Abstract

Balancing fine-grained local modeling with long-range dependency capture under computational constraints remains a central challenge in sequence modeling. While Transformers provide strong token mixing, they suffer from quadratic complexity, whereas Mamba-style selective state-space models (SSMs) scale linearly but often struggle to capture high-rank and synchronous global interactions. We present a consistency boundary analysis that characterizes when diagonal short-memory SSMs can approximate causal attention and identifies structural gaps that remain. Motivated by this analysis, we propose InfoMamba, an attention-free hybrid architecture. InfoMamba replaces token-level self-attention with a concept-bottleneck linear filtering layer that serves as a minimal-bandwidth global interface and integrates it with a selective recurrent stream through information-maximizing fusion (IMF). IMF dynamically injects global context into the SSM dynamics and encourages complementary information usage through a mutual-information-inspired objective. Extensive experiments on classification, dense prediction, and non-vision tasks show that InfoMamba consistently outperforms strong Transformer and SSM baselines, achieving competitive accuracy-efficiency trade-offs while maintaining near-linear scaling.
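The abstract's "mutual-information-inspired objective" is not spelled out here, but its intent (push the two streams toward complementary rather than redundant information) can be sketched with a common, simple surrogate: a squared-correlation decorrelation penalty between the local and global features. This is an assumption standing in for the paper's actual loss; all names and the penalty form are illustrative.

```python
import math

def pearson_corr(u, v):
    """Pearson correlation between two equal-length feature sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv) if su > 0 and sv > 0 else 0.0

def complementarity_loss(local_feats, global_feats, lam=0.1):
    """Decorrelation penalty as an MI-surrogate stand-in: a high |correlation|
    means the local (SSM) and global streams encode overlapping information,
    so this term rewards the streams for carrying complementary content."""
    return lam * pearson_corr(local_feats, global_feats) ** 2
```

In training, a term like this would be added to the task loss, so minimizing the total pushes the two streams apart in what they encode; identical streams pay the full penalty `lam`, while decorrelated streams pay nothing.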