SF-Mamba: Rethinking State Space Model for Vision

arXiv cs.CV / 3/18/2026

📰 News · Models & Research

Key Points

  • SF-Mamba presents a vision-focused Mamba with two main innovations: auxiliary patch swapping to enable bidirectional information flow under a unidirectional scan, and batch folding with periodic state resets to boost GPU parallelism.
  • The approach is designed to deliver higher throughput and efficiency, outperforming state-of-the-art baselines across image classification, object detection, and instance/semantic segmentation at multiple model sizes.
  • It addresses limitations of prior Mamba variants and ViTs by enabling more efficient interaction among patches without relying on quadratic complexity or heavy data rearrangements.
  • The authors plan to release the source code after publication.
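The first key point above, auxiliary patch swapping, can be illustrated with a toy scan. The snippet below is a hypothetical sketch, not the paper's actual algorithm: it uses a running sum as a stand-in for Mamba's causal state update, and shows how appending a reversed auxiliary copy of the patch sequence lets a single left-to-right scan carry right-side context back to every position.

```python
from itertools import accumulate

def unidirectional_scan(seq):
    # Toy recurrence: a running sum stands in for Mamba's causal state
    # update, so each position only sees earlier tokens.
    return list(accumulate(seq))

def scan_with_auxiliary_swap(seq):
    # Hypothetical sketch (not SF-Mamba's exact mechanism): append a
    # reversed auxiliary copy of the patches so one left-to-right scan
    # also propagates right-side context, then fold the auxiliary half
    # back onto the original positions.
    n = len(seq)
    scanned = unidirectional_scan(seq + seq[::-1])
    forward = scanned[:n]         # left (causal) context
    backward = scanned[n:][::-1]  # right (anti-causal) context, realigned
    return [f + b for f, b in zip(forward, backward)]

patches = [0.0, 1.0, 2.0, 3.0]  # four patch embeddings, 1-D for clarity
print(scan_with_auxiliary_swap(patches))  # -> [12.0, 13.0, 14.0, 15.0]
```

Note that after the fold, every output position depends on the entire sequence even though only one unidirectional scan was run.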

Abstract

Research on Mamba for vision has advanced in recent years in pursuit of alternatives to Vision Transformers (ViTs), which suffer from quadratic complexity. While Mamba's recurrent scanning mechanism offers computational efficiency, it inherently limits non-causal interactions between image patches. Prior works have attempted to address this limitation through various multi-scan strategies; however, these approaches suffer from inefficiencies due to suboptimal scan designs and frequent data rearrangement. Moreover, Mamba is relatively slow at the short token lengths common in visual tasks. In pursuit of a truly efficient vision encoder, we rethink the scan operation for vision and the computational efficiency of Mamba. To this end, we propose SF-Mamba, a novel visual Mamba with two key proposals: auxiliary patch swapping, which encodes bidirectional information flow under a unidirectional scan, and batch folding with periodic state resets for improved GPU parallelism. Extensive experiments on image classification, object detection, and instance and semantic segmentation consistently demonstrate that SF-Mamba significantly outperforms state-of-the-art baselines while improving throughput across different model sizes. We will release the source code after publication.
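The second proposal, batch folding with periodic state resets, can likewise be sketched in miniature. The abstract does not spell out the mechanism, so the snippet below is an assumption-laden illustration: short per-image sequences are concatenated ("folded") into one long scan, which keeps the hardware better utilized, while the recurrent state is reset at every sequence boundary so one image's state never leaks into the next.

```python
def folded_scan(batch, seq_len):
    # Hypothetical sketch (not SF-Mamba's exact implementation): fold a
    # batch of short sequences into one long flat scan, resetting the
    # recurrent state at each sequence boundary. A running sum again
    # stands in for Mamba's state update.
    flat = [x for seq in batch for x in seq]
    state, out = 0.0, []
    for i, x in enumerate(flat):
        if i % seq_len == 0:
            state = 0.0  # periodic state reset at a sequence boundary
        state += x       # toy recurrent update
        out.append(state)
    # Unfold back into per-sequence outputs.
    return [out[i:i + seq_len] for i in range(0, len(out), seq_len)]

batch = [[1.0, 2.0, 3.0], [10.0, 20.0, 30.0]]
print(folded_scan(batch, seq_len=3))  # -> [[1.0, 3.0, 6.0], [10.0, 30.0, 60.0]]
```

The resets make the folded result identical to scanning each sequence independently, which is what allows the folding to be a pure parallelism optimization rather than a change in semantics.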