SF-Mamba: Rethinking State Space Model for Vision
arXiv cs.CV / March 18, 2026
📰 News · Models & Research
Key Points
- SF-Mamba presents a vision-focused Mamba architecture with two main innovations: auxiliary patch swapping, which enables bidirectional information flow under a unidirectional scan, and batch folding with periodic state resets, which boosts GPU parallelism (illustrative sketches of both ideas follow this list).
- The design targets higher throughput and efficiency, and the authors report that it outperforms state-of-the-art baselines on image classification, object detection, and instance/semantic segmentation across multiple model sizes.
- It addresses limitations of prior Mamba variants and ViTs by enabling more efficient interaction among patches without the quadratic complexity of attention or the heavy data rearrangements used by earlier scan schemes.
- The authors plan to release the source code after publication.
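The summary does not spell out how auxiliary patch swapping is implemented, so the following is a minimal sketch of the general idea only, assuming the auxiliary patches are a reversed copy of the sequence interleaved with the original tokens; the function name and shapes are hypothetical, not the paper's API.

```python
import torch

def swap_auxiliary_patches(x: torch.Tensor) -> torch.Tensor:
    """Interleave each patch sequence with a reversed auxiliary copy.

    A single forward (unidirectional) scan over the interleaved sequence
    then sees, at every position, tokens drawn from both ends of the
    original sequence, approximating bidirectional information flow.

    x: (batch, num_patches, dim) patch embeddings.
    Returns: (batch, 2 * num_patches, dim).
    """
    aux = torch.flip(x, dims=[1])               # reversed auxiliary patches
    interleaved = torch.stack([x, aux], dim=2)  # (B, N, 2, D)
    return interleaved.reshape(x.size(0), -1, x.size(2))

# Usage: 196 patches (14x14) of dimension 768, ViT-style tokenization.
tokens = torch.randn(2, 196, 768)
out = swap_auxiliary_patches(tokens)            # shape (2, 392, 768)
```

The appeal of a scheme like this over running multiple directional scans is that a single pass suffices; the cost is the doubled sequence length.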
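Batch folding with periodic state resets is likewise only named in the summary; a plausible reading, sketched below with hypothetical helper names, is to split each long patch sequence into fixed-length chunks and stack the chunks along the batch axis, so the sequential SSM recurrence runs over chunk_len steps instead of the full length while the chunks execute in parallel. Re-initialising the recurrent state per batch item then acts as a periodic state reset at every chunk boundary.

```python
import torch

def batch_fold(x: torch.Tensor, chunk_len: int) -> torch.Tensor:
    """Fold the sequence axis into the batch axis in contiguous chunks.

    x: (batch, seq_len, dim), with seq_len divisible by chunk_len.
    Returns: (batch * seq_len // chunk_len, chunk_len, dim).
    """
    b, l, d = x.shape
    assert l % chunk_len == 0, "seq_len must be divisible by chunk_len"
    return x.reshape(b * (l // chunk_len), chunk_len, d)

def batch_unfold(y: torch.Tensor, batch: int) -> torch.Tensor:
    """Inverse of batch_fold: restore the (batch, seq_len, dim) layout."""
    bk, c, d = y.shape
    return y.reshape(batch, (bk // batch) * c, d)

# Usage: a recurrence over 392 steps becomes 8 parallel scans of 49 steps.
x = torch.randn(2, 392, 768)
folded = batch_fold(x, chunk_len=49)   # (16, 49, 768)
# ... run the SSM scan over `folded`; zero-initialised states per batch
# item provide the periodic reset at each chunk boundary ...
restored = batch_unfold(folded, batch=2)
assert restored.shape == x.shape
```

The trade-off is that information cannot cross a reset boundary within one layer; presumably stacked layers (or the patch-swapping step above) restore global mixing.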