MambaBack: Bridging Local Features and Global Contexts in Whole Slide Image Analysis
arXiv cs.CV / April 20, 2026
Key Points
- The paper introduces MambaBack, a hybrid Multiple Instance Learning (MIL) architecture for Whole Slide Image (WSI) analysis that combines local cellular feature extraction with global context modeling.
- It addresses key limitations of existing Mamba-based MIL approaches, including loss of 2D spatial locality from 1D flattening, weak modeling of fine-grained local structures, and high inference memory peaks on edge devices.
- MambaBack preserves 2D tile relationships via a Hilbert sampling strategy, improving spatial perception in the resulting 1D sequence representation.
- The model uses a hierarchical design with a 1D Gated CNN block (inspired by MambaOut) for local features and a BiMamba2 block for global context aggregation across multiple scales.
- An asymmetric chunking mechanism enables parallel training and chunking-streaming inference, lowering peak memory usage at inference time.
- Experiments on five datasets show improved performance over seven state-of-the-art methods, and the code has been released publicly.
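The summary does not detail MambaBack's exact sampling procedure, but the general idea of Hilbert-curve ordering is to serialize a 2D tile grid so that tiles adjacent in the 1D sequence are also adjacent in the slide. A minimal sketch, using the classic bit-twiddling coordinate-to-index conversion (the function names and grid setup here are illustrative, not from the paper):

```python
def hilbert_index(n, x, y):
    """Map grid coordinates (x, y) to their position along a Hilbert curve
    covering an n x n grid (n a power of two). Locality-preserving: tiles
    that are near each other in 2D get nearby 1D indices."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/flip the quadrant so the sub-curve is traversed correctly.
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

def order_tiles(coords, n):
    """Sort WSI tile coordinates into a Hilbert-curve 1D sequence."""
    return sorted(coords, key=lambda p: hilbert_index(n, p[0], p[1]))
```

Sorting tiles this way, instead of naive row-by-row raster flattening, avoids the long jumps at row boundaries that break 2D spatial locality in the 1D sequence.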
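The memory benefit of chunking-streaming inference comes from the recurrent form of state-space models: a fixed-size hidden state can be carried across chunks, so only one chunk of tile features needs to be resident at a time. A toy sketch of the principle (the scalar recurrence `h = A*h + B*x` and all names here are illustrative simplifications, not MambaBack's actual layer):

```python
import numpy as np

def streaming_scan(features, chunk_size, A, B):
    """Process a long 1D feature sequence chunk by chunk, carrying a
    fixed-size recurrent state h across chunk boundaries
    (h_t = A * h_{t-1} + B * x_t). Peak memory scales with chunk_size,
    not with the full sequence length."""
    h = np.zeros(features.shape[1])
    for start in range(0, len(features), chunk_size):
        chunk = features[start:start + chunk_size]  # only this chunk in memory
        for x in chunk:
            h = A * h + B * x                       # O(1) state update per tile
    return h
```

Because the recurrence is exact, the chunked computation produces the same final state as processing the whole sequence at once, which is what makes a train-in-parallel / stream-at-inference split possible.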