DA-Mamba: Learning Domain-Aware State Space Model for Global-Local Alignment in Domain Adaptive Object Detection
arXiv cs.CV / 3/20/2026
📰 News · Models & Research
Key Points
- DA-Mamba proposes a hybrid CNN-State Space Model architecture to enhance domain adaptive object detection by capturing both local details and long-range dependencies with linear-time complexity.
- It introduces two modules, Image-Aware SSM (IA-SSM) in the backbone for image-level global/local alignment and Object-Aware SSM (OA-SSM) in the detection head for modeling spatial and semantic dependencies among objects.
- The method combines the local efficiency of CNNs with the linear-time long-range modeling of SSMs, avoiding the quadratic attention cost of transformer-based approaches.
- Experiments on standard DAOD benchmarks show improved cross-domain detection accuracy and efficiency, demonstrating the effectiveness of the approach.
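The linear-time long-range modeling claimed above comes from the state space model's recurrent form: each token updates a fixed-size hidden state, so cost grows linearly with sequence length rather than quadratically as in attention. The sketch below is a generic, simplified selective-scan recurrence in the style of Mamba, not the paper's actual IA-SSM/OA-SSM modules (whose internals are not given here); all shapes and the zero-order-hold discretization are illustrative assumptions.

```python
import numpy as np

def selective_scan(x, dt, A, B, C):
    """Simplified Mamba-style selective scan (illustrative only).

    x  : (L, D) input sequence of length L with D channels
    dt : (L, D) input-dependent step sizes
    A  : (D, N) state transition (negative real parts for stability)
    B  : (L, N) input-dependent input projection
    C  : (L, N) input-dependent output projection
    Returns y : (L, D). Runtime is O(L * D * N) -- linear in L.
    """
    L, D = x.shape
    N = A.shape[1]                      # hidden state size per channel
    h = np.zeros((D, N))                # fixed-size state carried across tokens
    ys = np.empty((L, D))
    for t in range(L):
        # Zero-order-hold discretization (simplified, assumed form)
        A_bar = np.exp(dt[t][:, None] * A)       # (D, N)
        B_bar = dt[t][:, None] * B[t][None, :]   # (D, N)
        h = A_bar * h + B_bar * x[t][:, None]    # recurrent state update
        ys[t] = (h * C[t][None, :]).sum(axis=1)  # per-channel readout
    return ys
```

Because the state `h` has fixed size, doubling the sequence length only doubles the work, which is the efficiency argument DA-Mamba leverages against quadratic-cost transformer alignment modules.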