AW-MoE: All-Weather Mixture of Experts for Robust Multi-Modal 3D Object Detection
arXiv cs.CV / 3/18/2026
Key Points
- AW-MoE combines a Mixture-of-Experts architecture with multi-modal 3D object detection to handle the shift in data distribution across different weather conditions.
- It introduces Image-guided Weather-aware Routing (IWR) to classify weather using image features and select the top-K Weather-Specific Experts for each input.
- It also proposes Unified Dual-Modal Augmentation (UDMA) for synchronized LiDAR and 4D Radar data augmentation while preserving scene realism.
- Experimental results show roughly a 15% improvement in adverse-weather performance over state-of-the-art methods with negligible inference overhead, plus additional gains when the module is integrated into established baselines.
- The authors plan to release the code publicly on GitHub.
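The IWR mechanism described above can be sketched as a standard top-K gated Mixture of Experts whose gate is driven by image features. Everything below (dimensions, linear experts, the function name `iwr_forward`) is a hypothetical illustration of the general technique, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- the paper does not specify these.
FEAT_DIM, NUM_EXPERTS, TOP_K = 16, 4, 2  # e.g. experts for clear/rain/fog/snow

# Router: image features -> weather logits (stand-in for the learned IWR gate).
W_router = rng.normal(size=(FEAT_DIM, NUM_EXPERTS))

# Weather-Specific Experts: modeled here as simple linear maps for brevity.
W_experts = rng.normal(size=(NUM_EXPERTS, FEAT_DIM, FEAT_DIM))

def iwr_forward(img_feat: np.ndarray, fused_feat: np.ndarray) -> np.ndarray:
    """Route fused LiDAR/4D-radar features through the top-K experts,
    with gate weights derived from the image features."""
    logits = img_feat @ W_router                   # (NUM_EXPERTS,) weather scores
    top_k = np.argsort(logits)[-TOP_K:]            # indices of the K best experts
    gate = np.exp(logits[top_k] - logits[top_k].max())
    gate /= gate.sum()                             # softmax renormalized over top-K
    # Weighted sum over only the selected experts; the rest are skipped,
    # which is why sparse routing adds little inference overhead.
    return sum(g * (fused_feat @ W_experts[i]) for g, i in zip(gate, top_k))

out = iwr_forward(rng.normal(size=FEAT_DIM), rng.normal(size=FEAT_DIM))
print(out.shape)
```

In a real detector the router would be a small learned network and the experts would be detection sub-networks, but the routing logic (weather logits, top-K selection, renormalized gating) follows this shape.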