AW-MoE: All-Weather Mixture of Experts for Robust Multi-Modal 3D Object Detection
arXiv cs.CV / 3/18/2026
Key Points
- AW-MoE combines a Mixture-of-Experts architecture with multi-modal 3D object detection to handle the shifting data distributions across different weather scenarios.
- It introduces Image-guided Weather-aware Routing (IWR) to classify weather using image features and select the top-K Weather-Specific Experts for each input.
- It also proposes Unified Dual-Modal Augmentation (UDMA) for synchronized LiDAR and 4D Radar data augmentation while preserving scene realism.
- Experiments show roughly a 15% improvement in adverse-weather performance over state-of-the-art methods with negligible inference overhead, plus additional gains when the module is integrated into established baselines.
- The authors plan to release the code publicly on GitHub.
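The routing idea in IWR can be illustrated with a minimal sketch: a linear router scores weather-specific experts from a pooled image feature, keeps the top-K, and fuses their outputs with softmax weights. This is a generic top-K MoE routing sketch, not the paper's implementation; all names, shapes, and the `weather_aware_route` function are hypothetical.

```python
import numpy as np

def weather_aware_route(image_feat, router_w, expert_outputs, k=2):
    """Hypothetical top-K weather-aware routing sketch (not the paper's code).

    image_feat:     (d,)  pooled image feature used to infer weather
    router_w:       (d, n_experts)  linear router weights
    expert_outputs: (n_experts, m)  output of each Weather-Specific Expert
    """
    logits = image_feat @ router_w                        # per-expert routing scores
    top_k = np.argsort(logits)[-k:]                       # indices of the K best experts
    weights = np.exp(logits[top_k] - logits[top_k].max()) # stable softmax over top-K
    weights /= weights.sum()
    return weights @ expert_outputs[top_k]                # weighted fusion of K experts

rng = np.random.default_rng(0)
d, n_experts, m = 8, 4, 6
out = weather_aware_route(rng.normal(size=d),
                          rng.normal(size=(d, n_experts)),
                          rng.normal(size=(n_experts, m)), k=2)
print(out.shape)  # (6,)
```

Because only K of the experts run per input, the extra compute at inference stays small, which is consistent with the reported negligible overhead.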