PA-Net: Precipitation-Adaptive Mixture-of-Experts for Long-Tail Rainfall Nowcasting
arXiv cs.AI · March 17, 2026
Key Points
- The paper introduces PA-Net, a precipitation-adaptive Transformer framework whose computational budget is explicitly governed by rainfall intensity.
- Its core Precipitation-Adaptive MoE (PA-MoE) dynamically scales the number of activated experts per token according to local precipitation magnitude, focusing more capacity on the heavy-rain tail.
- It features a Dual-Axis Compressed Latent Attention mechanism that factorizes spatiotemporal attention with convolutional reduction to manage massive context lengths.
- An intensity-aware training protocol progressively amplifies learning signals from extreme-rainfall samples, with ERA5 experiments showing gains especially in heavy-rain and rainstorm regimes.
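The adaptive routing idea in the second bullet, activating more experts for tokens in heavier rain, can be sketched in plain NumPy. The linear experts, the gate, the linear rain-to-k mapping, and all parameter names below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pa_moe_forward(x, rain, experts, gate_w, k_min=1, k_max=4):
    """Route each token to its top-k experts, where k grows with rain intensity.

    x       : (n_tok, dim) token features
    rain    : (n_tok,) normalized precipitation magnitude in [0, 1]
    experts : list of callables mapping a (dim,) vector to a (dim,) vector
    gate_w  : (dim, n_experts) gating weights
    """
    n_tok, dim = x.shape
    logits = x @ gate_w                                   # (n_tok, n_experts)
    # Heavier rain -> more activated experts (hypothetical linear schedule).
    k_per_tok = np.rint(k_min + (k_max - k_min) * np.clip(rain, 0, 1)).astype(int)
    out = np.zeros_like(x)
    for t in range(n_tok):
        k = k_per_tok[t]
        top = np.argsort(logits[t])[::-1][:k]             # indices of top-k experts
        w = np.exp(logits[t, top])
        w /= w.sum()                                      # renormalized softmax over the k chosen
        for wj, e in zip(w, top):
            out[t] += wj * experts[e](x[t])
    return out, k_per_tok

# Demo: 4 random linear experts, 5 tokens spanning dry to rainstorm.
dim, n_experts = 8, 4
Ws = [rng.normal(size=(dim, dim)) / dim**0.5 for _ in range(n_experts)]
experts = [lambda v, W=W: v @ W for W in Ws]
gate_w = rng.normal(size=(dim, n_experts))
x = rng.normal(size=(5, dim))
rain = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
out, k = pa_moe_forward(x, rain, experts, gate_w)
```

With this schedule a dry token activates a single expert while a rainstorm token activates all four, so compute concentrates on the heavy-rain tail exactly as the bullet describes.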
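The intensity-aware protocol in the last bullet, which progressively amplifies the learning signal from extreme-rainfall samples, could be realized as an epoch-ramped loss weighting. The 10 mm/h threshold, the linear ramp, and `gamma_max` here are hypothetical choices for illustration only:

```python
import numpy as np

def intensity_weight(rain_mm_h, epoch, total_epochs, gamma_max=3.0, thresh=10.0):
    """Per-pixel loss weight that upweights heavy-rain pixels more as training advances.

    rain_mm_h : array of ground-truth rain rates (mm/h)
    epoch     : current epoch (0-indexed)
    """
    ramp = epoch / max(total_epochs - 1, 1)       # 0 at start of training -> 1 at the end
    gamma = 1.0 + ramp * (gamma_max - 1.0)        # amplification factor grows over epochs
    return np.where(rain_mm_h >= thresh, gamma, 1.0)

def weighted_mse(pred, target, rain_mm_h, epoch, total_epochs):
    """MSE with the progressive heavy-rain weighting applied per pixel."""
    w = intensity_weight(rain_mm_h, epoch, total_epochs)
    return float(np.mean(w * (pred - target) ** 2))

# Demo: a heavy-rain pixel (50 mm/h) carries weight 1.0 at epoch 0 and 3.0 at the final epoch.
w_start = intensity_weight(np.array([0.0, 50.0]), epoch=0, total_epochs=10)
w_end = intensity_weight(np.array([0.0, 50.0]), epoch=9, total_epochs=10)
```

Starting with uniform weights and only gradually amplifying the tail is one way to get the "progressive" behavior the summary mentions without destabilizing early training.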