PA-Net: Precipitation-Adaptive Mixture-of-Experts for Long-Tail Rainfall Nowcasting
arXiv cs.AI / 3/17/2026
📰 News · Models & Research
Key Points
- The paper introduces PA-Net, a precipitation-adaptive Transformer framework whose computational budget is explicitly governed by rainfall intensity.
- Its core Precipitation-Adaptive MoE (PA-MoE) dynamically scales the number of activated experts per token according to local precipitation magnitude, focusing more capacity on the heavy-rain tail.
- It features a Dual-Axis Compressed Latent Attention mechanism that factorizes spatiotemporal attention with convolutional reduction to manage massive context lengths.
- An intensity-aware training protocol progressively amplifies learning signals from extreme-rainfall samples, with ERA5 experiments showing the largest gains in the heavy-rain and rainstorm regimes.
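The adaptive-routing idea behind PA-MoE can be sketched as top-k mixture-of-experts gating where k itself depends on local rainfall. The paper's exact thresholds, expert shapes, and gating are not given here; the intensity bands, `intensity_to_k`, and the dense-loop routing below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def intensity_to_k(rain_mm_per_hr, k_min=1, k_max=4,
                   thresholds=(2.0, 10.0, 25.0)):
    """Map local precipitation intensity to an expert count.
    Heavier rain -> more experts activated (hypothetical bands)."""
    k = k_min + sum(rain_mm_per_hr >= t for t in thresholds)
    return min(k, k_max)

def pa_moe_forward(x, rain, experts, gate_w):
    """Precipitation-adaptive MoE sketch.
    x: (T, D) tokens; rain: (T,) intensity per token;
    experts: list of (D, D) weight matrices; gate_w: (D, E) gating weights."""
    out = np.zeros_like(x)
    logits = x @ gate_w                       # (T, E) gating scores
    for t in range(x.shape[0]):
        k = intensity_to_k(rain[t])           # adaptive expert count per token
        top = np.argsort(logits[t])[-k:]      # indices of the k best experts
        w = np.exp(logits[t, top])
        w /= w.sum()                          # softmax over the selected experts
        for e, we in zip(top, w):
            out[t] += we * (x[t] @ experts[e])
    return out
```

A dry token routes through a single expert while a rainstorm token fans out to all four, which is how extra capacity concentrates on the heavy-rain tail.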
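The factorized-attention idea can likewise be sketched: attend along the temporal axis and the spatial axis separately, with keys and values pooled down before each attention to shrink the context. Average pooling stands in here for the paper's convolutional reduction, and the loop-based factorization is a simplification; both are assumptions for illustration.

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def compressed_axis_attention(x, stride=4):
    """Attention along one axis with keys/values pooled by `stride`,
    so each query attends to N // stride compressed latents.
    x: (N, D) tokens along a single temporal or spatial axis."""
    N, D = x.shape
    M = N // stride
    kv = x[:M * stride].reshape(M, stride, D).mean(axis=1)  # pooled (M, D)
    attn = softmax(x @ kv.T / np.sqrt(D), axis=-1)          # (N, M) weights
    return attn @ kv                                        # (N, D) output

def dual_axis_attention(x):
    """Factorized spatiotemporal attention: time axis first, then space.
    x: (T, S, D) frames x spatial positions x channels."""
    T, S, D = x.shape
    # temporal pass: for each spatial position, attend over frames
    xt = np.stack([compressed_axis_attention(x[:, s]) for s in range(S)], axis=1)
    # spatial pass: for each frame, attend over positions
    return np.stack([compressed_axis_attention(xt[t]) for t in range(T)], axis=0)
```

Splitting a T×S joint attention into a T pass and an S pass, each against a pooled key/value set, drops the cost from O((TS)^2) toward O(TS·(T+S)/stride), which is the point of compressing massive radar context lengths.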
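One simple way to realize an intensity-aware protocol is a per-pixel loss weight that grows with rainfall magnitude and ramps up over training. The schedule below (linear ramp, power-law weight, cap `w_max`) is a hypothetical stand-in for the paper's protocol, whose exact form is not described here.

```python
import numpy as np

def intensity_weight(rain, epoch, max_epoch, gamma=2.0, w_max=8.0):
    """Per-pixel loss weight: 1.0 everywhere at epoch 0, then progressively
    amplified for heavy rain, capped at w_max (hypothetical schedule)."""
    ramp = min(epoch / max_epoch, 1.0)            # 0 -> 1 over training
    base = 1.0 + (rain / 10.0) ** gamma           # heavier rain -> larger weight
    return 1.0 + ramp * np.minimum(base - 1.0, w_max - 1.0)

def weighted_mse(pred, target, epoch, max_epoch):
    """MSE with intensity-dependent weights keyed to the target rainfall."""
    w = intensity_weight(target, epoch, max_epoch)
    return float(np.mean(w * (pred - target) ** 2))
```

Early in training every pixel counts equally; by the final epochs a rainstorm pixel contributes several times the gradient of a drizzle pixel, steering capacity toward the long-tail regimes where nowcasting models usually underperform.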