Decoupled DiLoCo for Resilient Distributed Pre-training
arXiv cs.CL / 4/24/2026
📰 News / Models & Research
Key Points
- The paper argues that SPMD-based distributed pre-training is fragile because tight accelerator coupling makes the whole run stall when any worker slows down or fails.
- It introduces Decoupled DiLoCo, which breaks lock-step synchronization by running multiple independent learners that perform local optimization and asynchronously send parameter fragments to a central synchronizer.
- The synchronizer aggregates updates while bypassing failed or straggling learners using a minimum quorum, an adaptive grace window, and dynamic token-weighted merging (see the sketch after this list).
- The authors report improved training efficiency in failure-prone environments (tested with millions of simulated chips) with zero global downtime, while retaining competitive performance on text and vision tasks for both dense and mixture-of-experts models.
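To make the aggregation rule concrete, below is a minimal sketch of a token-weighted merge gated by a minimum quorum and a grace window, assuming each learner ships a parameter fragment tagged with its local token count and arrival time. The function name `merge_fragments`, the fragment fields, and the default thresholds are illustrative assumptions, not the paper's actual interface or policy.

```python
import time
import numpy as np

def merge_fragments(fragments, min_quorum=2, grace_seconds=30.0):
    """Hypothetical synchronizer step: token-weighted averaging of the
    parameter fragments that arrived within the grace window, skipping
    learners that stalled or failed. Names and defaults are illustrative."""
    now = time.time()
    # Keep only fragments received inside the grace window.
    fresh = [f for f in fragments if now - f["received_at"] <= grace_seconds]
    if len(fresh) < min_quorum:
        # Not enough healthy learners; keep the previous global parameters.
        return None
    # Weight each learner's contribution by the tokens it processed locally.
    total_tokens = sum(f["tokens"] for f in fresh)
    merged = sum(f["params"] * (f["tokens"] / total_tokens) for f in fresh)
    return merged

# Example: two healthy learners and one straggler whose stale update is ignored.
fragments = [
    {"params": np.ones(4) * 1.0, "tokens": 4096, "received_at": time.time()},
    {"params": np.ones(4) * 2.0, "tokens": 2048, "received_at": time.time()},
    {"params": np.ones(4) * 9.0, "tokens": 1024, "received_at": time.time() - 600},
]
print(merge_fragments(fragments))  # token-weighted mix of the first two fragments
```

Returning the previous parameters when the quorum is not met is one plausible way to avoid a global stall; the paper's adaptive grace window and exact fallback behavior may differ.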