Resilient AI Supercomputer Networking using MRC and SRv6

arXiv cs.AI / 5/7/2026


Key Points

  • The paper argues that tail latency is the dominant bottleneck for synchronous large-scale AI pretraining, and proposes architectural changes to reduce disruptions.
  • It introduces MRC, an RDMA-based transport protocol that sprays traffic across multiple network paths and actively load-balances to avoid flow collisions.
  • It presents multi-plane Clos topologies to achieve high switch radix and redundancy, enabling two-tier network designs for training clusters exceeding 100K GPUs.
  • It adds static source-routing with SRv6 so MRC can route around failures autonomously, improving resilience during training.
  • The authors report production deployment experience with MRC and static SRv6 routing at OpenAI and Microsoft’s largest training clusters, where it helped training jobs continue despite many network failures.
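The spraying-and-load-balancing behavior attributed to MRC above can be sketched roughly as follows. This is an illustrative toy, not the paper's protocol: the class name, the use of outstanding bytes as the load signal, and the failure-marking API are all assumptions for the example.

```python
import random

class PathSprayer:
    """Toy sketch of per-packet multipath spraying with active load balancing.

    Instead of hashing a whole flow onto one path (where it can collide with
    another flow), each packet goes to the least-loaded healthy path.
    All names and the load metric are hypothetical, not MRC's actual design.
    """

    def __init__(self, paths):
        # Outstanding (unacked) bytes per path serve as the load signal.
        self.load = {p: 0 for p in paths}
        self.failed = set()

    def pick_path(self):
        # Consider only healthy paths; take the least-loaded one,
        # breaking ties randomly to spread traffic.
        healthy = {p: l for p, l in self.load.items() if p not in self.failed}
        min_load = min(healthy.values())
        return random.choice([p for p, l in healthy.items() if l == min_load])

    def send(self, nbytes):
        path = self.pick_path()
        self.load[path] += nbytes
        return path

    def ack(self, path, nbytes):
        self.load[path] -= nbytes

    def mark_failed(self, path):
        # Resilience: stop spraying onto a path once it is known bad.
        self.failed.add(path)
```

For example, two back-to-back sends land on different paths (the second avoids the path already carrying unacked bytes), and after `mark_failed("p0")` all traffic shifts to the surviving path.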

Abstract

Tail latency dominates the performance of synchronous pretraining jobs when running at very large scales. We describe a three-pronged approach: (1) a new RDMA-based transport protocol, MRC, which sprays across many paths and actively load-balances between them, eliminating the issue of flow collisions; (2) the use of multi-plane Clos topologies to get the benefits of high switch radix and redundancy, allowing training clusters well over 100K GPUs to be built as two-tier topologies while increasing physical redundancy; and (3) the use of static source-routing using SRv6 to allow MRC the freedom to bypass failures by itself. We describe our experiences running MRC and static SRv6 routing in production in OpenAI and Microsoft's largest training clusters, where it has been used to train the latest frontier models. We demonstrate how MRC allows AI training jobs to ride out many network failures that previously would have interrupted training.
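The static source-routing idea in prong (3) can be illustrated with a small sketch: the sender encodes an explicit list of waypoints (SRv6 segment identifiers) in the packet, so when one precomputed path contains a failed element it can simply switch to another, with no dependence on routing-protocol reconvergence. The topology, the SIDs, and the candidate-path table here are hypothetical examples, not the paper's configuration.

```python
# Precomputed candidate paths through a two-tier Clos fabric, expressed as
# SRv6 segment lists (lists of segment identifiers, i.e. IPv6 addresses).
# These SIDs are made-up placeholders for illustration.
CANDIDATE_PATHS = [
    ["fc00::s1", "fc00::t2", "fc00::s9"],  # leaf s1 -> spine t2 -> leaf s9
    ["fc00::s1", "fc00::t5", "fc00::s9"],  # leaf s1 -> spine t5 -> leaf s9
]

def choose_segment_list(failed_sids):
    """Return the first precomputed segment list avoiding all failed SIDs.

    Because the source picks the full path itself, bypassing a failure is
    a local decision: no waiting for the network to reroute.
    """
    for path in CANDIDATE_PATHS:
        if not any(sid in failed_sids for sid in path):
            return path
    raise RuntimeError("no healthy precomputed path available")
```

With no failures the first path is used; once the transport marks spine `fc00::t2` as failed, the sender falls back to the path through `fc00::t5` on its own.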