ResBM: Residual Bottleneck Models for Low-Bandwidth Pipeline Parallelism

arXiv cs.AI / 4/15/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper proposes ResBM (Residual Bottleneck Models), an encoder-decoder residual bottleneck architecture designed to be native to low-bandwidth pipeline parallelism for decentralized large-scale training.
  • ResBM places a residual encoder-decoder bottleneck module across pipeline boundaries while preserving an explicit low-rank identity path; the bottleneck is trained end-to-end as part of the model's parameters.
  • The authors report state-of-the-art 128x activation compression with no significant degradation in convergence rates.
  • They also claim ResBM introduces no significant memory or compute overhead, while remaining applicable to standard transformer-based architectures.
  • The work positions pipeline parallelism—still constrained by communication bandwidth—as the main remaining hurdle for decentralized training and offers a targeted architectural solution.

Abstract

Unlocking large-scale low-bandwidth decentralized training has the potential to utilize otherwise untapped compute resources. In centralized settings, large-scale multi-node training is primarily enabled by data and pipeline parallelism, two techniques that require ultra-high-bandwidth communication. While efficient methods now exist for decentralized data parallelism, pipeline parallelism remains the primary challenge. Recent efforts, such as Subspace Models (SM), have claimed up to 100x activation compression but rely on complex constrained optimization and diverge from true end-to-end training. In this paper, we propose a different approach, based on an architecture designed from the ground up to be native to low-bandwidth communication environments while still applicable to any standard transformer-based architecture. We call this architecture the Residual Bottleneck Model (ResBM). It introduces a residual encoder-decoder bottleneck module across pipeline boundaries that can be trained end-to-end as part of the model's parameters while preserving an explicit low-rank identity path. We show that ResBMs achieve state-of-the-art 128x activation compression without significant loss in convergence rates and without significant memory or compute overhead.
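The mechanism described in the abstract can be illustrated with a minimal numerical sketch. This is not the paper's implementation: the projection names (`W_enc`, `W_dec`, `W_id`), the choice of hidden width, and the reading of "explicit low-rank identity path" as a pseudo-inverse-initialized branch applied to the compressed activation are all assumptions made here for illustration. The only thing that crosses the pipeline link is the bottleneck activation `z`, which is 128x smaller than the full hidden state.

```python
import numpy as np

d, ratio = 4096, 128          # hidden width (assumed) and compression ratio
r = d // ratio                # bottleneck width: 32 floats cross the link

rng = np.random.default_rng(0)
# Trainable encoder (last layer of stage k) and decoder (first layer of
# stage k+1) around the pipeline boundary -- shapes are an assumption.
W_enc = rng.standard_normal((r, d)) / np.sqrt(d)
W_dec = rng.standard_normal((d, r)) / np.sqrt(r)
# One reading of the "explicit low-rank identity path": a fixed branch
# initialized as the pseudo-inverse of the encoder, so W_id @ W_enc
# approximates identity on the encoded subspace at initialization.
W_id = np.linalg.pinv(W_enc)

def stage_boundary(x):
    z = W_enc @ x                   # compress: only z is communicated
    return W_dec @ z + W_id @ z     # learned reconstruction + identity path

x = rng.standard_normal(d)
y = stage_boundary(x)
z = W_enc @ x
print(x.nbytes // z.nbytes)         # communication reduction: 128
```

The point of the residual split is that even before any training, the identity branch passes a rank-`r` approximation of the activation through the boundary, so gradients can flow end-to-end from step one while `W_enc`/`W_dec` learn the rest.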