Folding Tensor and Sequence Parallelism for Memory-Efficient Transformer Training & Inference

arXiv cs.CL / April 30, 2026


Key Points

  • The paper introduces Tensor and Sequence Parallelism (TSP), which folds tensor parallelism and sequence parallelism onto the same device axis to reduce both parameter and activation memory per device.
  • Unlike traditional approaches that give TP and SP separate mesh dimensions, TSP assigns each rank both a weight shard and a token/sequence shard, shrinking both parameter and activation memory along a single shared device axis.
  • The authors present two runtime schedules: a sequence-wise key/value exchange method for attention and a ring-based circulation of weight shards with local accumulation for gated MLPs (a minimal ring-MLP sketch follows this list).
  • TSP increases communication volume compared with simpler layouts, but the paper provides theoretical analysis and benchmarks showing it can outperform or match TP, SP, and TP+SP under memory-constrained and long-context settings.
  • The work frames TSP as a hardware-aware parallelism option that can complement other strategies like pipeline parallelism and expert (Mixture-of-Experts) parallelism for dense and MoE models.
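To make the ring-based gated-MLP schedule concrete, here is a minimal single-process NumPy sketch. It is not the paper's implementation: real TSP ranks would each hold one shard and exchange shards with ring neighbours via point-to-point sends, which the rotating index `(r + step) % P` merely simulates, and the shard layout (column-sharded gate/up projections, row-sharded down projection) is our assumption, chosen so that per-shard partial outputs sum to the full result.

```python
import numpy as np

rng = np.random.default_rng(0)
P, T, D, H = 4, 32, 16, 64      # ranks, tokens, model dim, hidden dim
Hs = H // P                     # hidden columns per weight shard

def silu(x):
    return x / (1.0 + np.exp(-x))

# Full (unsharded) gated-MLP weights, used to build shards and to verify.
W_gate = rng.normal(size=(D, H))
W_up   = rng.normal(size=(D, H))
W_down = rng.normal(size=(H, D))
X      = rng.normal(size=(T, D))

# Rank r owns a token shard plus ONE weight shard: gate/up column-sharded,
# down row-sharded, so each shard's partial output simply adds up.
seq    = np.split(X, P)
shards = [(W_gate[:, k*Hs:(k+1)*Hs],
           W_up[:,   k*Hs:(k+1)*Hs],
           W_down[k*Hs:(k+1)*Hs, :]) for k in range(P)]
out    = [np.zeros((T // P, D)) for _ in range(P)]

# Ring schedule: P steps. At each step, every rank applies the weight shard
# it currently holds to its local tokens and accumulates the partial output;
# (r + step) % P stands in for passing shards around the ring.
for step in range(P):
    for r in range(P):
        g, u, d = shards[(r + step) % P]
        out[r] += (silu(seq[r] @ g) * (seq[r] @ u)) @ d

ref = (silu(X @ W_gate) * (X @ W_up)) @ W_down
assert np.allclose(np.concatenate(out), ref)
```

In an actual implementation, the rotation would be a send/recv to ring neighbours overlapped with the local matmuls, which is where TSP's extra communication volume would show up.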

Abstract

We present tensor and sequence parallelism (TSP), a parallel execution strategy that folds tensor parallelism and sequence parallelism onto a single device axis. In conventional multi-dimensional parallelism layouts, tensor parallelism (TP) shards model weights while sequence parallelism (SP) shards tokens, reducing per-device parameter or activation memory, respectively. Traditionally, each scheme is assigned its own mesh dimension. TSP instead assigns each rank both a weight shard and a sequence shard, reducing both parameter and activation memory along the same device axis. We implement this design with two runtime schedules. For attention, ranks iterate over broadcast parameter shards and reconstruct context through a sequence-wise key/value exchange. For gated MLPs, weight shards circulate in a ring while partial outputs accumulate locally. By sharding both weights and activations across the same devices, TSP trades additional communication volume for reduced memory overhead. We provide a theoretical communication and memory analysis, describe our implementation of TSP attention and gated MLP blocks, and benchmark TSP against TP, SP, and TP+SP. These results position TSP as a hardware-aware alternative for long-context and memory-constrained model training, and as a viable axis of parallelism in concert with existing schemes such as pipeline and expert parallelism for dense and mixture-of-experts models.
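The attention schedule can be illustrated the same way. The sketch below is a hedged, single-process approximation of a sequence-wise key/value exchange: each rank keeps its query shard fixed while K/V blocks arrive one ring step at a time, and a running (online) softmax merges each block's contribution so the full context is reconstructed without ever materializing all keys and values on one rank. The non-causal, single-head setup and the Q = K = V = X shortcut are simplifications for brevity, not details from the paper, and the paper's schedule additionally iterates over broadcast parameter shards, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)
P, T, D = 4, 32, 16             # ranks, total tokens, head dim
Tl = T // P                     # tokens per rank

X = rng.normal(size=(T, D))
# For brevity, take Q = K = V = X (single head, no projections, non-causal).
Q_shards, K_shards, V_shards = np.split(X, P), np.split(X, P), np.split(X, P)

outs = []
for r in range(P):              # each "rank" attends with its query shard
    q = Q_shards[r] / np.sqrt(D)
    acc   = np.zeros((Tl, D))   # running numerator
    denom = np.zeros(Tl)        # running softmax denominator
    m     = np.full(Tl, -np.inf)  # running row max, for numerical stability
    for step in range(P):       # one K/V block arrives per exchange step
        k_blk = K_shards[(r + step) % P]
        v_blk = V_shards[(r + step) % P]
        s = q @ k_blk.T
        m_new = np.maximum(m, s.max(axis=1))
        scale = np.exp(m - m_new)          # rescale previous partial sums
        p = np.exp(s - m_new[:, None])
        acc   = acc * scale[:, None] + p @ v_blk
        denom = denom * scale + p.sum(axis=1)
        m = m_new
    outs.append(acc / denom[:, None])

# Reference: full softmax attention over the unsharded sequence.
S = (X / np.sqrt(D)) @ X.T
ref = np.exp(S - S.max(axis=1, keepdims=True))
ref = (ref / ref.sum(axis=1, keepdims=True)) @ X
assert np.allclose(np.concatenate(outs), ref)
```

This is the same online-softmax merging used by ring-attention-style methods; whether TSP uses exactly this merging rule is not stated here, so treat it as one plausible realization of the key/value exchange.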