JigsawRL: Assembling RL Pipelines for Efficient LLM Post-Training

arXiv cs.LG / April 28, 2026


Key Points

  • JigsawRL is a cost-efficient RL post-training framework that introduces “Pipeline Multiplexing” as an additional dimension of RL parallelism to better utilize compute during LLM post-training.
  • It decomposes RL pipelines into Sub-Stage Graphs to reveal intra-stage and inter-worker imbalances that are obscured by traditional stage-level systems (see the sketch after this list).
  • The framework mitigates multiplexing interference via dynamic resource allocation and improves utilization by migrating long-tail rollouts across workers.
  • It coordinates migrated rollouts by casting their placement as a graph-scheduling problem and solving it with a look-ahead heuristic.
  • Experiments on 4–64 H100/A100 GPUs show throughput gains of up to 1.85× over Verl (synchronous RL) and 1.54× over StreamRL and AReaL (asynchronous RL), while supporting heterogeneous pipelines with acceptable latency trade-offs.
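
The article does not spell out the Sub-Stage Graph schema, so the following is a minimal sketch under assumed structure: nodes are sub-stage tasks carrying an owning worker and an estimated duration, and edges are name-based dependencies. The `SubStage` and `SubStageGraph` types and the `ready` helper are hypothetical illustrations, not JigsawRL's API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a Sub-Stage Graph. The article gives no schema, so
# every field and method here is an illustrative assumption: nodes are
# sub-stage tasks (slices of rollout, reward, training, ...) tagged with an
# owning worker and an estimated duration; edges are dependencies.

@dataclass
class SubStage:
    name: str                # e.g. "rollout[w0]"
    worker: int              # worker currently owning this sub-stage
    est_duration: float      # estimated runtime in seconds
    deps: list[str] = field(default_factory=list)  # prerequisite sub-stage names

@dataclass
class SubStageGraph:
    nodes: dict[str, SubStage] = field(default_factory=dict)

    def add(self, node: SubStage) -> None:
        self.nodes[node.name] = node

    def ready(self, done: set[str]) -> list[SubStage]:
        """Sub-stages whose prerequisites have all finished."""
        return [n for n in self.nodes.values()
                if n.name not in done and all(d in done for d in n.deps)]

# Finer-than-stage nodes are what expose intra-stage imbalance: two workers'
# rollout slices can have very different est_duration within one stage.
g = SubStageGraph()
g.add(SubStage("rollout[w0]", worker=0, est_duration=3.0))
g.add(SubStage("rollout[w1]", worker=1, est_duration=9.0))   # long-tail rollout
g.add(SubStage("reward[w0]", worker=0, est_duration=0.5, deps=["rollout[w0]"]))
print([n.name for n in g.ready(done={"rollout[w0]"})])  # ['rollout[w1]', 'reward[w0]']
```

The design point is that a scheduler querying `ready()` sees per-worker sub-stage durations rather than one opaque stage, which is where intra-stage and inter-worker imbalance becomes visible.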

Abstract

We present JigsawRL, a cost-efficient framework that explores Pipeline Multiplexing as a new dimension of RL parallelism. JigsawRL decomposes each pipeline into a Sub-Stage Graph that exposes the intra-stage and inter-worker imbalance hidden by stage-level systems. On this abstraction, JigsawRL resolves multiplexing interference through dynamic resource allocation, eliminates fragmented utilization by migrating long-tail rollouts across workers, and formulates their coordination as a graph-scheduling problem solved with a look-ahead heuristic. On 4–64 H100/A100 GPUs across different agentic RL pipelines and models, JigsawRL achieves up to 1.85× throughput over Verl in synchronous RL and 1.54× over StreamRL and AReaL in asynchronous RL, and supports heterogeneous pipelines with a moderate latency trade-off.
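
The abstract names a look-ahead heuristic for the graph-scheduling formulation but gives no details, so the sketch below shows one classic heuristic in that family (longest estimated duration first, each task onto the soonest-free worker), purely to illustrate why placing long-tail rollouts early avoids stranding workers. `lookahead_schedule` and its inputs are illustrative assumptions, not the paper's algorithm.

```python
import heapq

# Illustrative one-step look-ahead scheduler (NOT JigsawRL's algorithm): order
# tasks longest-first so long-tail rollouts are committed early, and place each
# on the worker projected to be free soonest. Durations are assumed estimates.

def lookahead_schedule(tasks: dict[str, float], num_workers: int) -> dict[str, int]:
    """Map each task name to a worker id, greedily minimizing projected makespan."""
    free_at = [(0.0, w) for w in range(num_workers)]  # (time worker frees up, id)
    heapq.heapify(free_at)
    assignment: dict[str, int] = {}
    for name in sorted(tasks, key=tasks.get, reverse=True):
        t, w = heapq.heappop(free_at)        # soonest-free worker
        assignment[name] = w
        heapq.heappush(free_at, (t + tasks[name], w))
    return assignment

# The long-tail rollout "r3" is placed first and gets a worker to itself,
# while the shorter sub-stages pack onto the other worker:
print(lookahead_schedule({"r1": 2.0, "r2": 2.5, "r3": 9.0, "train": 4.0},
                         num_workers=2))
# -> {'r3': 0, 'train': 1, 'r2': 1, 'r1': 1}
```

A real look-ahead heuristic would additionally account for migration cost and the dependency edges of the Sub-Stage Graph; this greedy longest-first variant only captures the long-tail intuition.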