AI Navigate

CALF: Communication-Aware Learning Framework for Distributed Reinforcement Learning

arXiv cs.AI / 3/16/2026


Key Points

  • CALF trains reinforcement learning policies under realistic network models during simulation to address delays, jitter, and packet loss in distributed deployments.
  • The framework demonstrates that explicitly modeling communication constraints improves real-world deployment performance and reduces the sim-to-real gap for Wi-Fi-like networks.
  • Empirical results across heterogeneous hardware show that network-aware training yields robust performance under varying network conditions compared with network-agnostic baselines.
  • CALF complements existing sim-to-real strategies such as physics-based and visual domain randomisation by treating network conditions as a major transfer axis.
  • The work highlights network conditions as a key axis for robust distributed RL in edge-cloud environments, with broad implications for practical deployment.

Abstract

Distributed reinforcement learning policies face network delays, jitter, and packet loss when deployed across edge devices and cloud servers. Standard RL training assumes zero-latency interaction, causing severe performance degradation under realistic network conditions. We introduce CALF (Communication-Aware Learning Framework), which trains policies under realistic network models during simulation. Systematic experiments demonstrate that network-aware training substantially reduces deployment performance gaps compared to network-agnostic baselines. Distributed policy deployments across heterogeneous hardware validate that explicitly modelling communication constraints during training enables robust real-world execution. These findings establish network conditions as a major axis of sim-to-real transfer for Wi-Fi-like distributed deployments, complementing physics and visual domain randomisation.
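The paper does not spell out CALF's exact network model, but the core idea of training under delays, jitter, and packet loss can be illustrated with a simple environment wrapper. The sketch below is a hypothetical, minimal illustration (class and parameter names are assumptions, not from the paper): actions traverse a lossy, delayed channel, and the environment applies the most recently delivered action, holding the previous one when nothing arrives.

```python
import random

class NetworkAwareWrapper:
    """Hypothetical sketch of a Wi-Fi-like channel on the action path:
    each action packet is delayed by base_delay +/- jitter env steps
    and dropped with probability loss_prob. Not the paper's actual API."""

    def __init__(self, env, base_delay=2, jitter=1, loss_prob=0.1, seed=0):
        self.env = env
        self.base_delay = base_delay  # mean delay, in environment steps
        self.jitter = jitter          # uniform jitter, +/- steps
        self.loss_prob = loss_prob    # per-packet drop probability
        self.rng = random.Random(seed)
        self.queue = []               # in-flight packets: (arrival_step, action)
        self.step_count = 0
        self.last_action = None       # fallback when no packet has arrived yet

    def step(self, action):
        self.step_count += 1
        if self.rng.random() >= self.loss_prob:  # packet survives the channel
            delay = self.base_delay + self.rng.randint(-self.jitter, self.jitter)
            self.queue.append((self.step_count + max(delay, 0), action))
        # deliver packets whose arrival step has passed; keep the rest in flight
        arrived = [a for (t, a) in self.queue if t <= self.step_count]
        self.queue = [(t, a) for (t, a) in self.queue if t > self.step_count]
        if arrived:
            self.last_action = arrived[-1]  # apply the newest delivered action
        return self.env.step(self.last_action)

class EchoEnv:
    """Toy stand-in environment that just echoes the applied action."""
    def step(self, action):
        return action

# With loss and jitter disabled, the two-step delay is visible directly:
# actions 'a', 'b' are still in flight, so the env sees None at first.
env = NetworkAwareWrapper(EchoEnv(), base_delay=2, jitter=0, loss_prob=0.0)
print(env.step("a"))  # → None  (nothing delivered yet)
print(env.step("b"))  # → None
print(env.step("c"))  # → a     ('a' arrives two steps after it was sent)
```

Training a policy inside such a wrapper, with the delay and loss parameters randomised across episodes, is one plausible way to treat network conditions as a domain-randomisation axis alongside physics and visuals.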