Abstract
Distributed online convex optimization (D-OCO) is a powerful paradigm for modeling distributed scenarios with streaming data. However, in large-scale applications the communication cost between local learners and the central server becomes substantial. To alleviate this bottleneck, we initiate the study of D-OCO with compressed communication. First, to quantify the impact of compression, we establish lower bounds of $\Omega(\delta^{-1/2}\sqrt{T})$ and $\Omega(\delta^{-1}\log T)$ for convex and strongly convex loss functions, respectively, where $\delta \in (0,1]$ is the compression ratio. Second, we propose an optimal algorithm that enjoys matching regret bounds of $O(\delta^{-1/2}\sqrt{T})$ and $O(\delta^{-1}\log T)$ for convex and strongly convex loss functions, respectively. Our method incorporates the error-feedback mechanism into the Follow-the-Regularized-Leader (FTRL) framework to address the coupling between the compression error and the projection error, and employs an online compression strategy to mitigate the error accumulated by bidirectional compression. Moreover, our online method is general and extends to the offline stochastic setting via online-to-batch conversion, where we establish convergence rates of $O(\delta^{-1/2}T^{-1/2})$ and $O(\delta^{-1}T^{-1})$ for convex and strongly convex loss functions, respectively, yielding the first guarantees for distributed non-smooth optimization with compressed communication and domain constraints.
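To make the error-feedback mechanism concrete, the following is a minimal sketch, assuming a $\delta$-contractive compressor $\mathcal{C}$ satisfying $\|\mathcal{C}(x) - x\|^2 \le (1-\delta)\|x\|^2$; the residual $e_t$, stepsize $\eta$, and quadratic regularizer are illustrative choices, not necessarily the paper's exact construction. A learner compresses its (sub)gradient $g_t$ together with the accumulated residual, and the FTRL iterate is computed from the compressed surrogates over the constraint set $\mathcal{X}$:
\begin{align*}
\hat{g}_t &= \mathcal{C}(g_t + e_t), \qquad e_{t+1} = g_t + e_t - \hat{g}_t, \\
x_{t+1} &= \operatorname*{argmin}_{x \in \mathcal{X}} \; \Big\langle \textstyle\sum_{s=1}^{t} \hat{g}_s,\, x \Big\rangle + \frac{1}{2\eta} \|x\|_2^2 .
\end{align*}
The residual update carries forward the information discarded by $\mathcal{C}$, which is what prevents the compression error from compounding with the projection onto $\mathcal{X}$.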