Abstract
In federated contextual linear bandits, high data dimensionality incurs prohibitive computation and communication costs: each local agent performs an O(d^3)-time determinant computation and uploads O(d^2) parameters, where d is the data dimension, making existing algorithms unscalable. To relieve these scaling bottlenecks, this paper proposes Federated Sketch Contextual Linear Bandits (FSCLB). On the computation side, FSCLB uses SVD to obtain the determinant required for communication indirectly, eliminating the prohibitive cost of direct determinant calculation and cutting complexity from O(d^3) to O(l^2 d) per round, where l < d is the sketch size. On the communication side, FSCLB introduces a double-sketch strategy that reduces both upload and download costs from O(d^2) to O(ld). Naively incorporating sketch updates into federated contextual linear bandits can destroy the local increment and invalidate the asynchronous communication condition; FSCLB resolves this by replacing the covariance matrix with the sketch matrix in the communication-triggering test. Theoretically, FSCLB achieves a regret bound of \widetilde{O}((\sqrt{d}+\sqrt{M\varepsilon_l})\sqrt{lT}), where M is the number of agents and \varepsilon_l is upper bounded by the spectral tail of the covariance matrix; when l exceeds the rank of the covariance matrix, the bound simplifies to \widetilde{O}(\sqrt{ldT}), matching the optimal no-sketch regret. Experiments on both synthetic and real-world datasets show that FSCLB reduces computational and communication costs by over 90% while sacrificing only a negligible amount of cumulative reward.
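To make the SVD-based determinant computation concrete, the following is a minimal illustrative Python sketch, not the paper's implementation. It assumes a sketch matrix S of shape l x d approximating the design matrix (so that S^T S approximates the covariance), a ridge regularizer lam, and a log-determinant communication trigger with threshold D; the names logdet_from_sketch, should_communicate, and D are hypothetical.

```python
import numpy as np

def logdet_from_sketch(S: np.ndarray, d: int, lam: float = 1.0) -> float:
    """Approximate log det(lam*I_d + X^T X) from a sketch S (l x d) of X.

    If S^T S approximates X^T X, the eigenvalues of lam*I_d + S^T S are
    lam + sigma_i^2 for the l singular values sigma_i of S, plus lam for
    the remaining d - l directions (assuming l < d), so
        log det = sum_i log(lam + sigma_i^2) + (d - l) * log(lam).
    The thin SVD of the l x d sketch costs O(l^2 d), versus O(d^3) for a
    direct determinant of the full d x d covariance matrix.
    """
    l = S.shape[0]
    sigma = np.linalg.svd(S, compute_uv=False)  # l singular values, O(l^2 d)
    return float(np.sum(np.log(lam + sigma**2)) + (d - l) * np.log(lam))

def should_communicate(S_local, S_synced, d, lam=1.0, D=0.5):
    """Trigger an upload once the sketched log-determinant has grown by
    more than D since the last synchronization, standing in for the usual
    covariance-based determinant-ratio test."""
    return logdet_from_sketch(S_local, d, lam) - logdet_from_sketch(S_synced, d, lam) > D
```

Since the test depends only on the l singular values of the sketch, the agent never materializes the d x d covariance matrix, which is the source of both the O(d^3) computation and the O(d^2) communication saved here.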