Abstract
User-space Adaptive Bitrate (ABR) algorithms cannot observe the transport-layer signals that matter most, such as minimum RTT and instantaneous delivery rate, so they react to network changes only after the damage has already propagated to the playout buffer. We present eBandit, a framework that relocates both network monitoring and ABR algorithm selection into the Linux kernel using eBPF. A lightweight epsilon-greedy Multi-Armed Bandit (MAB) runs inside a sockops program, evaluating three ABR heuristics against a reward derived from live TCP metrics. On an adversarial synthetic trace, eBandit achieves a cumulative quality of experience (QoE) of 416.3 \pm 4.9, outperforming the best static heuristic by 7.2\%. Across 42 real-world sessions, eBandit achieves a mean QoE per chunk of 1.241, the highest of all policies, demonstrating that kernel-resident bandit learning transfers to heterogeneous mobile conditions.
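To make the core mechanism concrete, the following is a minimal user-space sketch of an epsilon-greedy bandit choosing among three ABR heuristics. It is illustrative only: the paper's version runs inside an eBPF sockops program with integer arithmetic, and the `qoe_reward` function and its coefficients here are hypothetical stand-ins for the reward derived from live TCP metrics.

```python
import random

class EpsilonGreedyBandit:
    """Epsilon-greedy multi-armed bandit over ABR heuristics (sketch)."""

    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms      # pulls per arm
        self.values = [0.0] * n_arms    # running mean reward per arm

    def select(self):
        # With probability epsilon, explore a random heuristic;
        # otherwise exploit the current best estimate.
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental mean update: O(1) state per arm, eBPF-map friendly.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

def qoe_reward(delivery_rate_mbps, min_rtt_ms):
    # Hypothetical reward shape: favor delivery rate, penalize latency.
    return delivery_rate_mbps - 0.01 * min_rtt_ms

# Simulated sessions: arm 2 has the highest expected reward.
random.seed(0)
bandit = EpsilonGreedyBandit(n_arms=3)
true_conditions = [(8.0, 80.0), (10.0, 60.0), (12.0, 40.0)]
for _ in range(2000):
    arm = bandit.select()
    rate, rtt = true_conditions[arm]
    reward = qoe_reward(rate + random.gauss(0, 1), rtt + random.gauss(0, 5))
    bandit.update(arm, reward)
print(bandit.counts.index(max(bandit.counts)))  # arm with the most pulls
```

The per-arm state (a count and a running mean) is small enough to live in a BPF map, which is what makes this class of bandit practical in a kernel context.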