eBandit: Kernel-Driven Reinforcement Learning for Adaptive Video Streaming

arXiv cs.AI / 4/13/2026

Key Points

  • The paper argues that user-space ABR algorithms miss critical transport-layer signals like minimum RTT and instantaneous delivery rate, causing delayed responses that worsen playout-buffer damage.
  • It introduces eBandit, a Linux-kernel framework using eBPF to move both network monitoring and ABR algorithm selection into the kernel, reducing reaction latency.
  • eBandit uses a lightweight epsilon-greedy multi-armed bandit (MAB) inside a sockops program to evaluate multiple ABR heuristics and compute rewards from live TCP metrics.
  • On adversarial synthetic traces, eBandit attains 416.3 ± 4.9 cumulative QoE and beats the best static heuristic by 7.2%, indicating robustness under hostile conditions.
  • On 42 real-world sessions, it achieves a mean QoE per chunk of 1.241, the highest among tested policies, suggesting the kernel-resident bandit approach generalizes to diverse mobile environments.
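The core selection mechanism described above is a standard epsilon-greedy multi-armed bandit over a small set of ABR heuristics. The sketch below is an illustrative user-space Python model of that loop (the paper's version runs in a kernel eBPF sockops program); the heuristic names, epsilon value, and reward function are assumptions for demonstration, not details from the paper.

```python
import random

def epsilon_greedy_bandit(heuristics, reward_fn, rounds=1000, epsilon=0.1, seed=0):
    """Each round, pick one ABR heuristic (arm) epsilon-greedily and update
    its running-average reward. Returns (cumulative reward, per-arm estimates)."""
    rng = random.Random(seed)
    counts = [0] * len(heuristics)        # pulls per arm
    values = [0.0] * len(heuristics)      # running mean reward per arm
    total = 0.0
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(len(heuristics))                        # explore
        else:
            arm = max(range(len(heuristics)), key=lambda i: values[i])  # exploit
        r = reward_fn(heuristics[arm])
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]  # incremental mean update
        total += r
    return total, values
```

In eBandit the reward is derived from live TCP metrics rather than a Python callback, and per-arm state would live in an eBPF map so it persists across sockops invocations.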

Abstract

User-space Adaptive Bitrate (ABR) algorithms cannot see the transport-layer signals that matter most, such as minimum RTT and instantaneous delivery rate, and they respond to network changes only after damage has already propagated to the playout buffer. We present eBandit, a framework that relocates both network monitoring and ABR algorithm selection into the Linux kernel using eBPF. A lightweight epsilon-greedy Multi-Armed Bandit (MAB) runs inside a sockops program, evaluating three ABR heuristics against a reward derived from live TCP metrics. On an adversarial synthetic trace, eBandit achieves 416.3 ± 4.9 cumulative QoE, outperforming the best static heuristic by 7.2%. On 42 real-world sessions, eBandit achieves a mean QoE per chunk of 1.241, the highest across all policies, demonstrating that kernel-resident bandit learning transfers to heterogeneous mobile conditions.
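To make the "reward derived from live TCP metrics" concrete, here is a minimal QoE-style reward sketch over the two signals the abstract names (instantaneous delivery rate and minimum RTT, with smoothed RTT as a queuing-delay proxy). The functional form, weights, and penalty terms are illustrative assumptions, not the paper's actual reward.

```python
def qoe_reward(bitrate_mbps, delivery_rate_mbps, srtt_ms, min_rtt_ms,
               rebuffer_penalty=4.0, smooth_penalty=1.0, prev_bitrate_mbps=None):
    """Hypothetical per-chunk reward: favor high bitrate, penalize choosing a
    bitrate above the measured delivery rate (rebuffering risk), penalize
    queuing delay (srtt above the min-RTT baseline), and penalize switches."""
    queuing_delay_s = max(srtt_ms - min_rtt_ms, 0.0) / 1000.0
    reward = bitrate_mbps
    if bitrate_mbps > delivery_rate_mbps:  # requested more than the path delivers
        reward -= rebuffer_penalty * (bitrate_mbps - delivery_rate_mbps)
    reward -= queuing_delay_s
    if prev_bitrate_mbps is not None:      # smoothness term across chunks
        reward -= smooth_penalty * abs(bitrate_mbps - prev_bitrate_mbps)
    return reward
```

In the kernel setting these inputs would come from `tcp_sock` fields exposed to the sockops program (e.g. delivery rate and RTT estimates) rather than function arguments.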