Dynamic Regret for Online Regression in RKHS via Discounted VAW and Subspace Approximation
arXiv cs.LG · April 29, 2026
📰 News · Models & Research
Key Points
- The paper studies online regression with squared loss in an RKHS under a dynamic regret criterion, comparing the learner to a time-varying function sequence.
- It derives dynamic regret bounds that depend on the comparator sequence’s path length measured in the RKHS norm.
- The approach extends the finite-dimensional discounted Vovk–Azoury–Warmuth (VAW) forecaster to the RKHS setting by projecting onto finite-dimensional subspaces (a minimal sketch of the base forecaster follows this list).
- On a fixed subspace, the method runs an ensemble of discounted VAW forecasters over a geometric grid of discount factors, controlling the extra approximation error via the uniform projection error of kernel sections (see the ensemble sketch below).
- The authors introduce an orthogonal truncation framework for building RKHS subspaces from kernel feature expansions (including Mercer truncation and kernel-section subspaces), yielding regime-dependent bounds for Gaussian, analytic dot-product, and Matérn kernels (a kernel-section sketch closes this section).
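As a concrete reference point, the sketch below shows a discounted VAW forecaster on a fixed d-dimensional feature map. It is a minimal illustration, not the paper's exact update: the class name, the choice to leave the ridge regularizer undiscounted, and the default parameters are all assumptions.

```python
import numpy as np

class DiscountedVAW:
    """Minimal sketch of a discounted Vovk-Azoury-Warmuth forecaster on a
    fixed d-dimensional feature map (assumed setup, not the paper's code)."""

    def __init__(self, dim, discount=0.99, reg=1.0):
        self.gamma = discount          # discount factor on past observations
        self.reg = reg                 # ridge regularizer (kept undiscounted here)
        self.S = np.zeros((dim, dim))  # discounted Gram matrix of features
        self.b = np.zeros(dim)         # discounted feature-label correlations

    def predict(self, x):
        # VAW twist: the current feature x_t enters the Gram matrix
        # *before* the prediction is made.
        self.S = self.gamma * self.S + np.outer(x, x)
        self.b = self.gamma * self.b
        A = self.reg * np.eye(len(x)) + self.S
        theta = np.linalg.solve(A, self.b)
        return float(theta @ x)

    def update(self, x, y):
        # Fold the revealed label into the discounted statistics.
        self.b += y * x
```

Usage follows the standard online protocol: call predict(x_t), observe y_t, then call update(x_t, y_t).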
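The ensemble over a geometric grid of discount factors could look like the following, reusing the DiscountedVAW sketch above. The grid construction (gamma_k = 1 - 2**(-k)) and the exponentially weighted aggregation on squared loss are plausible stand-ins, not the paper's stated choices.

```python
import numpy as np

def discount_grid(T):
    # Assumed geometric grid: gamma_k = 1 - 2**(-k) gives effective
    # horizons 1/(1 - gamma_k) = 2, 4, 8, ..., covering scales up to T.
    K = max(1, int(np.ceil(np.log2(T))))
    return [1.0 - 2.0 ** (-k) for k in range(1, K + 1)]

class VAWEnsemble:
    """Exponentially weighted average over discounted VAW experts, one per
    grid point. A sketch; the paper's aggregation rule may differ."""

    def __init__(self, dim, T, eta=0.5):
        # eta should scale like 1/(8 * B**2) when predictions and labels
        # lie in [-B, B], the regime where squared loss is exp-concave.
        self.experts = [DiscountedVAW(dim, g) for g in discount_grid(T)]
        self.logw = np.zeros(len(self.experts))
        self.eta = eta

    def predict(self, x):
        self._preds = np.array([e.predict(x) for e in self.experts])
        w = np.exp(self.logw - self.logw.max())  # stable softmax weights
        return float(w @ self._preds / w.sum())

    def update(self, x, y):
        # Exponential-weights update on each expert's squared loss.
        self.logw -= self.eta * (self._preds - y) ** 2
        for e in self.experts:
            e.update(x, y)
```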
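For the subspace side, one concrete way to realize a kernel-section subspace with an orthonormal feature map is the Nyström-style projection below; whether this matches the paper's orthogonal truncation framework is an assumption, and the Gaussian kernel is just one of the families the bounds cover.

```python
import numpy as np

def gaussian_kernel(X, Z, lengthscale=1.0):
    # Illustrative Gaussian (RBF) kernel, one of the families analyzed.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def kernel_section_features(kernel, anchors, jitter=1e-8):
    """Feature map for the orthogonal projection onto span{k(., z_i)}:
    phi(x) = Kzz^(-1/2) k_z(x), so that <phi(x), phi(x')> equals the
    projected kernel k_z(x)' Kzz^(-1) k_z(x'). A generic Nystrom-style
    sketch (assumed, not necessarily the paper's construction)."""
    Kzz = kernel(anchors, anchors) + jitter * np.eye(len(anchors))
    vals, vecs = np.linalg.eigh(Kzz)                # Kzz is symmetric PSD
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return lambda X: kernel(X, anchors) @ inv_sqrt  # rows are phi(x)

# Example: feed these finite-dimensional features to the VAW ensemble above.
# anchors = np.random.randn(50, 3)
# phi = kernel_section_features(gaussian_kernel, anchors)
```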