Online Quantile Regression for Nonparametric Additive Models

arXiv stat.ML · April 13, 2026


Key Points

  • The paper presents a projected functional gradient descent (P-FGD) algorithm for online nonparametric additive quantile regression models using the pinball loss.
  • P-FGD extends functional stochastic gradient descent to quantile regression and is designed to avoid storing historical data while keeping per-step computation at O(J_t ln J_t).
  • It enables fast quantile prediction at time t with only O(J_t) computational time, positioning it as more efficient than commonly used RKHS-based online learning approaches.
  • Using a Hilbert space projection identity, the authors prove that the resulting online quantile estimator achieves the minimax optimal consistency rate O(t^{-2s/(2s+1)}), with s reflecting the smoothness of the quantile function.
  • The work also includes theoretical extensions to mini-batch learning, broadening applicability beyond pure online updates.
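To make the update concrete, here is a minimal sketch of one online projected gradient step under the pinball loss. This is an illustration only, not the paper's P-FGD: the toy polynomial basis, the Euclidean-ball projection, and the step-size choice are placeholder assumptions standing in for the paper's additive basis expansion and Hilbert space projection.

```python
import numpy as np

def pinball_subgradient(residual, tau):
    """Subgradient of the pinball loss rho_tau(u) = u * (tau - 1{u < 0})
    with respect to the prediction, evaluated at residual u = y - f(x)."""
    # d/df rho_tau(y - f) = -(tau - 1{y - f < 0})
    return -(tau - float(residual < 0.0))

def online_pinball_step(theta, x, y, tau, step_size, radius):
    """One illustrative online update for quantile regression on a fixed
    basis expansion. NOT the paper's exact algorithm: basis, projection,
    and step-size rule are placeholder choices."""
    phi = np.array([x**j for j in range(len(theta))])  # toy polynomial basis
    pred = theta @ phi                                  # O(J) prediction
    g = pinball_subgradient(y - pred, tau)
    theta = theta - step_size * g * phi                 # functional gradient step
    norm = np.linalg.norm(theta)
    if norm > radius:                                   # project back onto a ball
        theta = theta * (radius / norm)
    return theta
```

Run over a stream with a decaying step size, the intercept coordinate tracks the τ-quantile of the noise; prediction at each step costs O(J) as in the paper's complexity claim.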

Abstract

This paper introduces a projected functional gradient descent algorithm (P-FGD) for training nonparametric additive quantile regression models in online settings. The algorithm extends the functional stochastic gradient descent framework to the pinball loss. An advantage of P-FGD is that it does not need to store historical data while maintaining O(J_t \ln J_t) computational complexity per step, where J_t denotes the number of basis functions. In addition, predicting the quantile function at time t requires only O(J_t) computational time. These properties make P-FGD considerably more efficient than commonly used RKHS-based approaches to online learning. By leveraging a novel Hilbert space projection identity, we also prove that the proposed online quantile function estimator achieves the minimax optimal consistency rate O(t^{-\frac{2s}{2s+1}}), where t is the current time and s denotes the smoothness degree of the quantile function. Extensions to mini-batch learning are also established.
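For reference, the pinball (check) loss at quantile level $\tau \in (0,1)$, which the abstract's objective is built on, is the standard

$$
\rho_\tau(u) = u\bigl(\tau - \mathbf{1}\{u < 0\}\bigr) = \max\{\tau u,\ (\tau - 1)u\},
$$

applied to the residual $u = y - f(x)$. Minimizing its expectation yields the conditional $\tau$-quantile; $\tau = 0.5$ recovers (half) the absolute loss and hence median regression.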