Abstract
This paper introduces a projected functional gradient descent (P-FGD) algorithm for training nonparametric additive quantile regression models in online settings, extending the functional stochastic gradient descent framework to the pinball loss. A key advantage of P-FGD is that it does not need to store historical data while requiring only O(J_t \ln J_t) computation per update, where J_t denotes the number of basis functions at time t; moreover, predicting the quantile function at time t costs only O(J_t). These properties make P-FGD substantially more efficient than commonly used RKHS-based methods for online learning. By leveraging a novel Hilbert space projection identity, we prove that the proposed online quantile function estimator achieves the minimax optimal convergence rate O(t^{-\frac{2s}{2s+1}}), where t is the current time and s denotes the smoothness of the quantile function. Extensions to mini-batch learning are also established.