Inverse-Free Sparse Variational Gaussian Processes

arXiv stat.ML / 4/2/2026

Key Points

  • The paper addresses the scalability bottleneck of sparse variational Gaussian processes (SVGPs) by avoiding Cholesky-based computations, which are poorly matched to low-precision, massively parallel hardware (see the background sketch after this list).
  • It proposes an improved, better-conditioned inverse-free variational bound and derives a matmul-only natural-gradient update rule for the auxiliary parameter to improve stability and convergence.
  • The authors add practical heuristics (e.g., step-size schedules and stopping criteria) so the optimisation routine can be integrated into existing SVGP workflows.
  • Experiments on regression and classification benchmarks show the method can act as a drop-in replacement for SVGP-based models (including deep GPs), achieving comparable performance and sometimes faster runtimes when tuned.
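
For context, here is a minimal sketch of the standard SVGP evidence lower bound in the usual Hensman-style notation. This is background rather than the paper's new inverse-free bound, and the symbols $\mathbf{K}_{uu}$, $\mathbf{k}_n$, $\mathbf{m}$, $\mathbf{S}$ are the conventional inducing-point quantities, not anything specific to the paper:

$$\mathcal{L} = \sum_{n=1}^{N} \mathbb{E}_{q(f_n)}\big[\log p(y_n \mid f_n)\big] \;-\; \mathrm{KL}\big(q(\mathbf{u}) \,\|\, p(\mathbf{u})\big), \qquad q(\mathbf{u}) = \mathcal{N}(\mathbf{m}, \mathbf{S}),$$

$$q(f_n) = \mathcal{N}\!\Big(\mathbf{k}_n^{\top}\mathbf{K}_{uu}^{-1}\mathbf{m},\;\; k_{nn} - \mathbf{k}_n^{\top}\mathbf{K}_{uu}^{-1}\big(\mathbf{K}_{uu} - \mathbf{S}\big)\mathbf{K}_{uu}^{-1}\mathbf{k}_n\Big).$$

Every appearance of $\mathbf{K}_{uu}^{-1}$ (and the $\log\lvert\mathbf{K}_{uu}\rvert$ hidden inside the KL term) is normally computed via a Cholesky factorisation; inverse-free bounds instead introduce an auxiliary matrix parameter that stands in for these solves, leaving only matrix multiplications.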

Abstract

Gaussian processes (GPs) offer appealing properties but are costly to train at scale. Sparse variational GP (SVGP) approximations reduce cost yet still rely on Cholesky decompositions of kernel matrices, ill-suited to low-precision, massively parallel hardware. While one can construct valid variational bounds that rely only on matrix multiplications (matmuls) via an auxiliary matrix parameter, optimising them with off-the-shelf first-order methods is challenging. We make the inverse-free approach practical by proposing a better-conditioned bound and deriving a matmul-only natural-gradient update for the auxiliary parameter, markedly improving stability and convergence. We further provide simple heuristics, such as step-size schedules and stopping criteria, that make the overall optimisation routine fit seamlessly into existing workflows. Across regression and classification benchmarks, we demonstrate that our method 1) serves as a drop-in replacement in SVGP-based models (e.g., deep GPs), 2) recovers similar performance to traditional methods, and 3) can be faster than baselines when well tuned.
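
To make the hardware argument concrete, the toy sketch below contrasts the Cholesky-based path with a matmul-only path that carries an explicit auxiliary matrix T. This is an illustrative assumption-laden sketch, not the authors' bound, parameterisation, or update rule; the names, sizes, and the identity initialisation of T are all hypothetical.

```python
import torch

# Illustrative contrast (not the paper's method): standard SVGP factorises K_uu
# with a Cholesky decomposition and uses triangular solves, which are sequential
# and fragile in low precision. An inverse-free pipeline instead carries an
# explicit auxiliary matrix T (trained to play the role of K_uu^{-1}), so the
# forward pass is matrix multiplications only.

M, B = 256, 1024                                  # inducing points, minibatch size
A = torch.randn(M, M)
K_uu = A @ A.T + 1e-3 * torch.eye(M)              # toy inducing-point kernel matrix
k_ub = torch.randn(M, B)                          # toy cross-covariances K(Z, X_batch)

# --- Cholesky-based path (what SVGP normally does) ---
L = torch.linalg.cholesky(K_uu)
alpha_chol = torch.cholesky_solve(k_ub, L)        # = K_uu^{-1} K(Z, X_batch)

# --- Inverse-free path (hypothetical sketch) ---
# T is a free parameter optimised jointly with the variational distribution;
# the identity initialisation here is purely illustrative.
T = torch.eye(M, requires_grad=True)
alpha_free = T @ k_ub                             # matrix multiplication only

# At the optimum T should track K_uu^{-1}, so alpha_free approaches alpha_chol;
# the paper's better-conditioned bound and natural-gradient update on the
# auxiliary parameter are what keep this approximation stable during training.
```

The design point is simply that the second path contains no factorisations or triangular solves, which is what makes it amenable to low-precision, massively parallel hardware.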