Optimal uncertainty bounds for multivariate kernel regression under bounded noise: A Gaussian process-based dual function

arXiv cs.LG / 3/18/2026

Key Points

  • The paper introduces a tight, distribution-free uncertainty bound for multi-output kernel-based regression estimates, addressing limitations of existing bounds that rely on strong noise assumptions or struggle with scalability.
  • It employs an unconstrained, duality-based formulation that preserves the same structure as classic Gaussian process confidence bounds, allowing straightforward integration into downstream optimization pipelines.
  • The proposed bound generalizes many existing results and is demonstrated with an example inspired by quadrotor dynamics learning.
  • The work is positioned to improve safe learning-based control by providing reliable uncertainty quantification in practical, multi-output kernel methods.

Abstract

Non-conservative uncertainty bounds are essential for making reliable predictions about latent functions from noisy data, and are thus a key enabler for safe learning-based control. In this domain, kernel methods such as Gaussian process regression are established techniques, thanks to their inherent uncertainty quantification mechanism. Still, existing bounds either pose strong assumptions on the underlying noise distribution, are conservative, do not scale well in the multi-output case, or are difficult to integrate into downstream tasks. This paper addresses these limitations by presenting a tight, distribution-free bound for multi-output kernel-based estimates. It is obtained through an unconstrained, duality-based formulation, which shares the same structure as classic Gaussian process confidence bounds and can thus be straightforwardly integrated into downstream optimization pipelines. We show that the proposed bound generalizes many existing results and illustrate its application using an example inspired by quadrotor dynamics learning.
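
The "classic Gaussian process confidence bounds" the abstract refers to take the form μ(x) ± β·σ(x), where μ and σ are the GP posterior mean and standard deviation and β is a scaling factor. The sketch below (plain numpy, not the paper's method) illustrates that structure for a single-output GP with an RBF kernel; the function `gp_posterior`, the lengthscale, the noise level, and the choice β = 2 are all illustrative assumptions, whereas the paper derives a principled, distribution-free β-style bound for the multi-output case.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel k(a, b) = exp(-||a - b||^2 / (2 l^2))."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * lengthscale**2))

def gp_posterior(X, y, Xs, noise_var=1e-2, lengthscale=0.2):
    """GP posterior mean and standard deviation at test points Xs."""
    K = rbf_kernel(X, X, lengthscale) + noise_var * np.eye(len(X))
    Ks = rbf_kernel(Xs, X, lengthscale)
    Kss = rbf_kernel(Xs, Xs, lengthscale)
    alpha = np.linalg.solve(K, y)                      # K^{-1} y
    mu = Ks @ alpha                                    # posterior mean
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)          # posterior covariance
    sigma = np.sqrt(np.maximum(np.diag(cov), 0.0))     # clip tiny negatives
    return mu, sigma

# Noisy samples of an illustrative latent function f(x) = sin(2*pi*x).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(20, 1))
y = np.sin(2.0 * np.pi * X[:, 0]) + 0.05 * rng.standard_normal(20)

Xs = np.linspace(0.0, 1.0, 50)[:, None]
mu, sigma = gp_posterior(X, y, Xs)

beta = 2.0  # hand-picked here; the paper's contribution is a rigorous bound
lower, upper = mu - beta * sigma, mu + beta * sigma
```

The value of preserving this μ ± β·σ structure, as the key points note, is that `lower` and `upper` can be dropped directly into downstream optimization pipelines (e.g. as constraints in a safe controller) without any special handling.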