Deep Gaussian Processes for Functional Maps

arXiv stat.ML / 4/7/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper tackles function-on-function regression (learning mappings between functional spaces) and highlights limitations of existing methods in modeling complex nonlinear relationships and producing well-calibrated uncertainty under noisy, sparse, or irregular sampling.
  • It introduces Deep Gaussian Processes for Functional Maps (DGPFM), which applies a sequence of Gaussian-process-based linear and nonlinear transformations directly in function space, built from kernel integral transforms, GP conditional means, and nonlinear activations sampled from GPs.
  • A key implementation insight is that, with fixed evaluation locations, discrete approximations of kernel integral transforms reduce to direct functional integral transforms (fixed linear maps on the sampled function values), so diverse transform designs can be swapped in without major structural changes; see the sketch after this list.
  • For scalable probabilistic inference, DGPFM uses inducing points and whitening transformations within a variational learning framework (a generic version is sketched after the abstract).
  • Experiments on synthetic and real benchmarks report improved predictive accuracy and better uncertainty calibration compared with prior approaches.
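
To make the third bullet concrete, here is a minimal, illustrative sketch, not the paper's implementation: a kernel integral transform (Th)(t) = ∫ k(t, s) h(s) ds, discretized on fixed evaluation grids. The kernel choice, grids, and all function names below are assumptions for illustration. Once the grids are fixed, the transform collapses to a single kernel-matrix multiply of the sampled function values.

```python
import numpy as np

def rbf_kernel(t, s, lengthscale=0.2):
    """One illustrative choice of integral kernel k(t, s)."""
    return np.exp(-0.5 * ((t[:, None] - s[None, :]) / lengthscale) ** 2)

def kernel_integral_transform(h_vals, s_grid, t_grid, kernel=rbf_kernel):
    """Discretize (Th)(t) = ∫ k(t, s) h(s) ds with simple quadrature.

    With s_grid and t_grid fixed, K @ (w * h_vals) is a fixed linear map
    of the sampled function values: no explicit integration remains, so
    the discretized kernel transform acts as a direct functional transform.
    """
    w = np.gradient(s_grid)        # quadrature weights (~ trapezoidal rule)
    K = kernel(t_grid, s_grid)     # (len(t_grid), len(s_grid)) kernel matrix
    return K @ (w * h_vals)

# Toy usage: transform a noisy curve observed on an irregular grid.
rng = np.random.default_rng(0)
s = np.sort(rng.uniform(0.0, 1.0, 50))   # irregular input locations
h = np.sin(2 * np.pi * s) + 0.1 * rng.standard_normal(50)
t = np.linspace(0.0, 1.0, 100)           # fixed output locations
g = kernel_integral_transform(h, s, t)   # smoothed, linearly mapped curve
```
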

Abstract

Learning mappings between functional spaces, also known as function-on-function regression, is a fundamental problem in functional data analysis with broad applications, including spatiotemporal forecasting, curve prediction, and climate modeling. Existing approaches often struggle to capture complex nonlinear relationships and/or provide reliable uncertainty quantification when data are noisy, sparse, or irregularly sampled. To address these challenges, we propose Deep Gaussian Processes for Functional Maps (DGPFM). Our method constructs a sequence of GP-based linear and nonlinear transformations directly in function space, leveraging kernel integral transforms, GP conditional means, and nonlinear activations sampled from Gaussian processes. A key insight enables a simplified and flexible implementation: under fixed evaluation locations, discrete approximations of kernel integral transforms reduce to direct functional integral transforms, allowing seamless integration of diverse transform designs. To support scalable probabilistic inference, we adopt inducing points and whitening transformations within a variational learning framework. Empirical results on both real-world and synthetic benchmark datasets demonstrate the advantages of DGPFM in terms of predictive accuracy and uncertainty calibration.
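
As a companion to the inference summary above, the sketch below shows the generic sparse variational GP machinery it refers to: inducing points with a whitening reparameterization. The paper's exact parameterization may differ, and every name and shape here is an illustrative assumption.

```python
import numpy as np

def whitened_svgp_moments(Kzz, Kxz, Kxx_diag, m_white, L_white):
    """Predictive moments of a sparse GP with a whitened variational posterior.

    Rather than parameterizing q(u) = N(m, S) over the inducing values u
    directly, whitening sets u = Lz @ v with Kzz = Lz @ Lz.T and places
    q(v) = N(m_white, S_white), so the free parameters live in a
    standardized space that is typically easier to optimize.
    """
    M = Kzz.shape[0]
    Lz = np.linalg.cholesky(Kzz + 1e-6 * np.eye(M))   # jitter for stability
    A = np.linalg.solve(Lz, Kxz.T).T                  # A = Kxz @ Lz^{-T}
    mean = A @ m_white                                # predictive mean
    S_white = L_white @ L_white.T                     # covariance of q(v)
    # diag(Kxx - Kxz Kzz^{-1} Kzx + A S_white A^T), computed via A:
    var = Kxx_diag + np.sum((A @ (S_white - np.eye(M))) * A, axis=1)
    return mean, var

# Toy usage with an RBF kernel standing in for the model's real kernels.
rng = np.random.default_rng(0)
Z = rng.uniform(size=(10, 1))              # inducing locations
X = rng.uniform(size=(40, 1))              # test inputs
k = lambda a, b: np.exp(-0.5 * (a - b.T) ** 2 / 0.1)
mean, var = whitened_svgp_moments(k(Z, Z), k(X, Z), np.ones(40),
                                  rng.standard_normal(10),
                                  0.1 * np.tril(np.ones((10, 10))))
```

Training would then fit m_white, L_white, the inducing locations, and kernel hyperparameters by maximizing the variational evidence lower bound; the whitening step mainly improves the conditioning of that optimization.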