Meta-Learned Basis Adaptation for Parametric Linear PDEs

arXiv cs.LG / 4/13/2026


Key Points

  • The paper introduces a hybrid physics-informed framework for families of parametric linear PDEs that combines a meta-learned predictor (KAPI) with a least-squares corrector.
  • KAPI uses a shallow task-conditioned model to produce an interpretable, task-adaptive Gaussian basis geometry whose centers, widths, and activity patterns are generated from PDE parameters via a lightweight meta-network.
  • The second-stage corrector transfers the predictor’s basis geometry, adds a background basis, and computes the final solution using a one-shot physics-informed Extreme Learning Machine (PIELM)-style least-squares solve.
  • Experiments on four PDE families (diffusion, transport, advection–diffusion, and variable-speed transport) show that the predictor learns physics-aligned, localized basis placement and the corrector can improve accuracy by one or more orders of magnitude.
  • The method is compared against parametric PINNs, physics-informed DeepONet, and uniform-grid PIELM correctors, with results emphasizing the efficiency and interpretability of predictor-guided basis adaptation.
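The parameter-to-geometry mapping described in the key points above can be sketched as a tiny meta-network. Everything below is an illustrative assumption rather than the paper's actual architecture: the layer sizes, the parameter dimension, the squashing functions used to keep centers and gates in range, and the shortcut of reusing the activity gates as output coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
N_BASIS = 16   # number of adaptive Gaussian basis functions (assumed)
P_DIM = 2      # PDE parameters, e.g. (advection speed, diffusion coefficient)
HIDDEN = 32    # hidden width of the lightweight meta-network (assumed)

# Randomly initialized weights stand in for a trained meta-network.
W1 = rng.normal(0.0, 0.5, (P_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.5, (HIDDEN, 3 * N_BASIS))
b2 = np.zeros(3 * N_BASIS)

def meta_basis(params):
    """Map PDE parameters to a Gaussian basis geometry:
    centers in (0, 1), positive widths, and activity gates in (0, 1)."""
    h = np.tanh(params @ W1 + b1)
    raw_c, raw_w, raw_a = np.split(h @ W2 + b2, 3)
    centers = 1.0 / (1.0 + np.exp(-raw_c))      # sigmoid -> (0, 1)
    widths = 0.1 * np.exp(raw_w.clip(-4, 1))    # positive, bounded widths
    activity = 1.0 / (1.0 + np.exp(-raw_a))     # soft on/off gates
    return centers, widths, activity

def predict(x, params):
    """Predictor output at query points x: a gated Gaussian expansion.
    For illustration the activity gates double as output coefficients."""
    c, w, a = meta_basis(params)
    phi = np.exp(-0.5 * ((x[:, None] - c) / w) ** 2)
    return phi @ a
```

Because the whole geometry is an explicit function of the PDE parameters, the centers, widths, and gates can be inspected directly for any task, which is the interpretability the key points refer to.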

Abstract

We propose a hybrid physics-informed framework for solving families of parametric linear partial differential equations (PDEs) by combining a meta-learned predictor with a least-squares corrector. The predictor, termed KAPI (Kernel-Adaptive Physics-Informed meta-learner), is a shallow task-conditioned model that maps query coordinates and PDE parameters to solution values while internally generating an interpretable, task-adaptive Gaussian basis geometry. A lightweight meta-network maps PDE parameters to basis centers, widths, and activity patterns, thereby learning how the approximation space should adapt across the parametric family. This predictor-generated geometry is transferred to a second-stage corrector, which augments it with a background basis and computes the final solution through a one-shot physics-informed Extreme Learning Machine (PIELM)-style least-squares solve. We evaluate the method on four linear PDE families spanning diffusion, transport, mixed advection–diffusion, and variable-speed transport. Across these cases, the predictor captures meaningful physics through localized and transport-aligned basis placement, while the corrector further improves accuracy, often by one or more orders of magnitude. Comparisons with parametric PINNs, physics-informed DeepONet, and uniform-grid PIELM correctors highlight the value of predictor-guided basis adaptation as an interpretable and efficient strategy for parametric PDE solving.
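The corrector's one-shot PIELM-style least-squares solve can be illustrated on a toy 1D Poisson problem u''(x) = f(x) with homogeneous Dirichlet boundary conditions. This is a minimal sketch, not the paper's implementation: the uniform basis layout stands in for the predictor-transferred geometry plus background basis, and the boundary-row weight is a hand-picked assumption.

```python
import numpy as np

def gaussian(x, c, w):
    """Gaussian basis values and second derivatives at points x."""
    z = (x[:, None] - c[None, :]) / w[None, :]
    phi = np.exp(-0.5 * z**2)
    d2phi = (z**2 - 1.0) / w[None, :]**2 * phi
    return phi, d2phi

# Basis geometry: in the paper this comes from the meta-learned predictor;
# here it is a hand-chosen uniform layout (centers extended slightly past
# the domain to keep boundary accuracy).
centers = np.linspace(-0.1, 1.1, 24)
widths = np.full(24, 0.08)

# Collocation points for the PDE residual u''(x) = f(x) on [0, 1],
# with manufactured f so that the exact solution is u(x) = sin(pi x).
x_int = np.linspace(0.0, 1.0, 64)
f = -(np.pi**2) * np.sin(np.pi * x_int)

phi_int, d2phi_int = gaussian(x_int, centers, widths)
phi_bc, _ = gaussian(np.array([0.0, 1.0]), centers, widths)

# One-shot least-squares solve: stack PDE residual rows and weighted
# Dirichlet boundary rows u(0) = u(1) = 0, then solve for coefficients.
A = np.vstack([d2phi_int, 10.0 * phi_bc])
b = np.concatenate([f, np.zeros(2)])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

# Evaluate the fitted solution against the exact u(x) = sin(pi x).
x_test = np.linspace(0.0, 1.0, 101)
phi_test, _ = gaussian(x_test, centers, widths)
err = np.max(np.abs(phi_test @ coef - np.sin(np.pi * x_test)))
```

Because the PDE is linear and the basis is fixed at solve time, the entire correction step is a single linear least-squares problem, which is what makes the corrector "one-shot" rather than iteratively trained.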