Meta-Learned Basis Adaptation for Parametric Linear PDEs
arXiv cs.LG · April 13, 2026
Key Points
- The paper introduces a hybrid physics-informed framework for families of parametric linear PDEs that combines a meta-learned predictor (KAPI) with a least-squares corrector.
- KAPI is a shallow, task-conditioned predictor whose interpretable, task-adaptive Gaussian basis geometry (centers, widths, and activity patterns) is generated from the PDE parameters by a lightweight meta-network.
- The second-stage corrector transfers the predictor’s basis geometry, adds a background basis, and computes the final solution using a one-shot physics-informed Extreme Learning Machine (PIELM)-style least-squares solve.
- Experiments on four PDE families (diffusion, transport, advection–diffusion, and variable-speed transport) show that the predictor learns physics-aligned, localized basis placement and the corrector can improve accuracy by one or more orders of magnitude.
- The method is compared against parametric PINNs, physics-informed DeepONet, and uniform-grid PIELM correctors, with results emphasizing the efficiency and interpretability of predictor-guided basis adaptation.
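The two-stage idea above can be sketched in code. The snippet below is a minimal illustration, not the paper's implementation: `meta_network` is a hypothetical stand-in for the learned meta-network (here just a fixed rule mapping the PDE parameter to Gaussian centers and widths), and the corrector is a one-shot PIELM-style least-squares solve of a toy parametric Poisson problem u''(x) = f(x; μ) with homogeneous boundary conditions.

```python
import numpy as np

def meta_network(mu, n_basis=25):
    """Hypothetical stand-in for the learned meta-network: map the PDE
    parameter mu to Gaussian basis centers and widths. The real predictor
    is trained; here we just use a fixed spacing-based rule."""
    centers = np.linspace(0.0, 1.0, n_basis)
    widths = np.full(n_basis, 2.5 / (n_basis - 1))  # tied to center spacing
    return centers, widths

def gauss(x, c, w):
    """Gaussian basis functions evaluated at points x (shape: len(x) x len(c))."""
    return np.exp(-0.5 * ((x[:, None] - c[None, :]) / w[None, :]) ** 2)

def gauss_xx(x, c, w):
    """Analytic second derivative of each Gaussian basis function."""
    d = (x[:, None] - c[None, :]) / w[None, :]
    return ((d ** 2 - 1.0) / w[None, :] ** 2) * gauss(x, c, w)

def pielm_corrector(mu, n_col=200):
    """One-shot least-squares solve of u'' = f(x; mu), u(0) = u(1) = 0,
    whose exact solution is u(x) = sin(mu * pi * x)."""
    c, w = meta_network(mu)
    x = np.linspace(0.0, 1.0, n_col)
    f = -(mu * np.pi) ** 2 * np.sin(mu * np.pi * x)  # forcing term
    # Stack PDE-residual rows and (heavily weighted) boundary-condition rows,
    # then solve for the basis coefficients in a single linear least squares.
    A = np.vstack([gauss_xx(x, c, w),
                   100.0 * gauss(np.array([0.0, 1.0]), c, w)])
    b = np.concatenate([f, [0.0, 0.0]])
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x, gauss(x, c, w) @ coef

x, u = pielm_corrector(mu=3.0)
err = np.max(np.abs(u - np.sin(3.0 * np.pi * x)))
print(f"max abs error: {err:.2e}")
```

In the paper's setting the basis geometry comes from the trained predictor (plus a background basis) rather than a fixed rule, but the corrector step has this shape: assemble a linear system from PDE residuals and boundary terms over the adaptive basis, then solve it once by least squares.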