Transformer Neural Processes - Kernel Regression

arXiv stat.ML / 4/20/2026


Key Points

  • The paper presents Transformer Neural Process for Kernel Regression (TNP-KR), a scalable neural process model aimed at modeling posterior predictive distributions of stochastic processes.
  • It addresses efficiency limits of Neural Processes by introducing a Kernel Regression Block plus kernel-based attention bias to reduce attention bottlenecks.
  • Two new attention mechanisms are proposed: scan attention (SA), a memory-efficient mechanism that yields translation invariance when combined with the kernel-based bias, and deep kernel attention (DKA), a Performer-style method that reduces complexity to O(n_c).
  • The authors report the ability to run inference with 100K context points over more than 1M test points in under a minute on a single 24GB GPU.
  • Across benchmarks spanning meta regression, Bayesian optimization, image completion, and epidemiology, TNP-KR with DKA outperforms its Performer-based counterpart on nearly every benchmark, while TNP-KR with SA achieves state-of-the-art results.
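The paper's exact kernel bias is not specified in this summary, but the general idea can be sketched: add a kernel evaluated on the input locations (here an RBF kernel, an assumed form) to the attention logits. Because the bias depends only on differences between locations, it is translation invariant:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kernel_biased_attention(q, k, v, x_q, x_k, lengthscale=1.0):
    """Attention whose logits are biased by a log-RBF kernel on the
    input locations x. Illustrative sketch, not the paper's exact
    formulation: the kernel and its parameters are assumptions."""
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)                       # (n_q, n_k)
    dist2 = ((x_q[:, None, :] - x_k[None, :, :]) ** 2).sum(-1)
    bias = -dist2 / (2 * lengthscale ** 2)              # log of RBF kernel
    return softmax(logits + bias) @ v

rng = np.random.default_rng(0)
n_q, n_k, d = 4, 6, 8
q = rng.normal(size=(n_q, d))
k = rng.normal(size=(n_k, d))
v = rng.normal(size=(n_k, d))
x_q = rng.normal(size=(n_q, 1))
x_k = rng.normal(size=(n_k, 1))
out = kernel_biased_attention(q, k, v, x_q, x_k)
print(out.shape)  # (4, 8)
```

Since the bias enters only through location differences, shifting every input location by the same offset leaves the output unchanged, which is the translation-invariance property the key point refers to.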

Abstract

Neural Processes (NPs) are a rapidly evolving class of models designed to directly model the posterior predictive distribution of stochastic processes. Originally developed as a scalable alternative to Gaussian Processes (GPs), which are limited by O(n^3) runtime complexity, the most accurate modern NPs can often rival GPs but still suffer from an O(n^2) bottleneck due to their attention mechanism. We introduce the Transformer Neural Process - Kernel Regression (TNP-KR), a scalable NP featuring: (1) a Kernel Regression Block (KRBlock), a simple, extensible, and parameter-efficient transformer block with complexity O(n_c^2 + n_c n_t), where n_c and n_t are the number of context and test points, respectively; (2) a kernel-based attention bias; and (3) two novel attention mechanisms: scan attention (SA), a memory-efficient scan-based attention that, when paired with a kernel-based bias, can make TNP-KR translation invariant, and deep kernel attention (DKA), a Performer-style attention that implicitly incorporates a distance bias and further reduces complexity to O(n_c). These enhancements enable both TNP-KR variants to perform inference with 100K context points on over 1M test points in under a minute on a single 24GB GPU. On benchmarks spanning meta regression, Bayesian optimization, image completion, and epidemiology, TNP-KR with DKA outperforms its Performer counterpart on nearly every benchmark, while TNP-KR with SA achieves state-of-the-art results.
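The O(n_c) claim for DKA rests on the standard Performer-style trick: replace softmax(QKᵀ)V with φ(Q)(φ(K)ᵀV), so attention over the context is computed by associativity in time linear in n_c. The sketch below uses a simple positive feature map (elu(z)+1, as in linear transformers) as an assumed stand-in for the paper's learned deep kernel:

```python
import numpy as np

def feature_map(z):
    # Positive feature map (elu(z) + 1); the paper's deep kernel would
    # be a learned network -- this simple map is an assumption.
    return np.where(z > 0, z + 1.0, np.exp(z))

def linear_attention(q, k, v):
    """Kernelized attention computed via associativity:
    phi(Q) @ (phi(K)^T V) costs O(n * d^2) rather than the O(n^2 * d)
    of materializing the full attention matrix."""
    qf, kf = feature_map(q), feature_map(k)
    kv = kf.T @ v                # (d, d_v): one pass over all context points
    z = qf @ kf.sum(axis=0)      # (n_q,): per-query normalizer
    return (qf @ kv) / z[:, None]

rng = np.random.default_rng(1)
n_c, n_t, d = 1000, 5, 16      # many context points, few test points
k = rng.normal(size=(n_c, d))
v = rng.normal(size=(n_c, d))
q = rng.normal(size=(n_t, d))
out = linear_attention(q, k, v)
print(out.shape)  # (5, 16)
```

The context is summarized once into the (d, d_v) matrix `kv` and the d-vector `kf.sum(axis=0)`, after which each query costs O(d²) regardless of n_c; this is what makes inference with 100K context points feasible.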