A Universal Reproducing Kernel Hilbert Space from Polynomial Alignment and IMQ Distance

arXiv cs.LG / 5/6/2026


Key Points

  • The paper proposes the “Yat” kernel, a rational hidden-unit kernel of the form \(k_{b,\varepsilon}(\mathbf{w},\mathbf{x})=\frac{(\mathbf{w}^\top\mathbf{x}+b)^2}{\|\mathbf{x}-\mathbf{w}\|^2+\varepsilon}\) with \(b\ge 0\) and \(\varepsilon>0\), and shows it is positive semidefinite (PSD) throughout that range (a minimal numerical sketch follows this list).
  • For \(b>0\), the Yat kernel is shown to dominate a scaled inverse-multiquadric (IMQ) kernel in the Loewner order, implying universality, characteristicness, and strict positive definiteness on any compact domain.
  • The polynomial numerator introduces non-radial alignment channels that are not captured by finite IMQ expansions, witnessed by a far-field directional trace equal to \((\mathbf{u}^\top\mathbf{w})^2\).
  • The authors give an algebraic construction showing that an IMQ atom is recovered exactly from three positive-bias Yat atoms via a second finite difference in the bias, an identity that is sharp at three atoms in every dimension (checked numerically in the sketch after the abstract).
  • A trained shared-\((b,\varepsilon)\) Yat layer is interpreted as a finite learned-center expansion inside a fixed universal characteristic RKHS, with closed-form RKHS norm \(\boldsymbol{\alpha}^\top\mathbf{K}\boldsymbol{\alpha}\) and explicit diagonal \(k(\mathbf{x},\mathbf{x})=(\|\mathbf{x}\|^2+b)^2/\varepsilon\) that together support a Rademacher generalization bound.

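For concreteness, here is a minimal NumPy sketch (illustrative, not from the paper) that evaluates the Yat kernel as defined above, checks that a random Gram matrix is numerically PSD, and confirms the closed-form diagonal \(k(\mathbf{x},\mathbf{x})=(\|\mathbf{x}\|^2+b)^2/\varepsilon\); the function and variable names are assumptions.

```python
import numpy as np

def yat_kernel(W, X, b=1.0, eps=1e-2):
    """Yat kernel k(w, x) = (w.x + b)^2 / (||x - w||^2 + eps),
    evaluated for all pairs of rows of W (m, d) and X (n, d)."""
    dots = W @ X.T                                          # pairwise inner products w.x
    sq_dists = np.sum((W[:, None, :] - X[None, :, :]) ** 2, axis=2)
    return (dots + b) ** 2 / (sq_dists + eps)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
b, eps = 1.0, 1e-2

K = yat_kernel(X, X, b, eps)

# PSD claim: every eigenvalue of the symmetric Gram matrix should be >= 0 up to round-off.
print("min eigenvalue:", np.linalg.eigvalsh((K + K.T) / 2).min())

# Diagonal claim: k(x, x) = (||x||^2 + b)^2 / eps, since ||x - x||^2 = 0.
diag_closed_form = (np.sum(X**2, axis=1) + b) ** 2 / eps
print("diagonal matches:", np.allclose(np.diag(K), diag_closed_form))
```

The RKHS norm of a finite expansion \(f=\sum_i \alpha_i\, k_{b,\varepsilon}(\mathbf{w}_i,\cdot)\) is then `alpha @ K_centers @ alpha`, with `K_centers` the Gram matrix over the learned centers, which is the quantity the Rademacher bound controls.
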
Abstract

We introduce the Yat kernel
\[
k_{b,\varepsilon}(\mathbf{w},\mathbf{x})=\frac{(\mathbf{w}^\top\mathbf{x}+b)^2}{\|\mathbf{x}-\mathbf{w}\|^2+\varepsilon},\qquad b\ge 0,\ \varepsilon>0,
\]
a rational hidden-unit primitive whose units are Mercer sections over a shared input/weight space. For \(b\ge 0\) the kernel is PSD; for \(b>0\) it dominates a scaled inverse-multiquadric (IMQ) kernel in the Loewner order, yielding fixed-kernel universality, characteristicness, and strict positive definiteness on every compact domain. The polynomial numerator opens nonradial alignment channels absent from finite IMQ expansions, witnessed by the directional far-field trace \(T_\infty g_\varepsilon(\cdot;\mathbf{w},b)(\mathbf{u})=(\mathbf{u}^\top\mathbf{w})^2\). Algebraically, a second finite difference in the bias recovers any IMQ atom exactly from three positive-bias Yat atoms, and three atoms are sharp in every dimension, with exact pointwise equality. A trained shared-\((b,\varepsilon)\) Yat layer is therefore a finite learned-center expansion in a fixed universal characteristic RKHS, with closed-form norm \(\boldsymbol{\alpha}^\top\mathbf{K}\boldsymbol{\alpha}\) and explicit diagonal \((\|\mathbf{x}\|^2+b)^2/\varepsilon\) driving a Rademacher generalization bound.
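
Both algebraic claims are easy to check numerically. Since the numerator \((\mathbf{w}^\top\mathbf{x}+b)^2\) is quadratic in \(b\), a second finite difference in the bias with step \(h\) cancels the polynomial part and leaves \(2h^2/(\|\mathbf{x}-\mathbf{w}\|^2+\varepsilon)\); taking the IMQ atom in the \(1/(\|\mathbf{x}-\mathbf{w}\|^2+\varepsilon)\) form that this difference produces (an assumption about the paper's convention), and choosing \(b>h>0\) so that all three biases stay positive, gives the exact recovery. A hedged sketch of both this identity and the far-field trace:

```python
import numpy as np

def yat(w, x, b, eps):
    """Scalar Yat kernel k(w, x) = (w.x + b)^2 / (||x - w||^2 + eps)."""
    return (w @ x + b) ** 2 / (np.sum((x - w) ** 2) + eps)

rng = np.random.default_rng(1)
w, x = rng.normal(size=5), rng.normal(size=5)
b, h, eps = 1.0, 0.5, 1e-2      # b > h > 0: the biases b+h, b, b-h are all positive

# Second finite difference in the bias: (a+h)^2 - 2a^2 + (a-h)^2 = 2h^2,
# so the quadratic numerator cancels and the IMQ atom is recovered exactly.
second_diff = yat(w, x, b + h, eps) - 2 * yat(w, x, b, eps) + yat(w, x, b - h, eps)
imq = 1.0 / (np.sum((x - w) ** 2) + eps)
print("IMQ recovered exactly:", np.isclose(second_diff, 2 * h**2 * imq))

# Far-field trace: along a ray t*u with ||u|| = 1, k(w, t*u) -> (u.w)^2.
u = rng.normal(size=5); u /= np.linalg.norm(u)
for t in (1e2, 1e4, 1e6):
    print(f"t = {t:.0e}: k = {yat(w, t * u, b, eps):.6f}, limit = {(u @ w) ** 2:.6f}")
```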