Topological Neural Tangent Kernel

arXiv cs.LG / 5/5/2026

Key Points

  • The paper proposes Topological Neural Tangent Kernel (TopoNTK), an infinite-width kernel theory for simplicial message passing that extends neural tangent kernels beyond pairwise-only graph structure.
  • TopoNTK incorporates lower and upper Hodge interactions, enabling it to distinguish simplicial complexes that share the same underlying graph but differ in filled simplices, i.e., in higher-order topology (see the first sketch after this list).
  • It argues that the Hodge decomposition yields an interpretable learning geometry, in which edge signals split into gradient-like, harmonic, and local circulation components (see the second sketch after this list).
  • The work establishes a topological variant of spectral bias: the Hodge components are learned at rates set by the TopoNTK spectrum, with global harmonic modes typically learned most slowly.
  • The authors prove expressivity, Hodge-alignment, spectral-learning, and stability results, and validate them on synthetic simplicial tasks and DBLP higher-order link prediction.

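To ground the expressivity bullet above, here is a minimal NumPy sketch (ours, not the paper's code) of two simplicial complexes that share the same triangle graph but differ in whether the triangle is filled. The lower Hodge term is identical for both, while the upper term separates them:

```python
import numpy as np

# Oriented node-edge incidence of the triangle graph on vertices {0, 1, 2};
# columns are the edges e01, e12, e02.
B1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]])

# Complex A fills the triangle (0,1,2); its boundary is e01 + e12 - e02.
B2_filled = np.array([[ 1],
                      [ 1],
                      [-1]])
# Complex B has the same graph but leaves the triangle empty.
B2_empty = np.zeros((3, 0))

L_down = B1.T @ B1                  # lower Hodge interactions: coupling via shared vertices
L_up_A = B2_filled @ B2_filled.T    # upper Hodge interactions: coupling via the filled triangle
L_up_B = B2_empty @ B2_empty.T      # identically zero: no filled simplices

print(np.allclose(L_up_A, L_up_B))  # False: any kernel using both terms separates A from B
```

A purely graph-based kernel sees only `L_down` and therefore cannot tell the two complexes apart.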
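The Hodge decomposition in the interpretability bullet is equally concrete. The following sketch (again ours, using the standard projectors onto im(B1^T) and im(B2), not the paper's implementation) splits an edge signal into its three components:

```python
import numpy as np

B1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]])     # node-edge incidence of the triangle graph
B2 = np.array([[ 1],
               [ 1],
               [-1]])             # boundary of the filled triangle (0,1,2)

rng = np.random.default_rng(0)
x = rng.standard_normal(3)        # an arbitrary edge signal

P_grad = B1.T @ np.linalg.pinv(B1.T)  # orthogonal projector onto im(B1^T)
P_curl = B2 @ np.linalg.pinv(B2)      # orthogonal projector onto im(B2)

x_g = P_grad @ x                  # gradient-like part: differences of a node potential
x_c = P_curl @ x                  # local circulation around the filled triangle
x_h = x - x_g - x_c               # harmonic part: ker(L1), one mode per unfilled hole

print(np.allclose(x, x_g + x_c + x_h))  # True: an orthogonal three-way split
# Here x_h is (numerically) zero because the triangle is filled; remove B2 and
# the circulation component migrates into the harmonic part instead.
```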
Abstract

Graph neural tangent kernels give a principled infinite-width theory for graph neural networks, but inherit a basic limitation of graph models: they see only pairwise structure. Many relational systems contain higher-order interactions that are more naturally represented by simplicial complexes. We introduce the Topological Neural Tangent Kernel (TopoNTK), an infinite-width kernel for simplicial message passing on edge features. TopoNTK combines lower Hodge interactions, capturing graph-like coupling through shared vertices, with upper Hodge interactions, capturing coupling through filled simplices. This makes the kernel sensitive to topology invisible to graph kernels, allowing complexes with the same graph but different filled simplices to induce different kernels. Beyond expressivity, the Hodge structure gives the kernel an interpretable learning geometry. Edge signals decompose into gradient-like, harmonic, and local circulation components, and the spectrum of the TopoNTK determines how quickly each component is learned. This yields a topological form of spectral bias: components aligned with large-eigenvalue modes are learned quickly, while global harmonic modes, retained through the residual channel, often lie at smaller eigenvalues and are learned more slowly. We prove expressivity, Hodge-alignment, spectral learning, and stability properties, and validate them on synthetic simplicial tasks and DBLP higher-order link prediction. The results show that topology is not merely extra structure; it can provide coordinates that make relational learning more faithful, interpretable, and effective.
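As intuition for the spectral-bias claim, recall the standard NTK result that under kernel gradient flow the training residual along a kernel eigenvector with eigenvalue λᵢ decays as exp(−η λᵢ t). The eigenvalues below are hypothetical placeholders for a TopoNTK spectrum, not values from the paper; they only illustrate why small-eigenvalue harmonic modes are learned slowly:

```python
import numpy as np

eta = 0.1
lam = np.array([4.0, 1.0, 0.05])  # hypothetical kernel eigenvalues per Hodge mode
labels = ["gradient-aligned", "curl-aligned", "harmonic-aligned"]
r0 = np.ones(3)                   # unit initial residual in each eigen-direction

for t in [1, 10, 100]:
    r_t = r0 * np.exp(-eta * lam * t)  # r_i(t) = r_i(0) * exp(-eta * lam_i * t)
    print(f"t={t:3d}: " + ", ".join(f"{n}={r:.3f}" for n, r in zip(labels, r_t)))
# The small-eigenvalue (harmonic) direction decays orders of magnitude more
# slowly, which is the topological spectral bias the paper formalizes.
```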