Adversarial Robustness of NTK Neural Networks
arXiv cs.LG / 4/30/2026
Key Points
- The paper analyzes how NTK (Neural Tangent Kernel) neural networks behave against adversarial attacks when used for nonparametric regression.
- It derives minimax-optimal convergence rates for adversarial regression over Sobolev function spaces.
- It shows that NTK neural networks trained with gradient flow and early stopping can achieve these optimal adversarial robustness rates.
- In the interpolation (overfitting) regime, the study proves that the minimum-norm interpolating solution can be highly vulnerable to even small adversarial perturbations.
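The contrast between the early-stopped and interpolating predictors can be illustrated with kernel gradient flow. The sketch below is hypothetical and simplified: it uses a plain RBF kernel as a stand-in for the NTK, a fixed stopping time rather than a data-driven rule, and synthetic data; it is not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic noisy 1-D regression data.
n = 60
X = rng.uniform(-1, 1, size=(n, 1))
y = np.sin(3 * X[:, 0]) + 0.3 * rng.standard_normal(n)

def kernel(A, B, gamma=10.0):
    # RBF kernel as a stand-in for the NTK (assumption, for illustration).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = kernel(X, X)

# Discretized kernel gradient flow on the least-squares loss:
#   alpha_{t+1} = alpha_t + eta * (y - K @ alpha_t).
# Stopping at a finite time T acts as implicit regularization; letting
# t -> infinity drives the iterate toward the minimum-norm interpolant
# K^+ y, the solution the paper shows can be adversarially fragile.
eta = 1.0 / np.linalg.eigvalsh(K).max()
alpha = np.zeros(n)
for _ in range(200):  # fixed, hypothetical stopping time
    alpha += eta * (y - K @ alpha)

f_early = K @ alpha                # early-stopped predictions on train
f_interp = np.linalg.pinv(K) @ y   # minimum-norm interpolant coefficients

print("early-stopped train MSE:", float(np.mean((f_early - y) ** 2)))
print("interpolant train MSE:  ", float(np.mean((K @ f_interp - y) ** 2)))
```

The early-stopped predictor leaves a nonzero training residual (the noise it declines to fit), while the minimum-norm interpolant drives the training error to essentially zero, which is exactly the regime where the paper's vulnerability result applies.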