Supervised Learning Has a Necessary Geometric Blind Spot: Theory, Consequences, and Minimal Repair
arXiv cs.LG / 24 Apr 2026
Key Points
- The paper proves that empirical risk minimisation (ERM) forces a “geometric blind spot” onto learned representations: minimising a supervised loss necessarily retains label-correlated Jacobian sensitivity, which becomes a nuisance at test time (a sketch of how such sensitivity can be measured follows this list).
- This geometric blind spot is shown to underlie multiple previously separate phenomena—non-robust predictive features, texture bias, corruption fragility, and the robustness–accuracy tradeoff—casting adversarial vulnerability as a structural consequence of supervised learning geometry.
- The authors introduce the Trajectory Deviation Index (TDI), a diagnostic metric that directly captures the quantity bounded by their theorem, which explains why alternative metrics can miss the key failure mode.
- Experiments across multiple vision and language settings (including BERT on SST-2 and the ImageNet ViT backbones used in CLIP, DINO, and SAM) show the blind spot is measurable, worsens with scale in language models, and can be repaired by a method the authors call PMH, which adds a Gaussian-regularisation term to the training loss and reduces the measured blind spot by roughly 11× (a hedged sketch of such a regulariser also follows this list).
- The blind spot is not portrayed as a limitation of current architectures or datasets; it is claimed to hold across proper scoring rules, architectures, and dataset sizes, and to persist even at foundation-model scale.
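To make the first key point concrete, here is a minimal PyTorch sketch of one way to measure input-Jacobian sensitivity. The paper states its bound in terms of label-correlated Jacobian sensitivity; this sketch computes a per-example input-gradient norm of the true-class logit as a simple proxy. The toy model, batch, and labels are placeholders, not the paper's setup.

```python
# Minimal sketch (not the paper's code): estimate the input-Jacobian
# sensitivity of a classifier's logits. Model and data are stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
x = torch.randn(8, 1, 28, 28, requires_grad=True)  # toy batch
y = torch.randint(0, 10, (8,))                     # toy labels

logits = model(x)
# One backward pass gives d(logit_y)/dx for the label column of each example.
selected = logits.gather(1, y.unsqueeze(1)).sum()
(grad,) = torch.autograd.grad(selected, x)

# Per-example L2 norm: large values mean the model is steep along
# label-correlated input directions at these points.
sensitivity = grad.flatten(1).norm(dim=1)
print(sensitivity)
```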
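The repair in the fourth key point adds a Gaussian-regularisation term to training. The summary does not spell out the exact PMH objective, so the sketch below shows one common form such a term can take: penalising the change in logits under small Gaussian input noise. The names `gaussian_smoothness_penalty`, `sigma`, and `lam` are illustrative, not taken from the paper.

```python
# Minimal sketch of a Gaussian-noise regulariser of the kind the summary
# attributes to PMH; the paper's actual objective may differ.
import torch
import torch.nn.functional as F

def gaussian_smoothness_penalty(model, x, sigma=0.1):
    """Mean squared logit change under i.i.d. Gaussian input noise."""
    noise = sigma * torch.randn_like(x)
    return F.mse_loss(model(x + noise), model(x))

def training_loss(model, x, y, lam=1.0):
    # Hypothetical combined objective: supervised loss plus the penalty,
    # weighted by a tunable coefficient lam (not from the paper).
    logits = model(x)
    return F.cross_entropy(logits, y) + lam * gaussian_smoothness_penalty(model, x)
```

The penalty shrinks the model's Jacobian in random input directions around training points, which is the intuitive route by which a Gaussian term could reduce the sensitivity measured above.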