Directional Confusions Reveal Divergent Inductive Biases Through Rate-Distortion Geometry in Human and Machine Vision
arXiv cs.CV / 4/24/2026
Key Points
- The paper shows that humans and deep vision models can achieve similar accuracy while exhibiting systematically different error patterns: the differences lie in the direction of confusion rather than in the overall error rate.
- By comparing matched human and model responses on a natural-image categorization task with 12 perturbation types, the researchers quantify asymmetries in confusion matrices (one way to compute such an asymmetry is sketched after this list) and explain them using a rate-distortion (RD) framework.
- The RD framework summarizes each system with three geometric signatures, slope β, curvature κ, and efficiency AUC (see the second sketch after this list), revealing inductive biases that accuracy alone cannot capture.
- Humans tend to show broad but weak asymmetries, while deep vision models produce sparser, stronger directional “collapses” of confusion.
- Robustness training can reduce global asymmetry, but it does not recreate the human-like graded breadth-strength profile; mechanistic simulations suggest the two asymmetry structures shift the RD frontier in opposite directions.
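
The summary does not say exactly how the paper measures directional confusion, so the following is a minimal sketch of one common approach: compare each off-diagonal cell of a confusion matrix with its transpose to get a signed, normalized asymmetry per class pair, then summarize how many pairs are asymmetric (breadth) and how strongly (strength). The names `directional_asymmetry` and `breadth_and_strength` and the 0.1 threshold are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def directional_asymmetry(confusion: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Per-pair directional asymmetry of a confusion matrix.

    confusion[i, j] = number of trials whose true class is i but were labeled j.
    Returns A with A[i, j] > 0 when i -> j confusions outnumber j -> i confusions,
    i.e. a directional "collapse" toward class j. Values are normalized to [-1, 1].
    """
    C = confusion.astype(float)
    np.fill_diagonal(C, 0.0)            # ignore correct responses
    total = C + C.T + eps               # total confusions per unordered class pair
    return (C - C.T) / total

def breadth_and_strength(A: np.ndarray, thresh: float = 0.1):
    """Summarize an asymmetry matrix by breadth (fraction of class pairs whose
    asymmetry magnitude exceeds a threshold) and strength (mean magnitude)."""
    upper = np.abs(A[np.triu_indices_from(A, k=1)])
    breadth = float(np.mean(upper > thresh))
    strength = float(upper.mean())
    return breadth, strength

# Toy usage: class 0 is often mistaken for class 2, rarely the reverse.
C = np.array([[50,  2, 10],
              [ 3, 55,  4],
              [ 1,  5, 48]])
print(breadth_and_strength(directional_asymmetry(C)))
```

Normalizing each pair to [-1, 1] makes breadth and strength comparable across observers with different trial counts, which matters when contrasting a "broad but weak" human profile with a "sparse but strong" model profile.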
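
Likewise, the exact construction of the RD geometric signatures (slope β, curvature κ, efficiency AUC) is not given in this summary. The sketch below assumes each system yields an empirical rate-distortion curve, for example distortion as a function of an information rate or perturbation budget, and reads β and κ off a quadratic fit, with AUC as a normalized area under the curve; `rd_signatures` and the quadratic-fit choice are hypothetical stand-ins for the paper's definitions.

```python
import numpy as np

def rd_signatures(rate: np.ndarray, distortion: np.ndarray):
    """Hypothetical geometric summaries of an empirical rate-distortion curve.

    Fits D(R) ~ c0 + c1*R + c2*R^2 and reports:
      beta  : slope dD/dR at the midpoint of the observed rate range
      kappa : curvature d^2D/dR^2 (constant for a quadratic fit)
      auc   : area under the curve divided by the rate span (efficiency proxy)
    """
    R = np.asarray(rate, dtype=float)
    D = np.asarray(distortion, dtype=float)
    order = np.argsort(R)
    R, D = R[order], D[order]

    c2, c1, c0 = np.polyfit(R, D, deg=2)     # quadratic fit to the RD points
    r_mid = 0.5 * (R[0] + R[-1])
    beta = c1 + 2.0 * c2 * r_mid             # local slope at the midpoint rate
    kappa = 2.0 * c2                         # curvature of the fitted parabola
    auc = np.trapz(D, R) / (R[-1] - R[0])    # mean distortion over the rate range
    return beta, kappa, auc

# Toy usage: a convex RD curve sampled at five rates.
print(rd_signatures([1, 2, 3, 4, 5], [0.9, 0.6, 0.45, 0.38, 0.35]))
```

A quadratic is the lowest-order fit that yields a non-trivial curvature, which is why it is used here to illustrate how slope, curvature, and area can move independently even when endpoint accuracy is matched.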