Useful nonrobust features are ubiquitous in biomedical images
arXiv cs.LG / 4/27/2026
Key Points
- The study investigates whether deep learning models for medical imaging rely on "nonrobust features": patterns that are not human-interpretable yet predict class labels, and that are vulnerable to adversarial perturbations.
- Models trained primarily on nonrobust features still achieve well-above-chance accuracy on five MedMNIST classification tasks, indicating that these features are genuinely predictive in-distribution.
- Adversarial training shifts reliance toward more robust features; this reduces in-distribution accuracy but improves performance under controlled distribution shifts from MedMNIST-C.
- The findings reveal a practical robustness–accuracy trade-off for medical image classification: emphasizing nonrobust features can raise standard accuracy while harming out-of-distribution generalization, so training methods should be matched to deployment needs.
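The vulnerability described above can be illustrated with a minimal sketch (an assumed toy setup in numpy, not the paper's code): a linear classifier on synthetic data, attacked with a one-step FGSM-style sign perturbation. Even a small perturbation aligned against the model's weights collapses accuracy, which is the signature of reliance on nonrobust input directions.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(w, x):
    """Binary prediction from a linear score x @ w."""
    return (x @ w > 0).astype(int)

def fgsm(w, x, y, eps):
    """One-step sign perturbation (FGSM for a linear score).
    For score s = x @ w and label y in {0, 1}, stepping against the
    score's sign gradient (direction depends on y) pushes each example
    toward the decision boundary."""
    direction = np.sign(w) * np.where(y == 1, -1.0, 1.0)[:, None]
    return x + eps * direction

# Toy data: class means separated along w_true, so the "model" w_true
# is a well-performing classifier on clean inputs.
w_true = rng.normal(size=8)
n = 200
y = rng.integers(0, 2, size=n)
x = rng.normal(size=(n, 8)) + np.where(y == 1, 1.0, -1.0)[:, None] * w_true

clean_acc = (predict(w_true, x) == y).mean()
adv_acc = (predict(w_true, fgsm(w_true, x, y, eps=2.0)) == y).mean()
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

Running the sketch shows clean accuracy near the ceiling while adversarial accuracy drops far below chance; adversarial training, by contrast, would optimize the weights against such perturbed inputs, trading some clean accuracy for robustness, which mirrors the trade-off reported on MedMNIST.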