CORE: Robust Out-of-Distribution Detection via Confidence and Orthogonal Residual Scoring
arXiv cs.AI / 3/20/2026
Key Points
- The paper identifies that OOD detection performance varies across architectures and datasets because logit-based methods rely only on classifier confidence while feature-based methods entangle confidence with membership in the full feature space.
- They show that penultimate features decompose into two orthogonal subspaces: a classifier-aligned subspace that carries confidence and a residual subspace containing what the classifier discards. The residual carries a class-specific in-distribution membership signal that logit-based methods cannot see.
- CORE (COnfidence + REsidual) scores the two subspaces independently and combines them via normalized summation, leveraging their orthogonality to achieve more robust detection.
- Across five architectures and five benchmark configurations, CORE achieves competitive or state-of-the-art results, ranking first in three of five settings and delivering the highest grand average AUROC with negligible computational overhead.
- The approach improves reliability of OOD detection in real deployments by mitigating architecture- and dataset-specific failure modes without substantial extra cost.
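The decomposition and combined scoring described in the key points can be sketched as follows. This is a minimal illustration under assumptions: the specific score functions (an energy-style confidence score and a distance-to-mean residual score) and the z-score normalization are stand-ins for whatever the paper actually uses, and the weights and features are synthetic.

```python
# Hedged sketch of CORE-style scoring. The exact score functions and
# normalization below are illustrative assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
d, num_classes = 64, 10
W = rng.normal(size=(num_classes, d))   # stand-in for the final linear layer

# Orthonormal basis for the classifier-aligned subspace (row space of W).
Q, _ = np.linalg.qr(W.T)                # Q: (d, num_classes)

def split_feature(z):
    """Decompose a penultimate feature into aligned + residual parts."""
    aligned = Q @ (Q.T @ z)             # projection onto span(W^T)
    residual = z - aligned              # the part the classifier discards
    return aligned, residual

def confidence_score(z):
    """Energy-style confidence computed from the logits (one common choice)."""
    logits = W @ z
    return np.log(np.sum(np.exp(logits)))

def residual_score(z, id_residual_mean):
    """Higher = more ID-like; negative distance to the ID residual mean."""
    _, r = split_feature(z)
    return -np.linalg.norm(r - id_residual_mean)

# Fit simple ID statistics on (synthetic) in-distribution features.
id_feats = rng.normal(size=(1000, d))
id_res_mean = np.stack([split_feature(z)[1] for z in id_feats]).mean(axis=0)
conf_stats = np.array([confidence_score(z) for z in id_feats])
res_stats = np.array([residual_score(z, id_res_mean) for z in id_feats])

def core_score(z):
    """Normalized summation of the two orthogonal-subspace scores."""
    c = (confidence_score(z) - conf_stats.mean()) / conf_stats.std()
    r = (residual_score(z, id_res_mean) - res_stats.mean()) / res_stats.std()
    return c + r
```

Because the two subspaces are orthogonal complements, the aligned and residual components of any feature have zero inner product, so the two scores capture non-overlapping information before being summed.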