Beyond Logit Adjustment: A Residual Decomposition Framework for Long-Tailed Reranking
arXiv cs.LG / 4/3/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper argues that fixed post-hoc logit adjustments are insufficient for long-tailed settings because the optimal correction to rerank classes can vary across inputs rather than being a constant offset per class.
- It formulates Bayes-optimal reranking on top-k base-model candidates and shows the required residual correction decomposes into a classwise term (constant within a class) and a pairwise term that depends on the input and competing labels.
- The authors derive conditions under which a fixed offset can recover Bayes-optimal ordering (when residuals are purely classwise) and conditions where it cannot (when the same label pair implies conflicting ordering constraints across contexts).
- Based on the decomposition, the paper introduces REPAIR, a lightweight post-hoc reranker that combines shrinkage-stabilized classwise correction with a linear, competition-feature-driven pairwise component.
- Experiments across five benchmarks (covering image classification, species/scene recognition, and rare disease diagnosis) support the framework by showing when pairwise correction improves performance versus when classwise correction is enough.
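To make the decomposition concrete, here is a minimal sketch of a post-hoc reranker in the spirit described above. The function names, the shrinkage form, and the pairwise feature interface are all illustrative assumptions, not the paper's actual REPAIR implementation: the classwise term is a shrinkage-stabilized logit-adjustment-style offset from class frequencies, and the pairwise term is a linear score over hypothetical competition features between candidate labels.

```python
import numpy as np

def shrinkage_classwise_offsets(class_counts, tau=1.0, shrink=0.5):
    """Classwise correction akin to logit adjustment (-tau * log prior),
    centered and shrunk toward zero to stabilize rare-class estimates.
    The shrinkage form is an assumption for illustration."""
    priors = class_counts / class_counts.sum()
    raw = -tau * np.log(priors)
    raw -= raw.mean()          # center: only relative offsets matter for ranking
    return shrink * raw

def rerank_topk(logits, topk_idx, class_offsets, pair_weights, pair_features):
    """Score each top-k candidate as base logit + classwise offset
    + a linear pairwise term over competition features against the
    other candidates (pair_features is a hypothetical interface).
    Returns candidate labels sorted by corrected score, best first."""
    scores = {}
    for c in topk_idx:
        pairwise = sum(
            float(pair_weights @ pair_features(c, c2))
            for c2 in topk_idx if c2 != c
        )
        scores[c] = logits[c] + class_offsets[c] + pairwise
    return sorted(scores, key=scores.get, reverse=True)
```

With the pairwise weights zeroed out, this reduces to a fixed classwise correction, i.e. the purely classwise regime in which the paper shows a constant offset can already recover the Bayes-optimal ordering; non-zero pairwise weights model the input-dependent residual.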