CAMO: A Class-Aware Minority-Optimized Ensemble for Robust Language Model Evaluation on Imbalanced Data
arXiv cs.CL / 4/10/2026
Key Points
- The paper introduces CAMO (Class-Aware Minority-Optimized), an ensemble method designed to improve language model evaluation and prediction under severe class imbalance by avoiding majority-class dominance.
- CAMO uses a hierarchical strategy that combines vote distributions, confidence calibration, and inter-model uncertainty to dynamically boost underrepresented classes and strengthen minority-class predictions.
- Experiments on two imbalanced, domain-specific benchmarks (DIAR-AI/Emotion and ternary BEA 2025) evaluate CAMO against seven established ensemble approaches using eight language models (both LLMs and SLMs) across zero-shot and fine-tuned settings.
- Results indicate that CAMO achieves the best macro F1-score with fine-tuned models, and that ensemble effectiveness depends on the properties of the underlying models, particularly once fine-tuning is applied.
- The authors present CAMO as a domain-agnostic framework for imbalanced classification and a reliable approach for robust evaluation on real-world, skewed datasets.
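The paper does not spell out its exact combination rule in this summary, but the ingredients named above (vote distributions, inter-model uncertainty, a boost for underrepresented classes) can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function name `camo_style_ensemble`, the inverse-frequency boost, the agreement term, and the `alpha`/`beta` hyperparameters are hypothetical, not the authors' method.

```python
import numpy as np

def camo_style_ensemble(probs, class_freqs, alpha=1.0, beta=0.5):
    """Illustrative class-aware minority-boosted ensemble (not the paper's exact rule).

    probs:       (n_models, n_classes) per-model predicted class probabilities.
    class_freqs: (n_classes,) training-set class frequencies (should sum to 1).
    alpha:       strength of the inverse-frequency minority boost (assumed knob).
    beta:        strength of the inter-model agreement weighting (assumed knob).
    """
    probs = np.asarray(probs, dtype=float)
    freqs = np.asarray(class_freqs, dtype=float)

    # Vote distribution: average the models' class probabilities.
    mean_probs = probs.mean(axis=0)

    # Inter-model uncertainty: a low per-class std means the models agree,
    # so that class's score is trusted more.
    agreement = 1.0 - probs.std(axis=0)

    # Boost underrepresented classes in inverse proportion to their frequency.
    minority_boost = (1.0 / freqs) ** alpha
    minority_boost /= minority_boost.sum()

    # Combine the three signals and renormalize to a distribution.
    scores = mean_probs * minority_boost * (agreement ** beta)
    return scores / scores.sum()
```

With a heavily skewed label distribution, the boost can flip the ensemble's decision away from the majority class even when the plain average favors it, which is the behavior the bullet points attribute to CAMO:

```python
probs = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.5, 0.4, 0.1]]
freqs = [0.8, 0.15, 0.05]
scores = camo_style_ensemble(probs, freqs)  # rare class 2 now outscores majority class 0
```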



