Model-Agnostic Meta Learning for Class Imbalance Adaptation

arXiv cs.CL · April 22, 2026


Key Points

  • The paper introduces Hardness-Aware Meta-Resample (HAMR), a model-agnostic framework designed to improve NLP performance under class imbalance and varying data difficulty.
  • HAMR uses bi-level optimization to assign instance-level weights that emphasize truly hard samples and minority classes based on learned signals.
  • It also applies neighborhood-aware resampling to increase training emphasis not only on hard examples themselves but also on their semantically similar neighbors.
  • The method is evaluated on six imbalanced datasets across multiple NLP tasks and domains (including biomedical, disaster response, and sentiment), where it shows consistent gains for minority classes and overall performance.
  • Ablation experiments indicate that HAMR’s components work synergistically, and the paper provides an implementation at the linked GitHub repository.
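The bi-level weighting idea in the second bullet can be illustrated with a minimal sketch. The paper's actual update rule is not reproduced here; this follows the common "learning to reweight" pattern, where a training example's weight is set by how well its gradient aligns with the gradient of a balanced validation loss after a one-step virtual update. The function name `meta_reweight_step`, the logistic-regression setting, and all hyperparameters are illustrative assumptions, not HAMR's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_per_example(theta, X, y):
    """Per-example gradient of the logistic loss w.r.t. theta, shape (n, d)."""
    p = sigmoid(X @ theta)
    return (p - y)[:, None] * X

def meta_reweight_step(theta, X_tr, y_tr, X_val, y_val, lr=0.1):
    """One bi-level reweighting step (hypothetical sketch, not HAMR itself).

    Inner level: a virtual SGD step on the (imbalanced) training set.
    Outer level: instance weights chosen by the alignment between each
    training gradient and the balanced validation gradient at the
    virtually updated parameters.
    """
    g_tr = grad_per_example(theta, X_tr, y_tr)              # (n, d)
    # Inner level: virtual update with uniform weights.
    theta_virtual = theta - lr * g_tr.mean(axis=0)
    # Outer level: validation gradient at the virtual parameters.
    g_val = grad_per_example(theta_virtual, X_val, y_val).mean(axis=0)
    # Weight each example by how much lowering its loss also lowers
    # the balanced validation loss (inner-product alignment).
    raw = g_tr @ g_val                                      # (n,)
    w = np.clip(raw, 0.0, None)
    if w.sum() == 0:
        w = np.ones_like(w)                                 # fall back to uniform
    w = w / w.sum()
    # Actual weighted parameter update.
    theta_new = theta - lr * (w[:, None] * g_tr).sum(axis=0)
    return theta_new, w
```

In this simplification the outer problem is solved greedily each step rather than by full nested optimization, which is the usual practical approximation for this family of methods.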

Abstract

Class imbalance is a widespread challenge in NLP tasks, significantly hindering robust performance across diverse domains and applications. We introduce Hardness-Aware Meta-Resample (HAMR), a unified framework that adaptively addresses both class imbalance and data difficulty. HAMR employs bi-level optimization to dynamically estimate instance-level weights that prioritize genuinely challenging samples and minority classes, while a neighborhood-aware resampling mechanism amplifies training focus on hard examples and their semantically similar neighbors. We validate HAMR on six imbalanced datasets covering multiple tasks and spanning biomedical, disaster response, and sentiment domains. Experimental results show that HAMR achieves substantial improvements for minority classes and consistently outperforms strong baselines. Extensive ablation studies demonstrate that our proposed modules synergistically contribute to performance gains and highlight HAMR as a flexible and generalizable approach for class imbalance adaptation. Code is available at https://github.com/trust-nlp/ImbalanceLearning.
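The neighborhood-aware resampling mechanism described above can likewise be sketched in a few lines. The abstract states only that training focus is amplified on hard examples and their semantically similar neighbors; the propagation rule below (spreading a fraction of each example's hardness score to its k nearest neighbors in embedding space, then sampling proportionally) is an assumed concrete instantiation. The function `neighborhood_resample` and the `spread` parameter are hypothetical names, not taken from the paper or its repository.

```python
import numpy as np

def neighborhood_resample(embeddings, hardness, n_samples, k=3, spread=0.5, rng=None):
    """Resample indices with probability boosted for hard examples and
    their k nearest embedding-space neighbors (hypothetical sketch).

    embeddings : (n, d) array of per-example representations
    hardness   : (n,) nonnegative difficulty scores (e.g. per-example loss)
    spread     : fraction of each example's hardness shared with neighbors
    """
    rng = np.random.default_rng(rng)
    hardness = np.asarray(hardness, dtype=float)
    n = len(hardness)
    # Pairwise Euclidean distances; exclude self-matches on the diagonal.
    d = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]          # k nearest neighbors per row
    # Propagate each example's hardness to its neighbors.
    score = hardness.copy()
    for i in range(n):
        score[nbrs[i]] += spread * hardness[i] / k
    # Sample with replacement, proportional to the propagated scores.
    p = score / score.sum()
    return rng.choice(n, size=n_samples, replace=True, p=p)
```

Under this rule, a semantically tight cluster containing a hard example is oversampled as a whole, which matches the stated intent of emphasizing hard examples together with their semantic neighbors.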