Testing the Assumptions of Active Learning for Translation Tasks with Few Samples

arXiv cs.CL / 4/13/2026


Key Points

  • The paper studies why active learning (AL) strategies for translation tasks underperform when only 100–500 labeled samples are available for training.
  • It finds that the usual AL objectives—selecting “informative” and “diverse” samples—do not show meaningful correlation with downstream translation test-set performance.
  • The research suggests that other factors, including the ordering of training samples and interactions with the model’s pre-training data, play a larger role in determining performance.
  • The authors conclude that effective future AL methods for very-low-data regimes must incorporate these non-traditional factors rather than relying primarily on informativeness/diversity heuristics.

Abstract

Active learning (AL) is a training paradigm for selecting unlabeled samples for annotation to improve model performance on a test set, which is useful when only a limited number of samples can be annotated. These algorithms often work by optimizing for the informativeness and diversity of the training data to be annotated. Recent work found that AL strategies fail to outperform random sampling on various language generation tasks when using 100–500 samples. To understand AL's poor performance when using only a few samples, we investigate whether the core assumptions underlying AL strategies hold. We find that neither the informativeness nor the diversity of the training data, which AL strategies optimize for, is correlated with test set performance. Instead, factors like the ordering of the training samples and interactions with pre-training data have a larger impact on performance. This suggests that future AL methods must take these factors into account in order to work with very few samples.
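To make the heuristics under scrutiny concrete, the following is a minimal sketch of a typical AL acquisition step that greedily combines an "informativeness" score (predictive entropy) with a "diversity" score (distance to already-selected samples). This is an illustration of the general family of strategies the paper evaluates, not the authors' own method; the function names, the scoring functions, and the `alpha` trade-off weight are all hypothetical choices for the sake of the example.

```python
# Hypothetical sketch of an uncertainty + diversity acquisition step,
# the kind of strategy the paper finds uncorrelated with test
# performance in the 100-500 sample regime.
import math


def entropy(probs):
    """Predictive entropy: a common 'informativeness' score."""
    return -sum(p * math.log(p) for p in probs if p > 0)


def min_distance(candidate, selected, distance):
    """Distance to the nearest already-selected sample: a 'diversity' score."""
    if not selected:
        return float("inf")
    return min(distance(candidate, s) for s in selected)


def select_batch(pool, predict_probs, distance, k, alpha=0.5):
    """Greedily pick k samples maximizing a weighted sum of
    informativeness and diversity (both terms are illustrative)."""
    selected = []
    remaining = list(pool)
    for _ in range(min(k, len(remaining))):
        best = max(
            remaining,
            key=lambda x: alpha * entropy(predict_probs(x))
            # Cap the diversity term so the first pick (distance = inf)
            # does not dominate the score.
            + (1 - alpha) * min(min_distance(x, selected, distance), 1.0),
        )
        selected.append(best)
        remaining.remove(best)
    return selected
```

The random-sampling baseline that such strategies failed to beat is simply `random.sample(pool, k)`; the paper's finding is that, with only 100–500 samples, scores like the ones above do not predict which batch will yield better downstream translation performance.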