An Analysis of Active Learning Algorithms using Real-World Crowd-sourced Text Annotations

arXiv cs.LG / 4/28/2026


Key Points

  • The paper studies active learning for text classification under realistic crowd-sourcing conditions, where labeling “oracles” can be wrong or may refuse to label.
  • Instead of simulating noisy annotators with ML models, the authors collect real crowd-sourced annotations for samples drawn from three benchmark text classification datasets.
  • Using these collected labels, the study evaluates eight common active learning techniques combined with deep neural networks through extensive experiments.
  • The study analyzes how each technique performs when annotators provide incorrect class labels or do not respond, offering guidance for deploying deep active learning systems in practice (see the sketch after this list).
  • The dataset of crowd-sourced annotations is publicly released on GitHub for further research and benchmarking.
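
To make the setting concrete, here is a minimal, hypothetical sketch of pool-based active learning with an imperfect annotator who can mislabel or refuse to answer. The names (`imperfect_annotator`, `run_active_learning`, `error_rate`, `refusal_rate`) and the use of a simulated annotator and least-confidence sampling are illustrative assumptions, not the paper's method; the paper's point is precisely to replace such simulations with real crowd-sourced labels.

```python
import random
import numpy as np
from sklearn.linear_model import LogisticRegression


def imperfect_annotator(true_label, num_classes, error_rate=0.15, refusal_rate=0.05):
    """Toy stand-in for a crowd worker: may refuse (None) or return a wrong label."""
    if random.random() < refusal_rate:
        return None                           # annotator does not respond
    if random.random() < error_rate:
        return random.choice([c for c in range(num_classes) if c != true_label])
    return true_label


def run_active_learning(X_lab, y_lab, X_pool, y_pool_true, num_classes, budget=25):
    """Pool-based active learning with least-confidence sampling and an imperfect oracle."""
    X_lab, y_lab = list(X_lab), list(y_lab)
    pool = list(zip(X_pool, y_pool_true))
    model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    for _ in range(budget):
        if not pool:
            break
        # Least-confidence acquisition: query the sample the model is least sure about.
        top_prob = model.predict_proba([x for x, _ in pool]).max(axis=1)
        x, true_y = pool.pop(int(np.argmin(top_prob)))
        label = imperfect_annotator(true_y, num_classes)
        if label is None:
            continue                          # a refused query consumes budget but adds nothing
        X_lab.append(x)
        y_lab.append(label)                   # a possibly incorrect label enters training
        model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    return model
```

The sketch uses a linear model for brevity; the paper's experiments pair the acquisition strategies with deep neural networks.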

Abstract

Active learning algorithms automatically identify the most informative samples from large amounts of unlabeled data and tremendously reduce the human annotation effort required to induce a machine learning model. In a conventional active learning setup, the labeling oracles are assumed to be infallible, that is, they always provide correct answers (in terms of class labels) to the queried unlabeled instances, which cannot be guaranteed in real-world applications. To this end, a body of research has focused on the development of active learning algorithms in the presence of imperfect/noisy oracles. Existing research on active learning with noisy oracles typically simulates the oracles using machine learning models; however, real-world situations are much more challenging, and using ML models to simulate annotation patterns may not appropriately capture the nuances of real-world annotation challenges. In this research, we first collect annotations of text samples (from 3 benchmark text classification datasets) from crowd-sourced workers through a crowd-sourcing platform. We then conduct extensive empirical studies of 8 commonly used active learning techniques (in conjunction with deep neural networks) using the obtained annotations. Our analyses shed light on the performance of these techniques under real-world challenges, where annotators can provide incorrect labels and can also refuse to provide labels. We hope this research will provide valuable insights that will be useful for the deployment of deep active learning systems in real-world applications. The obtained annotations can be accessed at https://github.com/varuntotakura/al_rcta/.
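
The abstract does not enumerate the eight techniques, but uncertainty sampling via predictive entropy is one commonly used acquisition strategy for deep classifiers and gives a feel for how such methods plug into a neural network. The sketch below is illustrative only; `model`, `pool_loader`, and the batch format are assumed placeholders and do not come from the paper or its released repository.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def entropy_scores(model, pool_loader, device="cpu"):
    """Score each unlabeled sample by the entropy of the model's softmax output;
    higher entropy means the classifier is more uncertain about that sample."""
    model.eval()
    scores = []
    for inputs in pool_loader:                # batches of unlabeled text features
        logits = model(inputs.to(device))
        probs = F.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
        scores.append(entropy.cpu())
    return torch.cat(scores)


def select_queries(scores, k=32):
    """Return indices of the k most uncertain pool samples to send to annotators."""
    k = min(k, scores.numel())
    return torch.topk(scores, k).indices.tolist()
```

In a full pipeline, the selected indices would be sent to crowd workers, and the returned (possibly wrong or missing) labels would be merged into the training set before the network is retrained.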