Adaptive Data Dropout: Towards Self-Regulated Learning in Deep Neural Networks

arXiv cs.LG · April 15, 2026


Key Points

  • The paper proposes “Adaptive Data Dropout,” an approach that dynamically changes which training samples are used based on performance feedback rather than using a fixed data-reduction schedule.
  • By treating data selection as an adaptive, self-regulated process, the method increases or decreases data exposure in response to changes in training accuracy, balancing exploration and consolidation during learning.
  • It introduces a lightweight stochastic online update mechanism to modulate the data dropout behavior during training.
  • Experiments on standard image classification benchmarks indicate improved training efficiency (fewer effective steps) while maintaining competitive accuracy versus static data dropout strategies.
  • The authors plan to release code, positioning adaptive data selection as a promising direction for more efficient and robust deep neural network training.
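The feedback rule behind the key points above can be sketched in a few lines. Everything here is hypothetical: the paper only states that data exposure adapts to changes in training accuracy via a lightweight stochastic online update, so the function name, the direction of the update (shrink exposure when improving, grow it when stalling), and the noise term are assumptions for illustration.

```python
import random

def update_keep_rate(keep_rate, acc, prev_acc,
                     step=0.05, noise=0.0, rng=None,
                     lo=0.1, hi=1.0):
    """One stochastic online update of the fraction of training data kept.

    Assumed rule (not from the paper): when training accuracy improves,
    drop more data (consolidation); when it stalls or falls, expose more
    data (exploration). An optional random perturbation makes the
    schedule stochastic rather than fixed.
    """
    if acc > prev_acc:
        new_rate = keep_rate - step   # improving: consolidate on less data
    else:
        new_rate = keep_rate + step   # stalling: explore with more data
    if rng is not None and noise > 0:
        new_rate += rng.uniform(-noise, noise)
    # clamp so some data is always used and the rate stays a valid fraction
    return max(lo, min(hi, new_rate))
```

Whether improvement should shrink or grow the active subset is the key design choice; the summary only says exposure moves in response to accuracy changes, so the opposite sign is equally plausible.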

Abstract

Deep neural networks are typically trained by uniformly sampling large datasets across epochs, despite evidence that not all samples contribute equally throughout learning. Recent work shows that progressively reducing the amount of training data can improve efficiency and generalization, but existing methods rely on fixed schedules that do not adapt during training. In this work, we propose Adaptive Data Dropout, a simple framework that dynamically adjusts the subset of training data based on performance feedback. Inspired by self-regulated learning, our approach treats data selection as an adaptive process, increasing or decreasing data exposure in response to changes in training accuracy. We introduce a lightweight stochastic update mechanism that modulates the dropout schedule online, allowing the model to balance exploration and consolidation over time. Experiments on standard image classification benchmarks show that our method reduces effective training steps while maintaining competitive accuracy compared to static data dropout strategies. These results highlight adaptive data selection as a promising direction for efficient and robust training. Code will be released.
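The training loop implied by the abstract — sample an active subset each epoch, measure accuracy, adjust exposure — could look like the toy sketch below. The function names, the fixed step size, and the stand-in `train_epoch` callback are all assumptions; the paper's actual update mechanism and hyperparameters are not specified in this summary.

```python
import random

def sample_active_subset(dataset_size, keep_rate, rng):
    """Sample the indices used this epoch; expected size ~ keep_rate * dataset_size."""
    return [i for i in range(dataset_size) if rng.random() < keep_rate]

def train_with_adaptive_dropout(dataset_size, epochs, train_epoch, rng,
                                keep_rate=1.0, step=0.05):
    """Toy loop: train_epoch(indices) -> training accuracy.

    The keep rate adapts to accuracy feedback, and 'effective steps'
    counts how many samples were actually trained on in total.
    """
    prev_acc, effective_steps = 0.0, 0
    for _ in range(epochs):
        idx = sample_active_subset(dataset_size, keep_rate, rng)
        effective_steps += len(idx)
        acc = train_epoch(idx)
        # assumed rule: shrink exposure when improving, grow it when stalling
        keep_rate = keep_rate - step if acc > prev_acc else keep_rate + step
        keep_rate = max(0.1, min(1.0, keep_rate))
        prev_acc = acc
    return effective_steps, keep_rate
```

With a steadily improving accuracy signal, the keep rate decays and the total number of effective steps falls below `epochs * dataset_size`, which is the efficiency effect the abstract describes.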