Evolution-Inspired Sample Competition for Deep Neural Network Optimization

arXiv cs.CV / April 15, 2026


Key Points

  • The paper introduces “Natural Selection (NS),” an evolution-inspired training method that models explicit competition among samples rather than treating all samples uniformly during optimization.
  • NS computes a per-sample “natural selection score” from group-wise prediction results (via composite image construction and inference) and uses these scores to dynamically reweight each sample’s loss.
  • The approach targets common training pathologies such as class-imbalance bias, under-learning hard examples, and over-reinforcing noisy samples by adapting each sample’s contribution.
  • Experiments across 12 public datasets covering four image classification tasks show NS improves training outcomes and is broadly applicable.
  • NS is designed to be architecture-agnostic and avoids task-specific assumptions, with the authors indicating that code will be released publicly.
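The reweighting idea in the points above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the exact scoring formula and the mapping from scores to loss weights are not specified in this summary, so the confidence-drop score and the softmax weighting below are hypothetical choices.

```python
import math

def natural_selection_weights(solo_conf, group_conf, tau=1.0):
    """Toy 'natural selection score': compare each sample's prediction
    confidence alone vs. inside a composite group, then convert the
    scores into loss weights (softmax over negated scores is a
    hypothetical choice, not the paper's formula)."""
    # Negative score = the sample loses confidence under group competition.
    scores = [g - s for s, g in zip(solo_conf, group_conf)]
    # Samples under stronger competitive pressure get larger loss weights.
    exps = [math.exp(-sc / tau) for sc in scores]
    z = sum(exps)
    n = len(exps)
    # Normalize so the weights average to 1 (keeps the loss scale stable).
    return [n * e / z for e in exps]

# Toy usage: four samples; sample 2's confidence collapses in the group,
# so it receives the largest weight in the reweighted loss.
solo = [0.90, 0.80, 0.70, 0.95]
group = [0.85, 0.75, 0.30, 0.90]
w = natural_selection_weights(solo, group)
per_sample_loss = [0.10, 0.20, 0.50, 0.05]
weighted_loss = sum(wi * li for wi, li in zip(w, per_sample_loss)) / len(w)
```

The normalization step is one plausible way to keep the overall loss magnitude comparable to uniform weighting while still shifting emphasis toward competitively disadvantaged samples.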

Abstract

Conventional deep network training generally optimizes all samples under a largely uniform learning paradigm, without explicitly modeling the heterogeneous competition among them. Such an oversimplified treatment can lead to several well-known issues, including bias under class imbalance, insufficient learning of hard samples, and the erroneous reinforcement of noisy samples. In this work, we present *Natural Selection* (NS), a novel evolution-inspired optimization method that explicitly incorporates competitive interactions into deep network training. Unlike conventional sample reweighting strategies that rely mainly on predefined heuristics or static criteria, NS estimates the competitive status of each sample in a group-wise context and uses it to adaptively regulate its training contribution. Specifically, NS first assembles multiple samples into a composite image and rescales it to the original input size for model inference. Based on the resulting predictions, a natural selection score is computed for each sample to characterize its relative competitive variation within the constructed group. These scores are then used to dynamically reweight the sample-wise loss, thereby introducing an explicit competition-driven mechanism into the optimization process. In this way, NS provides a simple yet effective means of moving beyond uniform sample treatment and enables more adaptive and balanced model optimization. Extensive experiments on 12 public datasets across four image classification tasks demonstrate the effectiveness of the proposed method. Moreover, NS is compatible with diverse network architectures and does not depend on task-specific assumptions, indicating its strong generality and practical potential. The code will be made publicly available.
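The "assemble multiple samples into a composite image and rescale it to the original input size" step can be sketched concretely. This is a minimal illustration under assumptions the summary does not confirm: a 2×2 grid of four grayscale images (represented as nested lists) and nearest-neighbour subsampling for the rescale are hypothetical details, not necessarily the authors' choices.

```python
def make_composite(images):
    """Tile four HxW grayscale images (lists of lists) into a 2x2 mosaic,
    then rescale the 2Hx2W mosaic back to HxW by nearest-neighbour
    subsampling (a hypothetical rescaling choice for illustration)."""
    a, b, c, d = images
    h, w = len(a), len(a[0])
    # Build the 2H x 2W mosaic: concatenate rows horizontally, halves vertically.
    top = [ra + rb for ra, rb in zip(a, b)]
    bottom = [rc + rd for rc, rd in zip(c, d)]
    mosaic = top + bottom
    # Downscale by a factor of 2 so the composite matches the model's input size.
    return [[mosaic[2 * i][2 * j] for j in range(w)] for i in range(h)]

# Toy usage: four constant-valued 2x2 tiles; each tile survives as one pixel
# quadrant of the rescaled composite.
tiles = [[[v] * 2 for _ in range(2)] for v in (1, 2, 3, 4)]
comp = make_composite(tiles)  # → [[1, 2], [3, 4]]
```

In practice one would run the model once on such a composite and read off group-context predictions for all four samples from a single forward pass, which is what makes the group-wise scoring cheap relative to per-sample inference.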