A Learning-Based Cooperative Coevolution Framework for Heterogeneous Large-Scale Global Optimization

arXiv cs.LG / 4/3/2026


Key Points

  • The paper targets heterogeneous large-scale global optimization (H-LSGO), where cooperative coevolution (CC) struggles because subproblems have different dimensions and landscape structures.
  • It introduces a Learning-Based Heterogeneous Cooperative Coevolution Framework (LH-CC) that casts optimizer choice as a Markov Decision Process and uses a meta-agent to adaptively select the best optimizer per subproblem.
  • The authors propose a flexible benchmark suite to create diverse H-LSGO instances for evaluating heterogeneous behavior.
  • Experiments on 3000-dimensional problems with complex coupling show LH-CC delivers better solution quality and computational efficiency than state-of-the-art baselines.
  • The framework demonstrates strong generalization across different instances, optimization horizons, and optimizer types, highlighting dynamic optimizer selection as a key strategy for H-LSGO.

Abstract

Cooperative Coevolution (CC) effectively addresses Large-Scale Global Optimization (LSGO) via decomposition but struggles with the emerging class of Heterogeneous LSGO (H-LSGO) problems arising from real-world applications, where subproblems exhibit diverse dimensions and distinct landscapes. The prevailing CC paradigm, relying on a fixed low-dimensional optimizer, often fails to navigate this heterogeneity. To address this limitation, we propose the Learning-Based Heterogeneous Cooperative Coevolution Framework (LH-CC). By formulating the optimization process as a Markov Decision Process, LH-CC employs a meta-agent to adaptively select the most suitable optimizer for each subproblem. We also introduce a flexible benchmark suite to generate diverse H-LSGO problem instances. Extensive experiments on 3000-dimensional problems with complex coupling relationships demonstrate that LH-CC achieves superior solution quality and computational efficiency compared to state-of-the-art baselines. Furthermore, the framework exhibits robust generalization across varying problem instances, optimization horizons, and optimizers. Our findings reveal that dynamic optimizer selection is a pivotal strategy for solving complex H-LSGO problems.
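The core mechanism described above can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation: the optimizer pool (a hill climber and a random searcher), the epsilon-greedy meta-agent, and the improvement-based reward are all illustrative assumptions standing in for the MDP formulation and the actual low-dimensional optimizers used in LH-CC.

```python
import random

# Hypothetical sketch of the LH-CC idea: a cooperative-coevolution loop in
# which a meta-agent picks an optimizer for each subproblem every cycle.
# The optimizer pool, reward signal, and bandit-style agent are assumptions
# for illustration, not the paper's actual components.

def opt_hill_climb(f, x, rng, step=0.5, iters=20):
    """Simple (1+1)-style hill climber on one subproblem (minimization)."""
    fx = f(x)
    for _ in range(iters):
        y = [xi + rng.gauss(0, step) for xi in x]
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return x, fx

def opt_random_search(f, x, rng, radius=2.0, iters=20):
    """Uniform perturbations around the incumbent, keep improvements."""
    fx = f(x)
    for _ in range(iters):
        y = [xi + rng.uniform(-radius, radius) for xi in x]
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return x, fx

class MetaAgent:
    """Epsilon-greedy value estimates, one set of arms per subproblem."""
    def __init__(self, n_sub, n_opts, eps=0.2):
        self.q = [[0.0] * n_opts for _ in range(n_sub)]
        self.n = [[0] * n_opts for _ in range(n_sub)]
        self.eps = eps

    def select(self, sub, rng):
        if rng.random() < self.eps:
            return rng.randrange(len(self.q[sub]))
        return max(range(len(self.q[sub])), key=lambda a: self.q[sub][a])

    def update(self, sub, a, reward):
        self.n[sub][a] += 1
        self.q[sub][a] += (reward - self.q[sub][a]) / self.n[sub][a]

def lh_cc(subproblems, cycles=30, seed=0):
    """Heterogeneous CC loop: each cycle, select an optimizer per
    subproblem, run it, and reward the agent with the fitness gain."""
    rng = random.Random(seed)
    pool = [opt_hill_climb, opt_random_search]
    agent = MetaAgent(len(subproblems), len(pool))
    xs = [[rng.uniform(-5, 5) for _ in range(d)] for _, d in subproblems]
    best = [f(xs[i]) for i, (f, _) in enumerate(subproblems)]
    for _ in range(cycles):
        for i, (f, _) in enumerate(subproblems):
            a = agent.select(i, rng)
            xs[i], fx = pool[a](f, xs[i], rng)
            agent.update(i, a, best[i] - fx)  # reward = improvement
            best[i] = fx
    return sum(best)  # total objective over separable subproblems

# Two subproblems with different dimensions model the heterogeneity
# (real H-LSGO instances would also differ in landscape structure).
sphere = lambda x: sum(v * v for v in x)
total = lh_cc([(sphere, 5), (sphere, 20)], cycles=30, seed=1)
```

Because both sketch optimizers only ever accept improving moves, the returned total is guaranteed not to exceed the initial objective value; the interesting behavior is that the agent's per-subproblem value estimates can diverge, favoring different optimizers on different subproblems, which is the dynamic-selection property the abstract highlights.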