BEAM: Bi-level Memory-adaptive Algorithmic Evolution for LLM-Powered Heuristic Design
arXiv cs.AI / 4/15/2026
Key Points
- The paper introduces BEAM (Bi-level Memory-adaptive Algorithmic Evolution) to improve LLM-based hyper-heuristic design beyond single-function optimization by framing it as bi-level optimization.
- BEAM uses an exterior genetic algorithm layer to evolve high-level algorithmic structures with function placeholders, while an interior Monte Carlo Tree Search layer fills in those placeholders to realize candidate solvers.
- An Adaptive Memory module is added to support more complex code generation during the heuristic design process.
- To enable better evaluation and generation, the authors propose a Knowledge Augmentation (KA) pipeline and argue that starting from scratch or only from code templates limits the performance of LLM-based hyper-heuristics (LHHs).
- Experiments across several optimization problems show that BEAM achieves significantly better results than prior LHHs, including a 37.84% reduction in optimality gap for hybrid algorithm design on CVRP and improved performance on Maximum Independent Set (MIS) tasks relative to KaMIS.
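The bi-level scheme in the first two bullets can be illustrated with a toy sketch. Here an outer evolutionary loop searches over high-level pipeline structures (how many function placeholders a candidate solver has), and an inner search fills each placeholder from an operation pool. All names (`OPS`, `inner_search`, `outer_ga`) are illustrative, the toy objective is invented, and the inner random search merely stands in for the paper's Monte Carlo Tree Search; this is not BEAM's actual implementation.

```python
import random

# Illustrative operation pool; the inner search fills each placeholder
# with one of these named primitives.
OPS = {"inc": lambda x: x + 1, "dec": lambda x: x - 1,
       "dbl": lambda x: x * 2, "hlv": lambda x: x / 2}

def evaluate(pipeline, x0=1.0, target=10.0):
    """Cost of a fully instantiated pipeline (lower is better)."""
    x = x0
    for name in pipeline:
        x = OPS[name](x)
    return abs(x - target)

def inner_search(skeleton_len, trials=200, rng=random):
    """Stand-in for the interior MCTS layer: random search over
    placeholder assignments for a skeleton of a given length."""
    best, best_cost = None, float("inf")
    for _ in range(trials):
        cand = [rng.choice(list(OPS)) for _ in range(skeleton_len)]
        cost = evaluate(cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

def outer_ga(generations=20, pop_size=6, rng=random):
    """Stand-in for the exterior GA layer: evolves skeleton lengths
    (high-level structure); each candidate structure is realized by
    the inner search before being scored."""
    pop = [rng.randint(1, 8) for _ in range(pop_size)]
    best_pipe, best_cost = None, float("inf")
    for _ in range(generations):
        scored = []
        for length in pop:
            pipe, cost = inner_search(length, rng=rng)
            scored.append((cost, length, pipe))
        scored.sort(key=lambda t: t[0])
        if scored[0][0] < best_cost:
            best_cost, _, best_pipe = scored[0]
        # Keep the top half of structures, mutate them to refill the
        # population (structure-level evolution only).
        survivors = [l for _, l, _ in scored[: pop_size // 2]]
        pop = survivors + [max(1, l + rng.choice([-1, 1])) for l in survivors]
    return best_pipe, best_cost
```

The point of the split mirrors the paper's framing: the outer loop never touches concrete code, only structure, while the inner loop does the expensive work of instantiating and scoring each structure.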
Related Articles

- HANDOVER + SYNC: multi-agent coordination without a central scheduler (Dev.to)
- Skills as invocation contracts, not code: how I keep review authority over agent work (Dev.to)
- Daily AI News — 2026-04-18 (Dev.to)
- Custom Agent or Built-In AI? A Practical Checklist for Making the Right Choice (Dev.to)
- Coherence-First Non-Agentive Interaction System for Stabilizing Human–AI Cognitive Fields (Reddit r/artificial)