AVO: Agentic Variation Operators for Autonomous Evolutionary Search

arXiv cs.LG · March 26, 2026


Key Points

  • The paper introduces Agentic Variation Operators (AVO), a new approach to evolutionary search where coding agents autonomously perform variation (propose, repair, critique, and verify) instead of relying on fixed mutation/crossover and hand-designed heuristics.
  • AVO replaces a constrained LLM candidate-generation pipeline with a self-directed agent loop that can use lineage information, a domain knowledge base, and execution feedback to iteratively improve implementations.
  • Experiments target multi-head attention, one of the most heavily optimized AI kernels, running continuous autonomous evolution for 7 days on NVIDIA Blackwell (B200) GPUs; AVO discovers kernels that outperform cuDNN by up to 3.5% and FlashAttention-4 by up to 10.5%.
  • The discovered kernel optimizations transfer to grouped-query attention with only ~30 minutes of further autonomous adaptation, achieving up to 7.0% gains over cuDNN and 9.3% over FlashAttention-4.
  • The authors argue AVO represents a step beyond “LLM-in-the-loop” evolutionary pipelines by upgrading the agent from candidate generator to a full variation operator capable of producing micro-architectural performance improvements.
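To make the distinction concrete, the loop below is a minimal toy sketch of what "agent as variation operator" means structurally: variation is not a fixed mutation rule but a bounded propose–verify–repair cycle that consults the lineage, a knowledge base, and execution feedback before returning a child. All names here are hypothetical, and a numeric toy objective stands in for actual kernel benchmarking; the real AVO system drives a coding agent over CUDA kernel source.

```python
import random

def benchmark(candidate):
    """Execution feedback: lower simulated 'latency' is better (toy objective,
    standing in for timing a compiled attention kernel)."""
    x, y = candidate
    return (x - 3.0) ** 2 + (y + 1.0) ** 2

def agentic_variation(parent, lineage, knowledge_base, rng):
    """One agentic variation step: propose an edit, verify it against the
    parent, and retry (repair) a bounded number of times on failure.
    Hypothetical structure, not the paper's implementation."""
    step = knowledge_base["step_size"]
    for _ in range(5):  # bounded propose-repair loop
        child = (parent[0] + rng.uniform(-step, step),
                 parent[1] + rng.uniform(-step, step))
        # verify: reject proposals that regress badly relative to the parent
        if benchmark(child) < benchmark(parent) * 1.1:
            return child
    return parent  # all repair attempts failed; fall back to the parent

def evolve(generations=200, seed=0):
    """Outer evolutionary search: the agentic operator replaces fixed
    mutation/crossover; the lineage records accepted improvements."""
    rng = random.Random(seed)
    knowledge_base = {"step_size": 0.5}  # domain knowledge, toy version
    best = (0.0, 0.0)
    lineage = [best]
    for _ in range(generations):
        child = agentic_variation(best, lineage, knowledge_base, rng)
        if benchmark(child) < benchmark(best):  # selection on feedback
            best = child
            lineage.append(best)
    return best

best = evolve()
```

The key structural point the sketch illustrates: the inner propose–verify–repair cycle, not the outer selection loop, is where the agent's autonomy lives.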

Abstract

Agentic Variation Operators (AVO) are a new family of evolutionary variation operators that replace the fixed mutation, crossover, and hand-designed heuristics of classical evolutionary search with autonomous coding agents. Rather than confining a language model to candidate generation within a prescribed pipeline, AVO instantiates variation as a self-directed agent loop that can consult the current lineage, a domain-specific knowledge base, and execution feedback to propose, repair, critique, and verify implementation edits. We evaluate AVO on attention, among the most aggressively optimized kernel targets in AI, on NVIDIA Blackwell (B200) GPUs. Over 7 days of continuous autonomous evolution on multi-head attention, AVO discovers kernels that outperform cuDNN by up to 3.5% and FlashAttention-4 by up to 10.5% across the evaluated configurations. The discovered optimizations transfer readily to grouped-query attention, requiring only 30 minutes of additional autonomous adaptation and yielding gains of up to 7.0% over cuDNN and 9.3% over FlashAttention-4. Together, these results show that agentic variation operators move beyond prior LLM-in-the-loop evolutionary pipelines by elevating the agent from candidate generator to variation operator, and can discover performance-critical micro-architectural optimizations that produce kernels surpassing state-of-the-art expert-engineered attention implementations on today's most advanced GPU hardware.