CliffSearch: Structured Agentic Co-Evolution over Theory and Code for Scientific Algorithm Discovery

arXiv cs.LG / 4/2/2026


Key Points

  • CliffSearch proposes an agentic evolutionary framework for scientific algorithm discovery that treats each candidate as a structured artifact in either theory+code or code-only form.
  • It implements key evolutionary operations (pair selection, crossover, mutation, and review) as LLM agents, with reviewer judgments for correctness and originality acting as first-class selection gates.
  • The framework splits mutation into “exploration” (novelty via adjacent-domain ideas) and “correction” (evidence-guided repair using reviewer signals from theory, code, benchmark results, and runtime errors).
  • Experiments on three benchmark-grounded studies (transformer hyper-connection evolution, optimizer discovery on a fixed nanoGPT stack, and a native-optimizer ablation) show the loop can optimize benchmark metrics while emphasizing interpretability and correctness.
  • The authors provide full run artifacts, interactive visualizations, and exported best nodes, supporting reproducibility and controlled comparisons across search conditions.
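The reviewer-gated selection described in the key points can be sketched as a minimal skeleton. This is an illustrative reconstruction, not the paper's actual API: the `Node` fields, the placeholder checks inside `review`, and the helper names are all assumptions, and the real reviewer judgments come from an LLM agent rather than string tests.

```python
import random
from dataclasses import dataclass

@dataclass
class Node:
    """Candidate artifact: theory text plus code (code-only nodes leave theory empty)."""
    theory: str
    code: str
    metric: float = float("-inf")
    correct: bool = False
    original: bool = False

def review(node: Node) -> Node:
    """Stand-in for the LLM reviewer agent.

    In CliffSearch, correctness and originality are judged by an LLM;
    here they are replaced by trivial placeholder checks."""
    node.correct = "def " in node.code
    node.original = len(node.theory.split()) > 2
    return node

def select_pair(population: list[Node]) -> tuple[Node, Node]:
    """Pair selection restricted to reviewer-approved nodes (the selection gate)."""
    gated = [n for n in population if n.correct and n.original]
    pool = gated if len(gated) >= 2 else population
    a, b = random.sample(pool, 2)
    return a, b

def best(population: list[Node]) -> Node:
    """Export the best reviewer-approved node by the benchmark metric."""
    gated = [n for n in population if n.correct and n.original]
    return max(gated or population, key=lambda n: n.metric)
```

The point of the sketch is that reviewer verdicts filter the mating pool and the exported winner, so a high-metric but rejected candidate cannot dominate the search.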

Abstract

Scientific algorithm discovery is iterative: hypotheses are proposed, implemented, stress-tested, and revised. Current LLM-guided search systems accelerate proposal generation, but often under-represent scientific structure by optimizing code-only artifacts with weak correctness/originality gating. We present CliffSearch, an agentic evolutionary framework in which the core evolution operators (pair selection, crossover, mutation, and review) are implemented as LLM agents, and the loop is designed around three principles: (1) each node is a structured scientific artifact, instantiated in either theory+code or code_only mode, (2) reviewer judgments of correctness and originality are first-class selection gates alongside optimization of the benchmark metric of interest, and (3) mutation is split into exploration and correction pathways with distinct objectives. Exploration mutation imports ideas from adjacent scientific domains to increase novelty, while correction mutation performs targeted evidence-guided repair using reviewer signals over theory, code, benchmark results, and runtime errors. We illustrate the framework on three benchmark-grounded studies: transformer hyper-connection evolution, optimizer discovery on a fixed nanoGPT stack, and a smaller native-optimizer ablation. Across these settings, the same loop supports explicit metric direction, reproducible persistence, and reviewer-gated comparison of discoveries under controlled search conditions. The result is a discovery workflow that prioritizes scientific interpretability and correctness while optimizing task metrics under controlled novelty constraints, rather than maximizing candidate throughput alone. Full run artifacts, interactive visualizations, and exported best nodes for the reported studies are available at https://cliffsearch.ai.
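The exploration/correction split in the abstract amounts to an evidence-driven dispatch. The sketch below is a hypothetical rendering: the feedback field names (`runtime_error`, `correctness_issues`, `benchmark_regression`) and the two mutation stubs are assumptions standing in for the paper's LLM mutation agents.

```python
def mutate(node: dict, feedback: dict) -> dict:
    """Route a candidate to one of two mutation pathways (illustrative sketch).

    Correction: evidence-guided repair when reviewer signals over theory, code,
    benchmark results, or runtime errors indicate a defect.
    Exploration: import an adjacent-domain idea to increase novelty.
    """
    has_evidence = bool(
        feedback.get("runtime_error")
        or feedback.get("correctness_issues")
        or feedback.get("benchmark_regression")
    )
    if has_evidence:
        return correction_mutation(node, feedback)
    return exploration_mutation(node)

def correction_mutation(node: dict, feedback: dict) -> dict:
    """Stand-in for the LLM repair agent: records the evidence it would target."""
    return {**node, "last_op": "correction", "evidence": feedback}

def exploration_mutation(node: dict) -> dict:
    """Stand-in for the LLM novelty agent: would inject adjacent-domain ideas."""
    return {**node, "last_op": "exploration"}
```

Separating the two pathways keeps their objectives distinct: correction consumes concrete failure evidence, while exploration runs only on candidates with no outstanding defects.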