AgenticGEO: A Self-Evolving Agentic System for Generative Engine Optimization

arXiv cs.AI / 2026/3/24


Key Points

  • Generative Engine Optimization (GEO) shifts optimization from ranking prominence to maximizing visibility/attribution within black-box LLM-generated summaries, but prior approaches often use static heuristics or single-prompt strategies that overfit and fail to adapt.
  • The paper introduces AgenticGEO, framing optimization as a content-conditioned control problem and using a self-evolving agentic framework to better handle unpredictable generative engine behavior.
  • AgenticGEO uses a MAP-Elites archive to evolve a diverse set of compositional strategies rather than relying on a fixed strategy pipeline.
  • To reduce expensive interaction feedback with black-box engines, it adds a Co-Evolving Critic surrogate that approximates engine feedback and guides both evolutionary search and inference-time planning.
  • Experiments across in-domain and cross-domain settings on two representative engines show state-of-the-art results, outperforming 14 baselines across 3 datasets, with robust transferability; code/model are released on GitHub.
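The MAP-Elites idea referenced above can be sketched in a few lines: strategies are mapped to grid cells by a behavior descriptor, and each cell keeps only its best-scoring elite, so the archive accumulates a diverse set of strong strategies rather than a single optimum. The strategy representation, descriptor dimensions, and mutation operator below are illustrative placeholders, not the paper's actual design.

```python
import random

GRID = 4  # cells per descriptor dimension (assumed)

def descriptor(strategy):
    # Map a strategy to a 2-D cell, e.g. (edit intensity, structural depth).
    return (min(strategy["intensity"] * GRID // 10, GRID - 1),
            min(strategy["depth"] * GRID // 10, GRID - 1))

def mutate(strategy):
    # Perturb one gene by +/-1, clamped to the valid range [0, 9].
    s = dict(strategy)
    key = random.choice(["intensity", "depth"])
    s[key] = max(0, min(9, s[key] + random.choice([-1, 1])))
    return s

def map_elites(fitness, iterations=200, seed=0):
    random.seed(seed)
    archive = {}  # cell -> (score, strategy)
    elite0 = {"intensity": 5, "depth": 5}
    archive[descriptor(elite0)] = (fitness(elite0), elite0)
    for _ in range(iterations):
        _, parent = random.choice(list(archive.values()))
        child = mutate(parent)
        cell = descriptor(child)
        score = fitness(child)
        if cell not in archive or score > archive[cell][0]:
            archive[cell] = (score, child)  # replace the cell's elite
    return archive
```

In the paper's setting, `fitness` would require a costly black-box engine query, which is exactly the cost the Co-Evolving Critic surrogate is introduced to reduce.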

Abstract

Generative search engines represent a transition from traditional ranking-based retrieval to Large Language Model (LLM)-based synthesis, transforming optimization goals from ranking prominence towards content inclusion. Generative Engine Optimization (GEO), specifically, aims to maximize visibility and attribution in black-box summarized outputs by strategically manipulating source content. However, existing methods rely on static heuristics, single-prompt optimization, or engine preference rule distillation that is prone to overfitting. They cannot flexibly adapt to diverse content or the changing behaviors of generative engines. Moreover, effectively optimizing these strategies requires an impractical amount of interaction feedback from the engines. To address these challenges, we propose AgenticGEO, a self-evolving agentic framework formulating optimization as a content-conditioned control problem, which enhances intrinsic content quality to robustly adapt to the unpredictable behaviors of black-box engines. Unlike fixed-strategy methods, AgenticGEO employs a MAP-Elites archive to evolve diverse, compositional strategies. To mitigate interaction costs, we introduce a Co-Evolving Critic, a lightweight surrogate that approximates engine feedback for content-specific strategy selection and refinement, efficiently guiding both evolutionary search and inference-time planning. Through extensive in-domain and cross-domain experiments on two representative engines, AgenticGEO achieves state-of-the-art performance and demonstrates robust transferability, outperforming 14 baselines across 3 datasets. Our code and model are available at: https://github.com/AIcling/agentic_geo.
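The surrogate-critic pattern described in the abstract can be sketched as a lightweight model fitted on a small buffer of real engine feedback and then used to score and rank candidate strategies without further engine calls. The linear model and feature interface below are assumptions for illustration; the paper's actual critic architecture and training signal are not specified in this summary.

```python
import random

class CriticSurrogate:
    """Tiny linear surrogate approximating black-box engine feedback."""

    def __init__(self, n_features, lr=0.05):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        # Cheap stand-in for one expensive engine interaction.
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def update(self, x, y):
        # One SGD step on squared error against an observed engine score y.
        err = self.predict(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

def select_strategy(critic, candidates, featurize):
    # Rank candidate strategies by surrogate score instead of querying
    # the engine for each one; only the winner needs a real evaluation.
    return max(candidates, key=lambda c: critic.predict(featurize(c)))
```

The same surrogate can serve both roles mentioned in the abstract: filtering offspring during evolutionary search and picking a content-specific strategy at inference time.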