From Experience to Skill: Multi-Agent Generative Engine Optimization via Reusable Strategy Learning
arXiv cs.AI / 4/22/2026
Key Points
- The paper argues that existing Generative Engine Optimization (GEO) approaches optimize each query in isolation and cannot reuse or transfer effective optimization strategies across tasks and engines.
- It reframes GEO as a strategy learning problem and proposes MAGEO, a multi-agent framework that uses coordinated planning, editing, and fidelity-aware evaluation to distill reusable, engine-specific “optimization skills.”
- To support controlled evaluation and causal attribution of changes, the authors introduce the Twin Branch Evaluation Protocol and the DSV-CF metric, which combines semantic visibility with attribution accuracy.
- They release MSME-GEO-Bench, a benchmark spanning multiple scenarios and engines built from real-world user queries, and show that MAGEO improves both content visibility and citation fidelity over heuristic baselines across three mainstream engines.
- Ablation studies indicate that engine-specific preference modeling and strategy reuse are key contributors to the performance gains, pointing to a scalable, learning-driven paradigm for more trustworthy GEO.
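The core idea of strategy reuse described above can be illustrated with a toy sketch. Nothing here comes from the paper itself: the `Strategy` fields, the `SkillLibrary` class, and the keyword-overlap retrieval are all hypothetical stand-ins for whatever representation and semantic matching MAGEO actually uses; the sketch only shows how distilled, engine-specific strategies could be stored once and retrieved for new queries instead of re-optimizing each query in isolation.

```python
from dataclasses import dataclass


@dataclass
class Strategy:
    engine: str        # target generative engine (hypothetical label)
    pattern: str       # task pattern the strategy was distilled from
    edits: list[str]   # reusable content edits to apply


class SkillLibrary:
    """Toy store of reusable, engine-specific optimization strategies."""

    def __init__(self) -> None:
        self._skills: dict[str, list[Strategy]] = {}

    def distill(self, strategy: Strategy) -> None:
        # Record a strategy under its target engine for later reuse.
        self._skills.setdefault(strategy.engine, []).append(strategy)

    def retrieve(self, engine: str, query: str) -> list[Strategy]:
        # Naive keyword overlap stands in for semantic retrieval.
        def overlap(s: Strategy) -> int:
            return len(set(s.pattern.lower().split())
                       & set(query.lower().split()))

        candidates = self._skills.get(engine, [])
        return sorted((s for s in candidates if overlap(s) > 0),
                      key=overlap, reverse=True)


lib = SkillLibrary()
lib.distill(Strategy("engine_a", "product comparison query",
                     ["add spec table", "cite primary sources"]))
lib.distill(Strategy("engine_a", "how-to tutorial query",
                     ["number the steps", "add prerequisites section"]))

# A new query reuses the previously distilled comparison strategy.
matches = lib.retrieve("engine_a", "best product comparison for laptops")
print(matches[0].edits)  # → ['add spec table', 'cite primary sources']
```

The library is keyed by engine, mirroring the paper's finding that engine-specific preference modeling matters: the same query pattern could map to different distilled edits for different engines.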