
Reflection in the Dark: Exposing and Escaping the Black Box in Reflective Prompt Optimization

arXiv cs.AI / 3/20/2026


Key Points

  • The paper identifies four limitations of reflective automatic prompt optimization (APO) methods such as GEPA, showing that black-box, label-free optimization can yield uninterpretable trajectories and systematic failures (e.g., on GSM8K with a defective seed, GEPA degrades accuracy from 23.81% to 13.50%).
  • It proposes VISTA, a multi-agent APO framework that decouples hypothesis generation from prompt rewriting, enabling semantically labeled hypotheses, parallel minibatch verification, and interpretable optimization traces.
  • A two-layer explore–exploit mechanism combining random restart and epsilon-greedy sampling is introduced to help escape local optima during optimization.
  • In experiments on GSM8K and AIME2025, VISTA recovers accuracy to 87.57% on the defective seed and consistently outperforms baselines across conditions.
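The two-layer explore–exploit idea from the key points can be sketched as a candidate-selection routine. This is an illustrative reconstruction, not the paper's actual algorithm: the function name, parameters, and probabilities below are all assumptions.

```python
import random

def choose_candidate(candidates, scores, epsilon=0.2, restart_prob=0.05, rng=None):
    """Two-layer explore-exploit selection over prompt candidates (hypothetical sketch).

    Layer 1 (random restart): with a small probability, jump to a uniformly
    sampled candidate, discarding the current best to escape a local optimum.
    Layer 2 (epsilon-greedy): otherwise exploit the best-scoring candidate
    most of the time, but explore a random one with probability `epsilon`.
    """
    rng = rng or random.Random()
    if rng.random() < restart_prob:   # layer 1: random restart
        return rng.choice(candidates)
    if rng.random() < epsilon:        # layer 2: explore a random candidate
        return rng.choice(candidates)
    # layer 2: exploit the current best-scoring candidate
    return max(candidates, key=lambda c: scores[c])
```

With both random draws above their thresholds, the routine simply exploits the highest-scoring prompt; the two stochastic layers only occasionally divert the search.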

Abstract

Automatic prompt optimization (APO) has emerged as a powerful paradigm for improving LLM performance without manual prompt engineering. Reflective APO methods such as GEPA iteratively refine prompts by diagnosing failure cases, but the optimization process remains black-box and label-free, leading to uninterpretable trajectories and systematic failures. We identify and empirically demonstrate four limitations; for example, on GSM8K with a defective seed, GEPA degrades accuracy from 23.81% to 13.50%. We propose VISTA, a multi-agent APO framework that decouples hypothesis generation from prompt rewriting, enabling semantically labeled hypotheses, parallel minibatch verification, and interpretable optimization traces. A two-layer explore–exploit mechanism combining random restart and epsilon-greedy sampling further helps escape local optima. VISTA recovers accuracy to 87.57% on the same defective seed and consistently outperforms baselines across all conditions on GSM8K and AIME2025.
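The decoupling the abstract describes, generating semantically labeled hypotheses separately from prompt rewriting and verifying them in parallel on a minibatch, might look roughly like the following. Every name here (`Hypothesis`, `verify_hypotheses`, `score_fn`) is a hypothetical stand-in, not VISTA's real API.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import Callable

@dataclass
class Hypothesis:
    label: str                      # semantic tag, e.g. "add-step-by-step-instruction"
    rewrite: Callable[[str], str]   # how this hypothesis would transform the prompt

def verify_hypotheses(prompt, hypotheses, score_fn, minibatch):
    """Apply each hypothesis's rewrite to the prompt, score every rewritten
    prompt on the same minibatch in parallel, and return
    (label, rewritten_prompt, score) tuples sorted best-first.
    `score_fn(prompt, minibatch)` is an assumed evaluation hook
    (e.g. accuracy of an LLM run with that prompt on the minibatch).
    """
    def evaluate(h):
        candidate = h.rewrite(prompt)
        return (h.label, candidate, score_fn(candidate, minibatch))

    with ThreadPoolExecutor() as pool:
        results = list(pool.map(evaluate, hypotheses))
    return sorted(results, key=lambda r: r[2], reverse=True)
```

Because each result carries its hypothesis label, the optimization trace records *why* a rewrite was accepted, which is the interpretability property the abstract emphasizes.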