$p1$: Better Prompt Optimization with Fewer Prompts

arXiv cs.LG / 4/13/2026

Key Points

  • The paper analyzes when prompt optimization (improving a system prompt without updating model weights) works well by decomposing performance variability into two components: generation stochasticity in responses and differences in system prompt quality.
  • It finds prompt optimization is effective when variance among system prompts is large, but it tends to fail when variability is dominated by generation randomness rather than system-prompt differences.
  • The authors show that scaling to more user prompts can reduce the variance among system prompts, especially on heterogeneous datasets where different user prompts favor different system prompts, thereby making optimization harder.
  • They introduce $p1$, a user prompt filtering method that selects a small set of user prompts whose rewards vary strongly across candidate system prompts, so that good and bad system prompts are easier to tell apart (see the sketch after this list).
  • Experiments on reasoning benchmarks indicate $p1$ substantially improves prompt optimization and generalizes well, including a system prompt trained on only two user prompts from AIME 24 that transfers to other reasoning benchmarks.
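
The selection step at the heart of $p1$ is simple enough to sketch. Below is a minimal, hypothetical illustration of the idea, not the authors' implementation: the function name, the shape of the `rewards` matrix, and the random scores standing in for real model evaluations are all assumptions.

```python
import numpy as np

def select_high_variance_user_prompts(rewards: np.ndarray, k: int) -> np.ndarray:
    """Pick the k user prompts whose rewards vary most across system prompts.

    rewards[i, j] is the mean reward of candidate system prompt i on user
    prompt j (averaged over sampled responses, so response stochasticity is
    already smoothed out).
    """
    # Variance over the system-prompt axis yields one score per user prompt.
    per_prompt_variance = rewards.var(axis=0)
    # Keep the k user prompts that best separate the candidate system prompts.
    return np.argsort(per_prompt_variance)[-k:]

# Toy usage: 4 candidate system prompts scored on 6 user prompts.
rng = np.random.default_rng(0)
rewards = rng.random((4, 6))  # stand-in for real benchmark rewards
print(select_high_variance_user_prompts(rewards, k=2))
```

A user prompt that every candidate system prompt solves (or fails) contributes no signal to the search; ranking by cross-candidate variance keeps only the prompts on which the candidates actually disagree.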

Abstract

Prompt optimization improves language models without updating their weights by searching for a better system prompt, but its effectiveness varies widely across tasks. We study what makes a task amenable to prompt optimization. We show that the reward variance across different system prompts can be decomposed into two components: variance among responses, which captures generation stochasticity, and variance among system prompts, which captures differences in system prompt quality. Prompt optimization succeeds when variance among system prompts is sufficiently large, but fails when variance among responses dominates the variance of the system prompts. Surprisingly, we further show that scaling to more user prompts can hurt optimization by reducing variance among system prompts, especially on heterogeneous datasets where different user prompts favor different system prompts. Motivated by this insight, we propose p1, a simple user prompt filtering method that selects a small subset of user prompts with high variance across candidate system prompts. This subset of user prompts allows one to distinguish a good system prompt from a bad one, making system optimization easier. Experiments on reasoning benchmarks show that p1 substantially improves prompt optimization over training on the full dataset and outperforms strong baselines such as GEPA. Notably, training on only two prompts from AIME 24 yields a system prompt that generalizes well to other reasoning benchmarks.
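
One way to make the abstract's decomposition precise, via the law of total variance (the notation here is our assumption, not the paper's): let $R$ be the reward of a response generated under a system prompt $s$ drawn from the candidate pool. Then

$$
\mathrm{Var}(R) \;=\; \underbrace{\mathbb{E}_s\!\big[\mathrm{Var}(R \mid s)\big]}_{\text{variance among responses}} \;+\; \underbrace{\mathrm{Var}_s\!\big(\mathbb{E}[R \mid s]\big)}_{\text{variance among system prompts}}.
$$

On this reading, prompt optimization has signal when the second term is large relative to the first, and $p1$'s filtering can be seen as choosing the user prompts that keep the second term large.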