Optimization before Evaluation: Evaluation with Unoptimized Prompts Can be Misleading

arXiv cs.AI / 5/1/2026


Key Points

  • Current LLM evaluation frameworks typically use a single static prompt template for all models, which can diverge from real-world practice where prompts are optimized per model.
  • The paper studies prompt optimization (PO) and finds that it can substantially change the evaluation outcomes and the resulting model rankings.
  • Experiments on public academic benchmarks and internal industry benchmarks show that PO has a strong impact on which model appears best.
  • The authors conclude that practitioners should perform prompt optimization separately for each model during evaluation to make fair and task-relevant comparisons (a minimal sketch of this workflow follows the list).
  • Overall, the study warns that evaluating with unoptimized prompts may lead to misleading conclusions about model quality.
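
To make the per-model workflow concrete, here is a minimal sketch of "optimize the prompt for each model, then evaluate." Everything in it is hypothetical and not from the paper: the toy "models" are stub functions, and a plain search over a fixed pool of candidate templates stands in for a real PO method.

```python
# Minimal sketch of per-model prompt optimization before evaluation.
# Hypothetical throughout: the models are stubs and the "PO method"
# is a simple search over a fixed pool of candidate templates.
from typing import Callable

Model = Callable[[str], str]  # prompt in, completion out

def exact_match_score(model: Model, template: str,
                      examples: list[tuple[str, str]]) -> float:
    """Fraction of examples the model answers exactly right."""
    hits = sum(model(template.format(x=x)).strip() == y for x, y in examples)
    return hits / len(examples)

def optimize_prompt(model: Model, candidates: list[str],
                    dev_set: list[tuple[str, str]]) -> str:
    """Pick the best template for THIS model on a held-out dev set.
    Real PO methods search a much larger space; a fixed candidate
    pool keeps the sketch self-contained."""
    return max(candidates, key=lambda t: exact_match_score(model, t, dev_set))

# Toy stand-ins for two LLMs with different prompt sensitivities.
def model_a(prompt: str) -> str:
    return "4" if prompt.startswith("Q:") else "?"    # prefers terse prompts

def model_b(prompt: str) -> str:
    return "4" if "step by step" in prompt else "?"   # prefers verbose prompts

candidates = ["Q: {x}\nA:", "Let's think step by step. {x}"]
dev_set = test_set = [("2 + 2 =", "4")]

for name, model in [("A", model_a), ("B", model_b)]:
    best = optimize_prompt(model, candidates, dev_set)  # PO per model ...
    acc = exact_match_score(model, best, test_set)      # ... then evaluate
    print(f"model {name}: best template {best!r}, accuracy {acc:.2f}")
```

The key design point the paper argues for is the placement of `optimize_prompt` inside the per-model loop: each model is scored under its own best prompt, mirroring how it would actually be deployed, rather than under one template shared across all models.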

Abstract

Current Large Language Model (LLM) evaluation frameworks use the same static prompt template across all models under evaluation. This differs from the common industry practice of using prompt optimization (PO) techniques to tailor the prompt to each model and maximize application performance. In this paper, we investigate the effect of PO on LLM evaluations. Our results on public academic and internal industry benchmarks show that PO greatly affects the final ranking of models. This highlights the importance of practitioners performing PO per model when conducting evaluations to choose the best model for a given task.
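
To illustrate the ranking effect the abstract describes, consider a toy comparison. The accuracy numbers below are invented for illustration only, not taken from the paper: under one shared static prompt, model A appears stronger, but once each model is evaluated with its own optimized prompt, model B comes out ahead.

```python
# Invented accuracies showing how a shared static prompt and
# per-model optimized prompts can produce opposite rankings.
accuracy = {
    ("A", "static"):    0.62,
    ("A", "optimized"): 0.65,   # model A barely benefits from PO
    ("B", "static"):    0.31,
    ("B", "optimized"): 0.88,   # model B is very prompt-sensitive
}

for condition in ("static", "optimized"):
    ranking = sorted("AB", key=lambda m: accuracy[(m, condition)], reverse=True)
    print(f"{condition:>9} prompt ranking: {ranking}")

# Expected output:
#    static prompt ranking: ['A', 'B']   <- A looks best
# optimized prompt ranking: ['B', 'A']   <- B is better for the actual task
```

A practitioner choosing a model from the static-prompt leaderboard would pick A, even though B serves the task better once deployed with a tuned prompt; this is the misleading conclusion the paper warns about.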