From Feelings to Metrics: Understanding and Formalizing How Users Vibe-Test LLMs

arXiv cs.CL / 4/16/2026


Key Points

  • The paper argues that standard LLM benchmark scores often miss real-world usefulness, motivating the need to understand informal “vibe-testing” used by practitioners.
  • It analyzes two real-world sources—a survey of user evaluation practices and in-the-wild model comparison posts—to characterize how vibe-testing is performed.
  • The authors formalize vibe-testing as a two-part process where users personalize both (1) what tasks/prompts are tested and (2) the subjective criteria used to judge outputs.
  • They present a proof-of-concept evaluation pipeline that generates personalized prompts and evaluates model outputs using user-aware, subjective measures.
  • Experiments on coding benchmarks show that personalized prompting plus user-aware evaluation can change which model users prefer, indicating that vibe-testing can be systematically studied and used to bridge benchmarks and experience.

Abstract

Evaluating LLMs is challenging, as benchmark scores often fail to capture models' real-world usefulness. Instead, users often rely on "vibe-testing": informal experience-based evaluation, such as comparing models on coding tasks related to their own workflow. While prevalent, vibe-testing is often too ad hoc and unstructured to analyze or reproduce at scale. In this work, we study how vibe-testing works in practice and then formalize it to support systematic analysis. We first analyze two empirical resources: (1) a survey of user evaluation practices, and (2) a collection of in-the-wild model comparison reports from blogs and social media. Based on these resources, we formalize vibe-testing as a two-part process: users personalize both what they test and how they judge responses. We then introduce a proof-of-concept evaluation pipeline that follows this formulation by generating personalized prompts and comparing model outputs using user-aware subjective criteria. In experiments on coding benchmarks, we find that combining personalized prompts and user-aware evaluation can change which model is preferred, reflecting the role of vibe-testing in practice. These findings suggest that formalized vibe-testing can serve as a useful approach for bridging benchmark scores and real-world experience.
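The two-part formulation above can be sketched as a minimal pipeline. This is an illustrative sketch only, not the paper's actual implementation: the `UserProfile` fields, keyword-based scoring, and stubbed "models" are all assumptions introduced for the example. Step 1 personalizes *what* is tested (rewriting seed tasks toward the user's workflow) and step 2 personalizes *how* outputs are judged (a weighted set of subjective criteria).

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the two-part vibe-testing formulation:
# (1) personalize WHAT is tested, (2) personalize HOW outputs are judged.
# Model calls are stubbed; all names and weights are illustrative.

@dataclass
class UserProfile:
    domain: str                                    # e.g. "backend Python"
    criteria: dict = field(default_factory=dict)   # subjective criterion -> weight

def personalize_prompts(profile: UserProfile, seed_tasks: list) -> list:
    """Step 1: adapt generic benchmark tasks toward the user's own workflow."""
    return [f"[{profile.domain}] {task}" for task in seed_tasks]

def judge(output: str, profile: UserProfile) -> float:
    """Step 2: score an output with user-aware subjective criteria.
    Here each criterion is a keyword whose presence earns its weight."""
    return sum(w for crit, w in profile.criteria.items() if crit in output)

def vibe_test(profile: UserProfile, seed_tasks: list, models: dict) -> str:
    """Compare models on personalized prompts; return the preferred model name."""
    prompts = personalize_prompts(profile, seed_tasks)
    scores = {name: sum(judge(fn(p), profile) for p in prompts)
              for name, fn in models.items()}
    return max(scores, key=scores.get)

# Toy usage: two stub "models" whose outputs differ in style.
profile = UserProfile(domain="backend Python",
                      criteria={"typed": 1.0, "concise": 0.5})
models = {
    "model_a": lambda p: "concise answer",
    "model_b": lambda p: "typed and concise answer",
}
preferred = vibe_test(profile, ["write a retry decorator"], models)
print(preferred)  # model_b wins under this user's criteria
```

Changing the criteria weights (e.g., dropping `"typed"`) flips the preferred model, which mirrors the paper's finding that personalization can change which model a user prefers.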