From Feelings to Metrics: Understanding and Formalizing How Users Vibe-Test LLMs
arXiv cs.CL / 4/16/2026
Key Points
- The paper argues that standard LLM benchmark scores often miss real-world usefulness, motivating the need to understand informal “vibe-testing” used by practitioners.
- It analyzes two real-world sources—a survey of user evaluation practices and in-the-wild model comparison posts—to characterize how vibe-testing is performed.
- The authors formalize vibe-testing as a two-part process where users personalize both (1) what tasks/prompts are tested and (2) the subjective criteria used to judge outputs.
- They present a proof-of-concept evaluation pipeline that generates personalized prompts and evaluates model outputs using user-aware, subjective measures (see the sketch after this list).
- Experiments on coding benchmarks show that personalized prompting combined with user-aware evaluation can change which model a user ends up preferring, suggesting that vibe-testing can be studied systematically and used to bridge the gap between benchmark scores and lived user experience.
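To make the two-part formalization concrete, here is a minimal Python sketch of such a pipeline. It is not the paper's implementation; all names (`UserProfile`, `generate_personalized_prompts`, `judge_output`, `vibe_test`) and the 0-10 rubric prompt are hypothetical, and any text-in/text-out model callable can stand in for the LLM endpoints.

```python
# Hypothetical sketch of the two-part vibe-testing pipeline summarized above:
# (1) personalize WHICH prompts are tested, (2) personalize HOW outputs are judged.
from dataclasses import dataclass, field
from typing import Callable

LLM = Callable[[str], str]  # any text-in/text-out model endpoint


@dataclass
class UserProfile:
    """The two things a user personalizes when vibe-testing (assumed structure)."""
    tasks: list[str]                                   # (1) tasks the user actually cares about
    criteria: list[str] = field(
        default_factory=lambda: ["clarity", "tone"]    # (2) subjective judging criteria
    )


def generate_personalized_prompts(profile: UserProfile, generator: LLM) -> list[str]:
    """Part 1: turn the user's task interests into concrete test prompts."""
    return [
        generator(f"Write a realistic prompt a user would send for this task: {task}")
        for task in profile.tasks
    ]


def judge_output(output: str, profile: UserProfile, judge: LLM) -> float:
    """Part 2: score an output against the user's own criteria, normalized to [0, 1]."""
    rubric = ", ".join(profile.criteria)
    verdict = judge(
        f"Rate this response from 0 to 10 on: {rubric}.\n\n{output}\n\nReply with a number."
    )
    try:
        return min(max(float(verdict) / 10.0, 0.0), 1.0)
    except ValueError:
        return 0.0  # an unparsable judgment counts as a miss


def vibe_test(models: dict[str, LLM], profile: UserProfile, helper: LLM) -> dict[str, float]:
    """Run each candidate model on the personalized prompts; return mean user-aware scores."""
    prompts = generate_personalized_prompts(profile, helper)
    return {
        name: sum(judge_output(model(p), profile, helper) for p in prompts) / len(prompts)
        for name, model in models.items()
    }
```

Under this framing, the ranking `vibe_test` returns depends on the user's tasks and criteria rather than on a fixed benchmark, which is how personalized prompting plus user-aware judging can flip which model comes out on top.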