Failure-Centered Runtime Evaluation for Deployed Trilingual Public-Space Agents
arXiv cs.AI · April 28, 2026
Key Points
- The paper introduces PSA-Eval, a failure-centered framework for runtime evaluation of deployed trilingual public-space agents, arguing that analysis should focus on failures rather than only input-output scores.
- PSA-Eval extends a conventional Question→Answer→Score pipeline into an evaluation workflow that tracks Question→Batch→Run→Score→Failure Case→Repair→Regression Batch, enabling failures to be traced, reviewed, repaired, and regression-tested.
- It uses trilingual equivalent inputs as controlled probes to detect group-level cross-language policy drift in real deployments.
- A pilot study on a deployed trilingual digital front-desk system (81 samples across 27 question groups) found high average performance (23.15/24) alongside measurable cross-language score drift, with a maximum drift of 9 points on a single question group.
- The results suggest that failure-centered runtime evaluation can reveal structured deployment issues that may be obscured by aggregate scoring metrics.
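The extended workflow above (Question→Batch→Run→Score→Failure Case→Repair→Regression Batch) can be sketched as a small set of data structures. This is a minimal illustration of the idea, not the paper's actual schema; all class and field names here are assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a failure-centered evaluation workflow in the
# spirit of PSA-Eval. Names and thresholds are illustrative assumptions.

@dataclass
class Run:
    question_id: str
    language: str      # one of the three deployment languages (not named in the summary)
    answer: str
    score: float

@dataclass
class FailureCase:
    run: Run
    diagnosis: str     # filled in during human or automated review

@dataclass
class Batch:
    runs: list[Run] = field(default_factory=list)

    def failures(self, threshold: float) -> list[FailureCase]:
        """Flag runs scoring below a threshold as failure cases for review."""
        return [FailureCase(r, "below threshold")
                for r in self.runs if r.score < threshold]

def regression_batch(failures: list[FailureCase]) -> Batch:
    """After a repair, re-run only the previously failing questions."""
    return Batch(runs=[f.run for f in failures])
```

The point of the extra stages is that a failing run becomes a persistent object that can be diagnosed, repaired, and then replayed as a regression batch, rather than vanishing into an aggregate score.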
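The group-level cross-language drift the pilot measures can be computed in a few lines: for each question group, take the spread between the best- and worst-scoring language variant. The function and the example scores below are illustrative assumptions echoing the pilot's shape (trilingual groups, a 24-point scale), not its actual data.

```python
from collections import defaultdict

def cross_language_drift(scores: dict[tuple[str, str], float]) -> dict[str, float]:
    """Per question group, drift = max score - min score across languages.

    `scores` maps (group_id, language) -> score. A drift of 0 means the
    agent scored all language variants of the group identically.
    """
    by_group: dict[str, list[float]] = defaultdict(list)
    for (group, _lang), s in scores.items():
        by_group[group].append(s)
    return {g: max(v) - min(v) for g, v in by_group.items()}

# Illustrative numbers only (not the pilot's data):
scores = {
    ("Q01", "en"): 24, ("Q01", "zh"): 24, ("Q01", "ms"): 24,  # no drift
    ("Q02", "en"): 24, ("Q02", "zh"): 15, ("Q02", "ms"): 22,  # 9-point drift
}
drift = cross_language_drift(scores)
# drift["Q01"] == 0, drift["Q02"] == 9
```

This also shows why aggregate metrics can hide the problem: the mean score over these six runs is high, while one group drifts by 9 points across languages.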