AI Navigate

Automated Self-Testing as a Quality Gate: Evidence-Driven Release Management for LLM Applications

arXiv cs.AI / 3/18/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage · Models & Research

Key Points

  • The paper introduces an automated self-testing framework that enforces quality gates (PROMOTE/HOLD/ROLLBACK) to support evidence-based release decisions for LLM applications, evaluating five empirically grounded dimensions: task success rate, research context preservation, P95 latency, safety pass rate, and evidence coverage.
  • It demonstrates the approach with a longitudinal case study of an internally deployed multi-agent conversational AI system with marketing capabilities, spanning 38 evaluation runs across 20+ internal releases.
  • Results show the gate identified two ROLLBACK-grade builds in early runs, supported stable quality evolution over a four-week staging lifecycle, and indicated that evidence coverage is the primary discriminator of severe regressions, with runtime scaling predictably with suite size.
  • A human calibration study (n=60, two evaluators, LLM-as-judge cross-validation) reveals complementary multi-modal coverage between the judge and the gate: the gate uncovers latency and routing issues not visible in response text, while the judge surfaces content-quality failures, validating the multi-dimensional gate design. Supplementary pseudocode and calibration artifacts are provided for replication.
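The gate mechanism described above can be sketched as a simple decision function over the five dimensions. This is a hypothetical illustration only: the metric names, thresholds, and rule ordering below are assumptions for clarity, not the paper's actual values.

```python
# Hypothetical sketch of a five-dimension quality gate.
# All thresholds are illustrative, not taken from the paper.
from dataclasses import dataclass


@dataclass
class EvalRun:
    task_success_rate: float      # fraction of scenarios passed
    context_preservation: float   # research-context retention score
    p95_latency_s: float          # 95th-percentile latency in seconds
    safety_pass_rate: float       # fraction of safety checks passed
    evidence_coverage: float      # fraction of claims backed by evidence


def gate_decision(run: EvalRun) -> str:
    # Severe regressions trigger ROLLBACK; evidence coverage is
    # treated as the primary severe-regression discriminator.
    if run.evidence_coverage < 0.5 or run.safety_pass_rate < 0.9:
        return "ROLLBACK"
    # Marginal misses on the remaining dimensions put the release on HOLD.
    if (run.task_success_rate < 0.8
            or run.context_preservation < 0.8
            or run.p95_latency_s > 10.0):
        return "HOLD"
    return "PROMOTE"
```

In a CI pipeline, such a function would run after each evaluation suite and the returned verdict would drive the promotion step, which mirrors the paper's framing of release governance as an automated, evidence-based gate rather than a manual sign-off.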

Abstract

LLM applications are AI systems whose non-deterministic outputs and evolving model behavior make traditional testing insufficient for release governance. We present an automated self-testing framework that introduces quality gates with evidence-based release decisions (PROMOTE/HOLD/ROLLBACK) across five empirically grounded dimensions: task success rate, research context preservation, P95 latency, safety pass rate, and evidence coverage. We evaluate the framework through a longitudinal case study of an internally deployed multi-agent conversational AI system with specific marketing capabilities in active development, covering 38 evaluation runs across 20+ internal releases. The gate identified two ROLLBACK-grade builds in early runs and supported stable quality evolution over a four-week staging lifecycle while exercising persona-grounded, multi-turn, adversarial, and evidence-required scenarios. Statistical analysis (Mann-Kendall trends, Spearman correlations, bootstrap confidence intervals), gate ablation, and overhead scaling indicate that evidence coverage is the primary severe-regression discriminator and that runtime scales predictably with suite size. A human calibration study (n=60 stratified cases, two independent evaluators, LLM-as-judge cross-validation) reveals complementary multi-modal coverage: LLM-judge disagreements with the system gate (kappa=0.13) are attributable to structural failure modes such as latency violations and routing errors that are invisible in response text alone, while the judge independently surfaces content quality failures missed by structural checks, validating the multi-dimensional gate design. The framework, supplementary pseudocode, and calibration artifacts are provided to support AI-system quality assurance and independent replication.
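The abstract's kappa=0.13 between the LLM judge and the system gate is Cohen's kappa, which corrects raw agreement for agreement expected by chance; a value near zero means the two verdicts are largely independent, which the authors interpret as complementary rather than redundant coverage. A minimal computation (assuming two raters over categorical verdicts; not the paper's implementation):

```python
# Cohen's kappa for two raters (e.g. quality gate vs. LLM judge).
# kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
# and p_e is agreement expected from the raters' label marginals.
from collections import Counter


def cohens_kappa(a: list, b: list) -> float:
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    ca, cb = Counter(a), Counter(b)
    # expected agreement if the raters labeled independently
    p_e = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Identical label sequences give kappa = 1.0, while agreement at exactly the chance rate gives kappa = 0.0; a score like 0.13 sits just above chance, consistent with the paper's finding that the judge and the gate catch different failure modes.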