Evaluating AI Meeting Summaries with a Reusable Cross-Domain Pipeline

arXiv cs.CL / 4/24/2026


Key Points

  • The paper introduces a reusable, cross-domain evaluation pipeline for generative AI applications, demonstrated with AI meeting summaries and packaged as a public artifact derived from a dataset pipeline.
  • The approach modularizes the workflow into five stages—source intake, structured reference construction, candidate generation, structured scoring, and reporting—while treating both ground truth and evaluator outputs as typed, persisted artifacts.
  • Offline benchmarking on 114 meetings across city_council, private_data, and whitehouse_press_briefings creates 340 meeting-model pairs and 680 judge runs across GPT-4.1-mini, GPT-5-mini, and GPT-5.1.
  • Results show GPT-4.1-mini has the top mean accuracy (0.583), while GPT-5.1 leads in completeness (0.886) and coverage (0.942), with sign tests indicating no significant accuracy winner but significant retention gains for GPT-5.1.
  • A typed DeepEval contrastive baseline preserves the retention ordering but reports higher holistic accuracy, suggesting reference-based scoring can miss the unsupported-specifics errors that claim-grounded evaluation catches; typed analysis flags whitehouse_press_briefings as the most accuracy-challenging domain, with frequent unsupported specifics.
  • A deployment follow-up shows GPT-5.4 outperforming GPT-4.1 on all metrics, with statistically robust gains on the retention metrics under the same protocol.
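
The paired sign tests with Holm correction mentioned above can be sketched in plain Python. This is an assumed implementation for illustration, not the paper's code; the function names and the tie-handling convention (ties dropped, two-sided exact binomial tail) are my choices:

```python
import math
from typing import Sequence

def sign_test_p(a: Sequence[float], b: Sequence[float]) -> float:
    """Two-sided exact sign test on paired per-meeting scores; ties are dropped."""
    wins = sum(x > y for x, y in zip(a, b))
    losses = sum(x < y for x, y in zip(a, b))
    n = wins + losses
    if n == 0:
        return 1.0
    k = min(wins, losses)
    # P(result at least this extreme) under H0: win prob = 0.5
    tail = sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

def holm_correct(pvals: dict[str, float], alpha: float = 0.05) -> dict[str, bool]:
    """Holm step-down: reject for the i-th smallest p iff p <= alpha/(m-i),
    stopping at the first failure."""
    m = len(pvals)
    decisions, rejecting = {}, True
    for i, (name, p) in enumerate(sorted(pvals.items(), key=lambda kv: kv[1])):
        if rejecting and p <= alpha / (m - i):
            decisions[name] = True
        else:
            rejecting = False
            decisions[name] = False
    return decisions
```

With hypothetical per-metric p-values like `{"accuracy": 0.5, "completeness": 0.001, "coverage": 0.002}`, Holm correction would reject the null for the two retention metrics but not for accuracy, mirroring the qualitative pattern the paper reports.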

Abstract

We present a reusable evaluation pipeline for generative AI applications, instantiated for AI meeting summaries and released with a public artifact package derived from a Dataset Pipeline. The system separates reusable orchestration from task-specific semantics across five stages: source intake, structured reference construction, candidate generation, structured scoring, and reporting. Unlike standalone claim scorers, it treats both ground truth and evaluator outputs as typed, persisted artifacts, enabling aggregation, issue analysis, and statistical testing. We benchmark the offline loop on a typed dataset of 114 meetings spanning city_council, private_data, and whitehouse_press_briefings, producing 340 meeting-model pairs and 680 judge runs across gpt-4.1-mini, gpt-5-mini, and gpt-5.1. Under this protocol, gpt-4.1-mini achieves the highest mean accuracy (0.583), while gpt-5.1 leads in completeness (0.886) and coverage (0.942). Paired sign tests with Holm correction show no significant accuracy winner but confirm significant retention gains for gpt-5.1. A typed DeepEval contrastive baseline preserves retention ordering but reports higher holistic accuracy, suggesting that reference-based scoring may overlook unsupported-specifics errors captured by claim-grounded evaluation. Typed analysis identifies whitehouse_press_briefings as an accuracy-challenging domain with frequent unsupported specifics. A deployment follow-up shows gpt-5.4 outperforming gpt-4.1 across all metrics, with statistically robust gains on retention metrics under the same protocol. We benchmark the offline loop and document, but do not quantitatively evaluate, the online feedback-to-evaluation path.
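
As a rough illustration of what "typed, persisted artifacts" could look like in practice, here is a minimal Python sketch. The stage names come from the abstract, but every class, field, and function name below is hypothetical, not the paper's actual schema or API:

```python
from dataclasses import dataclass, asdict
import json

# Illustrative typed artifacts, one per pipeline stage output.
@dataclass
class Reference:
    """Output of structured reference construction (ground-truth claims)."""
    meeting_id: str
    claims: list[str]

@dataclass
class Candidate:
    """Output of candidate generation (one summary per meeting-model pair)."""
    meeting_id: str
    model: str
    summary: str

@dataclass
class JudgeRun:
    """Output of structured scoring (one judge run over a candidate)."""
    meeting_id: str
    model: str
    accuracy: float
    completeness: float
    coverage: float

def persist(artifact, path: str) -> None:
    """Persist any typed artifact as JSON so downstream aggregation,
    issue analysis, and statistical testing can re-read it."""
    with open(path, "w") as f:
        json.dump(asdict(artifact), f)
```

The point of the typed-artifact design, as described in the abstract, is that scoring outputs are first-class records rather than transient scores, so later stages (reporting, sign tests) consume the same persisted objects.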