Generative Active Testing: Efficient LLM Evaluation via Proxy Task Adaptation

arXiv cs.AI / 3/23/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • Generative Active Testing (GAT) introduces an uncertainty-aware acquisition framework that uses LLMs as surrogates to guide sample selection for evaluating generative QA tasks.
  • The Statement Adaptation Module converts generative tasks into a pseudo-classification format to capture sample-level uncertainties across unlabeled candidates (a minimal sketch of this idea follows the list).
  • The zero-shot acquisition functions reduce estimation error by about 40% compared with traditional sampling baselines, enabling cost-effective benchmarking in domains like healthcare and biomedicine.
  • The approach addresses the cost and scalability challenges of developing new benchmarks for LLM evaluation by enabling more efficient task-specific testing.
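
To make the pseudo-classification idea concrete, here is a minimal Python sketch under stated assumptions: `adapt_to_statement` and `surrogate_p_true` are hypothetical stand-ins for the paper's Statement Adaptation Module and surrogate LLM, and the uncertainty score is plain binary entropy over the surrogate's true/false probabilities. The paper's actual module and acquisition functions may differ.

```python
import math
from typing import Callable, Dict, List


def adapt_to_statement(question: str, candidate_answer: str) -> str:
    """Hypothetical adaptation step: fold a QA pair into one checkable statement."""
    return f"Statement: the answer to '{question}' is '{candidate_answer}'. True or false?"


def binary_entropy(p_true: float) -> float:
    """Entropy of the two-class (true/false) pseudo-classification distribution."""
    p_true = min(max(p_true, 1e-9), 1.0 - 1e-9)  # guard against log(0)
    p_false = 1.0 - p_true
    return -(p_true * math.log(p_true) + p_false * math.log(p_false))


def score_pool(pool: List[Dict], surrogate_p_true: Callable[[str], float]) -> List[Dict]:
    """Attach an uncertainty score to every unlabeled candidate.

    `surrogate_p_true` stands in for the surrogate LLM: it maps a statement to
    the probability the model assigns to the "true" continuation.
    """
    scored = []
    for item in pool:
        statement = adapt_to_statement(item["question"], item["answer"])
        p = surrogate_p_true(statement)
        scored.append({**item, "uncertainty": binary_entropy(p)})
    return scored
```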

Abstract

With the widespread adoption of pre-trained Large Language Models (LLMs), there is high demand for task-specific test sets to benchmark their performance in domains such as healthcare and biomedicine. However, the cost of labeling test samples while developing new benchmarks poses a significant challenge, especially when expert annotators are required. Existing frameworks for active sample selection offer limited support for generative Question Answering tasks, where option dynamics can affect model decision boundaries. In this paper, we present Generative Active Testing (GAT), an uncertainty-aware acquisition framework that leverages LLMs as surrogates to inform the sample selection process. Using a novel Statement Adaptation Module, we convert generative tasks into a pseudo-classification format, enabling the capture of sample-level uncertainties across unlabeled candidates. Our zero-shot acquisition functions reduce estimation error by ~40% compared with traditional sampling baselines, offering a scalable solution for cost-effective model benchmarking.
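
As a rough, self-contained illustration of how such uncertainty scores could drive active testing, the sketch below spends a fixed expert-labeling budget on the most uncertain candidates and estimates error from the labeled subset. The greedy top-k selection and the plain mean error estimator are generic stand-ins assumed for illustration, not the paper's zero-shot acquisition functions or estimator.

```python
from typing import Callable, Dict, List


def select_for_labeling(scored_pool: List[Dict], budget: int) -> List[Dict]:
    """Greedy acquisition: pick the `budget` most uncertain unlabeled candidates."""
    ranked = sorted(scored_pool, key=lambda x: x["uncertainty"], reverse=True)
    return ranked[:budget]


def estimate_error(labeled: List[Dict], is_correct: Callable[[Dict], bool]) -> float:
    """Estimate the evaluated model's error rate from the expert-labeled subset.

    `is_correct` returns True when the model's answer agrees with the expert
    label; an importance-weighted estimator could replace this simple mean.
    """
    if not labeled:
        return 0.0
    return sum(0.0 if is_correct(x) else 1.0 for x in labeled) / len(labeled)
```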