The Structured Output Benchmark: A Multi-Source Benchmark for Evaluating Structured Output Quality in Large Language Models

arXiv cs.CL / 4/29/2026

Key Points

  • The paper introduces SOB (Structured Output Benchmark), a multi-source benchmark for assessing how well large language models generate structured outputs (e.g., JSON) across native text, images, and audio conversations.
  • SOB standardizes the model inputs via a text-normalized representation across modalities to fairly isolate structured-output quality from raw vision or speech-processing performance.
  • The benchmark includes 5,000 text records from multi-hop QA, 209 image records from OCR-processed PDFs covering challenging document types, and 115 audio records from the AMI corpus; each record requires a JSON-schema-following answer grounded in the source context (an illustrative record sketch follows this list).
  • Across 21 frontier and open-weight models, results show near-perfect schema compliance but substantially lower value correctness: exact leaf-value match peaks at 83.0% (text), 67.2% (images), and 23.7% (audio), where longer contexts make extraction substantially harder.
  • The authors release the dataset, evaluation pipeline, and all related code to enable reproducible, source-agnostic structured-output evaluation.
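
To make the record format concrete, here is a rough illustration only: the field names and the example content below are hypothetical, not taken from the released dataset, and the schema check uses the standard jsonschema package rather than the authors' evaluation pipeline.

```python
from jsonschema import validate, ValidationError

# Hypothetical SOB-style record: a question, the JSON schema the model must
# follow, and a ground-truth answer grounded in the source context.
# Field names here are illustrative, not the dataset's actual format.
record = {
    "question": "Who chaired the meeting and how long did it run?",
    "schema": {
        "type": "object",
        "properties": {
            "chair": {"type": "string"},
            "duration_minutes": {"type": "integer"},
        },
        "required": ["chair", "duration_minutes"],
        "additionalProperties": False,
    },
    "ground_truth": {"chair": "Alice", "duration_minutes": 45},
}

model_output = {"chair": "Alice", "duration_minutes": 45}

# Schema compliance: does the model output conform to the requested schema?
try:
    validate(instance=model_output, schema=record["schema"])
    print("schema-compliant")
except ValidationError as err:
    print(f"schema violation: {err.message}")
```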

Abstract

Large Language Models are increasingly being deployed to extract structured data from unstructured and semi-structured sources: parsing invoices and medical records, and converting PDF documents to database entries. Yet existing benchmarks for structured output generation either focus on schema compliance alone or evaluate value correctness within a single source domain. We introduce SOB (The Structured Output Benchmark), a multi-source benchmark spanning three source modalities: native text, images, and audio conversations. All models receive a text-normalized representation of their context regardless of source modality; this deliberate design isolates structured-output capability from raw vision or speech-processing quality, ensuring a fair, source-agnostic comparison. Our benchmark comprises 5,000 text evaluation records derived from multi-hop QA drawn from a 25,091-record full corpus, 209 image records from OCR-processed PDFs across seven document types including multi-column layouts, dense tables, scanned historical documents, small-print text, and mathematical typesetting, and 115 audio records from the AMI corpus. Each record pairs a natural-language question with a JSON schema that the model must follow and a ground-truth answer verified against the source context. We evaluate 21 frontier and open-weight models across three source domains and seven metrics. Our results reveal a consistent pattern: models achieve near-perfect schema compliance, yet the best Value Accuracy, measured by exact leaf-value match, reaches only 83.0% on text, 67.2% on images, and 23.7% on audio, where longer context makes extraction substantially harder. We release the dataset, evaluation pipeline, and all related code.
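
The abstract defines Value Accuracy as exact leaf-value match. The paper's precise scoring rules are not spelled out here, so the following is a minimal sketch of one plausible implementation, assuming the metric flattens prediction and ground truth into leaf paths and counts exact matches; it is not the authors' code.

```python
_MISSING = object()  # sentinel so an absent path never matches a None value


def leaves(obj, path=()):
    """Yield (path, value) pairs for every leaf in a nested JSON-like object."""
    if isinstance(obj, dict):
        for key, val in obj.items():
            yield from leaves(val, path + (key,))
    elif isinstance(obj, list):
        for idx, val in enumerate(obj):
            yield from leaves(val, path + (idx,))
    else:
        yield path, obj


def value_accuracy(prediction, ground_truth):
    """Fraction of ground-truth leaves exactly reproduced at the same path
    in the prediction. A sketch of exact leaf-value match, not the paper's
    official metric."""
    pred = dict(leaves(prediction))
    truth = list(leaves(ground_truth))
    if not truth:
        return 1.0
    hits = sum(1 for path, val in truth if pred.get(path, _MISSING) == val)
    return hits / len(truth)


# Example: one wrong leaf out of two -> 0.5
print(value_accuracy(
    {"chair": "Alice", "duration_minutes": 40},
    {"chair": "Alice", "duration_minutes": 45},
))
```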