AI Navigate

Real-Time Trustworthiness Scoring for LLM Structured Outputs and Data Extraction

arXiv cs.CL · March 20, 2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • CONSTRUCT introduces a real-time trustworthiness scoring method for LLM structured outputs, flagging the outputs most likely to contain errors so human review can be targeted where it matters.
  • The method scores trustworthiness at the level of individual fields within a structured output, enabling reviewers to focus on the parts that are wrong.
  • It works with any LLM, including black-box APIs without logprobs, and does not require labeled training data or custom model deployment.
  • The evaluation spans four datasets and shows higher precision/recall than other scoring methods at detecting errors from models such as Gemini 3 and GPT-5.
  • The work provides one of the first public benchmarks for LLM structured outputs with reliable ground-truth values, including support for complex outputs with nested JSON schemas.
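The abstract does not disclose CONSTRUCT's scoring mechanism, so the sketch below is an illustration, not the paper's method: a self-consistency approach that shares the properties listed above (works with black-box APIs, no logprobs, no labeled training data). It resamples the extraction several times and scores each field by how often the modal value recurs; `sample_fn` is a hypothetical wrapper around your own LLM call.

```python
from collections import Counter
from typing import Any, Callable, Dict

def flatten(obj: Any, prefix: str = "") -> Dict[str, Any]:
    """Flatten nested JSON into dotted field paths (handles nested schemas)."""
    out: Dict[str, Any] = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            out.update(flatten(value, f"{prefix}{key}."))
    else:
        out[prefix.rstrip(".")] = obj
    return out

def field_trust_scores(sample_fn: Callable[[], Dict[str, Any]],
                       n: int = 5) -> Dict[str, float]:
    """Score each field by agreement across n resampled extractions.

    A field whose value is stable across samples scores near 1.0; an
    unstable field scores lower and can be flagged for human review.
    """
    samples = [flatten(sample_fn()) for _ in range(n)]
    fields = {f for s in samples for f in s}
    scores: Dict[str, float] = {}
    for f in fields:
        values = [repr(s.get(f)) for s in samples]
        # Fraction of samples agreeing with the most common value.
        scores[f] = Counter(values).most_common(1)[0][1] / n
    return scores

# Usage with canned responses standing in for repeated LLM calls:
responses = iter([
    {"invoice": {"total": 100, "date": "2026-03-20"}},
    {"invoice": {"total": 100, "date": "2026-03-20"}},
    {"invoice": {"total": 120, "date": "2026-03-20"}},
])
scores = field_trust_scores(lambda: next(responses), n=3)
# "invoice.date" is stable (score 1.0); "invoice.total" disagrees once.
```

Resampling trades extra API calls for a score, which is one plausible way to meet the "any LLM, no logprobs" constraint; whether CONSTRUCT works this way is not stated in the abstract.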

Abstract

Structured Outputs from current LLMs exhibit sporadic errors, hindering enterprise AI efforts from realizing their immense potential. We present CONSTRUCT, a method to score the trustworthiness of LLM Structured Outputs in real-time, such that lower-scoring outputs are more likely to contain errors. This reveals the best places to focus limited human review bandwidth. CONSTRUCT additionally scores the trustworthiness of each field within an LLM Structured Output, helping reviewers quickly identify which parts of the output are wrong. Our method is suitable for any LLM (including black-box LLM APIs without logprobs, such as reasoning models and Anthropic models), requires neither labeled training data nor custom model deployment, and works for complex Structured Outputs with many fields of diverse types (including nested JSON schemas). We additionally present one of the first public LLM Structured Output benchmarks with reliable ground-truth values that are not full of mistakes. Over this four-dataset benchmark, CONSTRUCT detects errors from various LLMs (including Gemini 3 and GPT-5) with significantly higher precision/recall than other scoring methods.