LED: A Benchmark for Evaluating Layout Error Detection in Document Analysis

arXiv cs.CV / 3/19/2026

Key Points

  • LED is a new benchmark for Document Layout Analysis that evaluates structural reasoning beyond surface-level accuracy metrics like IoU or mAP.
  • It defines eight error types (Missing, Hallucination, Size Error, Split, Merge, Overlap, Duplicate, and Misclassification) and provides rules and injection algorithms to realistically simulate these errors.
  • LED-Dataset and three evaluation tasks (document-level error detection, document-level error-type classification, and element-level error-type classification) enable fine-grained assessment of model understanding.
  • Experiments with state-of-the-art multimodal models reveal clear weaknesses across modalities and architectures, highlighting LED as an explainable diagnostic tool for structural robustness.
  • Overall, LED offers a unified and explainable benchmark for diagnosing the structural robustness and reasoning capabilities of document understanding models.

Abstract

Recent advances in Large Language Models (LLMs) and Large Multimodal Models (LMMs) have improved Document Layout Analysis (DLA), yet structural errors such as region merging, splitting, and omission remain persistent. Conventional overlap-based metrics (e.g., IoU, mAP) fail to capture such logical inconsistencies. To overcome this limitation, we propose Layout Error Detection (LED), a benchmark that evaluates structural reasoning in DLA predictions beyond surface-level accuracy. LED defines eight standardized error types (Missing, Hallucination, Size Error, Split, Merge, Overlap, Duplicate, and Misclassification) and provides quantitative rules and injection algorithms for realistic error simulation. Using these definitions, we construct LED-Dataset and design three evaluation tasks: document-level error detection, document-level error-type classification, and element-level error-type classification. Experiments with state-of-the-art multimodal models show that LED enables fine-grained and interpretable assessment of structural understanding, revealing clear weaknesses across modalities and architectures. Overall, LED establishes a unified and explainable benchmark for diagnosing the structural robustness and reasoning capability of document understanding models.
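The paper's error-injection idea can be illustrated with a small sketch. LED's actual quantitative rules and injection algorithms are not reproduced here; the function names and the bounding-box geometry below are illustrative assumptions, with only the error-type names (Split, Merge, Missing) taken from the paper's taxonomy.

```python
import random

# Hypothetical sketch of layout-error injection in the spirit of LED.
# Regions are axis-aligned boxes (x0, y0, x1, y1); LED's real rules
# are more detailed than this illustration.

def inject_split(box):
    """Simulate a Split error: cut one region horizontally into two."""
    x0, y0, x1, y1 = box
    mid = (y0 + y1) / 2
    return [(x0, y0, x1, mid), (x0, mid, x1, y1)]

def inject_merge(box_a, box_b):
    """Simulate a Merge error: replace two regions with their enclosing box."""
    return (min(box_a[0], box_b[0]), min(box_a[1], box_b[1]),
            max(box_a[2], box_b[2]), max(box_a[3], box_b[3]))

def inject_missing(boxes, rng=random):
    """Simulate a Missing error: drop one region at random."""
    victim = rng.randrange(len(boxes))
    return [b for i, b in enumerate(boxes) if i != victim]
```

A benchmark built this way can pair each perturbed layout with its error label, which is what makes the three LED tasks (detecting that an error exists, naming its type at the document level, and localizing it at the element level) well defined.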