ELT-Bench-Verified: Benchmark Quality Issues Underestimate AI Agent Capabilities

arXiv cs.AI / 4/1/2026


Key Points

  • The paper re-examines ELT-Bench, the first benchmark for end-to-end ELT pipeline construction, arguing that earlier low agent success rates substantially underestimated real agent capability.
  • It attributes the underestimation to two main causes: the original evaluation used older LLMs (upgraded models substantially improve both extraction/loading and transformation), and benchmark quality problems incorrectly penalize otherwise correct agent outputs.
  • The authors introduce an Auditor-Corrector approach that uses LLM-driven root-cause analysis plus human validation (with Fleiss' kappa = 0.85) to systematically audit and fix benchmark failures.
  • They find that many transformation failures stem from benchmark-attributable issues such as rigid evaluation scripts, ambiguous specifications, and incorrect ground truth.
  • Based on these findings, they release ELT-Bench-Verified with refined evaluation logic and corrected ground truth, and show large improvements driven entirely by benchmark correction, suggesting systemic quality issues across data-engineering benchmarks like text-to-SQL.
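The human-validation step reports inter-annotator agreement as Fleiss' kappa = 0.85. As a refresher on what that number measures, here is a minimal sketch of the standard Fleiss' kappa computation (this is the textbook formula, not code from the paper; the ratings matrix shown is illustrative, not the authors' data):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a ratings matrix where counts[i][j] is the
    number of raters who assigned item i to category j.
    Assumes the same number of raters for every item."""
    n = len(counts)          # number of items audited
    r = sum(counts[0])       # raters per item
    k = len(counts[0])       # number of categories
    # Observed agreement: mean per-item pairwise agreement.
    P_bar = sum(
        (sum(c * c for c in row) - r) / (r * (r - 1)) for row in counts
    ) / n
    # Chance agreement from marginal category proportions.
    p = [sum(row[j] for row in counts) / (n * r) for j in range(k)]
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)

# Three annotators labeling items as (agent-error, benchmark-error):
ratings = [[3, 0], [0, 3], [2, 1], [3, 0], [1, 2]]
print(round(fleiss_kappa(ratings), 3))
```

A kappa of 0.85 corresponds to "almost perfect" agreement on the common Landis-Koch scale, which is what makes the benchmark-error attributions credible.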

Abstract

Constructing Extract-Load-Transform (ELT) pipelines is a labor-intensive data engineering task and a high-impact target for AI automation. On ELT-Bench, the first benchmark for end-to-end ELT pipeline construction, AI agents initially showed low success rates, suggesting they lacked practical utility. We revisit these results and identify two factors causing a substantial underestimation of agent capabilities. First, re-evaluating ELT-Bench with upgraded large language models reveals that the extraction and loading stage is largely solved, while transformation performance improves significantly. Second, we develop an Auditor-Corrector methodology that combines scalable LLM-driven root-cause analysis with rigorous human validation (inter-annotator agreement Fleiss' kappa = 0.85) to audit benchmark quality. Applying this to ELT-Bench uncovers that most failed transformation tasks contain benchmark-attributable errors -- including rigid evaluation scripts, ambiguous specifications, and incorrect ground truth -- that penalize correct agent outputs. Based on these findings, we construct ELT-Bench-Verified, a revised benchmark with refined evaluation logic and corrected ground truth. Re-evaluating on this version yields significant improvement attributable entirely to benchmark correction. Our results show that both rapid model improvement and benchmark quality issues contributed to underestimating agent capabilities. More broadly, our findings echo observations of pervasive annotation errors in text-to-SQL benchmarks, suggesting quality issues are systemic in data engineering evaluation. Systematic quality auditing should be standard practice for complex agentic tasks. We release ELT-Bench-Verified to provide a more reliable foundation for progress in AI-driven data engineering automation.
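One of the benchmark-attributable failure modes the abstract names is "rigid evaluation scripts": checkers that reject a correct transformation because, say, rows come back in a different order or a float differs in the last bit. A refined evaluation script compares results up to such harmless differences. The sketch below is a hypothetical illustration of that idea, not the actual ELT-Bench-Verified evaluator:

```python
def results_match(expected, actual, float_digits=6):
    """Compare two query result sets (lists of row tuples) while
    ignoring row order and small floating-point noise -- the kind of
    tolerant check a refined evaluation script might use in place of
    a rigid, order-sensitive exact comparison."""
    if len(expected) != len(actual):
        return False
    # Normalize floats so 0.30000000000000004 matches 0.3.
    def norm(row):
        return tuple(
            round(v, float_digits) if isinstance(v, float) else v
            for v in row
        )
    return sorted(map(norm, expected)) == sorted(map(norm, actual))

# Same rows, different order and tiny float drift: should still pass.
gold = [(1, "a", 0.3), (2, "b", 1.5)]
pred = [(2, "b", 1.5), (1, "a", 0.1 + 0.2)]
print(results_match(gold, pred))  # → True
```

A rigid script would fail `pred` here even though the agent's output is semantically identical to the ground truth, which is exactly the class of false negative the Auditor-Corrector process is designed to surface.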