Large Language Models for Missing Data Imputation: Understanding Behavior, Hallucination Effects, and Control Mechanisms

arXiv cs.AI / 3/25/2026


Key Points

  • The paper presents a large-scale benchmarking study of five LLMs for tabular missing-data imputation, using zero-shot prompt engineering and comparing them against six state-of-the-art traditional imputation baselines.
  • Evaluations span 29 datasets (including nine synthetic sets) across missingness mechanisms MCAR, MAR, and MNAR and missing rates up to 20%, enabling more systematic cross-method comparisons than prior work.
  • Results show LLMs—especially Gemini 3.0 Flash and Claude 4.5 Sonnet—typically outperform traditional methods on real-world open-source datasets.
  • The study finds the LLM advantage is likely linked to pretraining-induced familiarity with domain-specific patterns, while traditional methods like MICE outperform LLMs on synthetic datasets, indicating LLMs rely more on semantic context than statistical reconstruction.
  • A key practical trade-off is identified: LLM-based imputation achieves higher quality but requires substantially greater computational time and monetary cost than classical approaches.
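The three missingness mechanisms the benchmark covers (MCAR, MAR, MNAR) differ in what the probability of a missing cell may depend on. A minimal sketch of how such masks are typically simulated on numeric data (the paper's exact masking procedure is not reproduced here; the logistic dependence and 20% target rate below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))  # two numeric columns


def mcar_mask(X, rate, rng):
    """MCAR: each cell is missing independently with the same probability."""
    return rng.random(X.shape) < rate


def mar_mask(X, rate, rng):
    """MAR: missingness in column 1 depends only on the fully observed column 0."""
    mask = np.zeros(X.shape, dtype=bool)
    # logistic in column 0; averages to ~rate over a standard normal
    p = 2.0 * rate / (1.0 + np.exp(-X[:, 0]))
    mask[:, 1] = rng.random(len(X)) < p
    return mask


def mnar_mask(X, rate, rng):
    """MNAR: missingness in column 1 depends on column 1's own (unobserved) value."""
    mask = np.zeros(X.shape, dtype=bool)
    p = 2.0 * rate / (1.0 + np.exp(-X[:, 1]))
    mask[:, 1] = rng.random(len(X)) < p
    return mask


m = mcar_mask(X, 0.2, rng)
X_missing = np.where(m, np.nan, X)  # apply the mask as NaNs
```

Under MAR, an imputer can in principle recover the missing values from observed columns; under MNAR it cannot without extra assumptions, which is why benchmarks report the mechanisms separately.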

Abstract

Data imputation is a cornerstone technique for handling missing values in real-world datasets, which are frequently incomplete. Despite recent progress, prior studies on Large Language Model-based imputation remain limited by scalability challenges, restricted cross-model comparisons, and evaluations conducted on small or domain-specific datasets. Furthermore, heterogeneous experimental protocols and inconsistent treatment of missingness mechanisms (MCAR, MAR, and MNAR) hinder systematic benchmarking across methods. This work investigates the robustness of Large Language Models for missing data imputation in tabular datasets using a zero-shot prompt engineering approach. To this end, we present a comprehensive benchmarking study comparing five widely used LLMs against six state-of-the-art imputation baselines. The experimental design evaluates these methods across 29 datasets (including nine synthetic datasets) under MCAR, MAR, and MNAR mechanisms, with missing rates of up to 20%. The results demonstrate that leading LLMs, particularly Gemini 3.0 Flash and Claude 4.5 Sonnet, consistently achieve superior performance on real-world open-source datasets compared to traditional methods. However, this advantage appears to be closely tied to the models' prior exposure to domain-specific patterns learned during pre-training on internet-scale corpora. In contrast, on synthetic datasets, traditional methods such as MICE outperform LLMs, suggesting that LLM effectiveness is driven by semantic context rather than purely statistical reconstruction. Furthermore, we identify a clear trade-off: while LLMs excel in imputation quality, they incur significantly higher computational time and monetary costs. Overall, this study provides a large-scale comparative analysis, positioning LLMs as promising semantics-driven imputers for complex tabular data.
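Zero-shot imputation of the kind described above typically serializes each record as text and asks the model to fill the masked cells. The paper's actual prompt template is not shown here; the function name, column names, and wording below are illustrative assumptions, and the resulting string would be sent to whichever LLM API is in use:

```python
# Hypothetical sketch of zero-shot prompt construction for tabular imputation.
# All identifiers and the prompt wording are assumptions, not the paper's template.

def build_imputation_prompt(columns, row, missing_marker="<MISSING>"):
    """Serialize one table row and ask the model to fill the masked cells."""
    cells = ", ".join(f"{c} = {row.get(c, missing_marker)}" for c in columns)
    return (
        "You are given one record from a tabular dataset.\n"
        f"Record: {cells}\n"
        f"Fill in each value marked {missing_marker}. "
        "Answer with only the imputed value(s), one per line."
    )


columns = ["age", "sex", "blood_pressure"]
row = {"age": 54, "sex": "M"}  # blood_pressure is missing
prompt = build_imputation_prompt(columns, row)
print(prompt)
```

Serializing column names into the prompt is what gives the model access to semantic context (e.g. that `blood_pressure` correlates with `age`), which is consistent with the finding that LLMs lose their edge on synthetic data whose columns carry no real-world meaning.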
