Can Large Language Models Detect Methodological Flaws? Evidence from Gesture Recognition for UAV-Based Rescue Operation Based on Deep Learning

arXiv cs.AI / 4/17/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper tests whether large language models can independently detect methodological flaws—especially data leakage—in published machine learning studies.
  • Using a gesture-recognition paper as a case study, the authors show the reported near-perfect accuracy is consistent with subject-level data leakage caused by non-independent training and test splits.
  • Six state-of-the-art LLMs, each given the original paper with no prior context and an identical prompt, all consistently flag the evaluation as flawed.
  • The LLMs’ explanations point to evidence such as overlapping learning curves, a minimal generalization gap, and unusually strong classification performance.
  • The authors conclude that LLMs may serve as complementary tools for scientific auditing and improving reproducibility, though the approach is not definitive on its own.
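
To make the core flaw concrete, the sketch below simulates subject-level leakage under stated assumptions: each synthetic "subject" contributes many near-identical gesture samples, so a record-level random split lets a classifier match test samples to training samples from the same person, inflating accuracy, while a subject-level split (all of a subject's samples on one side) reveals the true generalization gap. All names and the data-generating process are illustrative, not taken from the analyzed paper; the classifier is a minimal 1-nearest-neighbour on scalar features.

```python
import random

random.seed(0)

# Illustrative synthetic data (not the paper's dataset): each subject has a
# personal "style" offset, so samples from the same subject cluster tightly.
NUM_SUBJECTS, SAMPLES_PER_SUBJECT, NUM_CLASSES = 8, 20, 4

data = []  # list of (feature, label, subject_id)
for subj in range(NUM_SUBJECTS):
    style = random.uniform(-5, 5)  # subject-specific bias
    for _ in range(SAMPLES_PER_SUBJECT):
        label = random.randrange(NUM_CLASSES)
        # class signal + subject style + small noise
        feat = label + style + random.gauss(0, 0.1)
        data.append((feat, label, subj))

def knn_accuracy(train, test):
    """1-nearest-neighbour accuracy on scalar features."""
    correct = 0
    for feat, label, _ in test:
        nearest = min(train, key=lambda t: abs(t[0] - feat))
        correct += (nearest[1] == label)
    return correct / len(test)

# Record-level (random) split: samples from the same subject land on both
# sides, so test samples find a near-duplicate same-subject neighbour.
random.shuffle(data)
cut = len(data) // 2
leaky_acc = knn_accuracy(data[:cut], data[cut:])

# Subject-level split: every subject's samples go to exactly one side.
train = [d for d in data if d[2] < NUM_SUBJECTS // 2]
test = [d for d in data if d[2] >= NUM_SUBJECTS // 2]
clean_acc = knn_accuracy(train, test)

print(f"record-level split accuracy:  {leaky_acc:.2f}")  # inflated by leakage
print(f"subject-level split accuracy: {clean_acc:.2f}")  # closer to reality
```

The same subject-wise partitioning is what grouped cross-validation utilities (e.g. scikit-learn's `GroupKFold`) enforce; the near-perfect accuracy the paper reports is the record-level pattern above.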

Abstract

Reliable evaluation is essential in machine learning research, yet methodological flaws, particularly data leakage, continue to undermine the validity of reported results. In this work, we investigate whether large language models (LLMs) can act as independent analytical agents capable of identifying such issues in published studies. As a case study, we analyze a gesture-recognition paper reporting near-perfect accuracy on a small, human-centered dataset. We first show that the evaluation protocol is consistent with subject-level data leakage due to non-independent training and test splits. We then assess whether this flaw can be detected independently by six state-of-the-art LLMs, each analyzing the original paper without prior context using an identical prompt. All models consistently identify the evaluation as flawed and attribute the reported performance to non-independent data partitioning, supported by indicators such as overlapping learning curves, minimal generalization gap, and near-perfect classification results. These findings suggest that LLMs can detect common methodological issues based solely on published artifacts. While not definitive, their consistent agreement highlights their potential as complementary tools for improving reproducibility and supporting scientific auditing.