TAB-AUDIT: Detecting AI-Fabricated Scientific Tables via Multi-View Likelihood Mismatch
arXiv cs.CL · March 23, 2026
Key Points
- The paper TAB-AUDIT investigates detection of AI-generated fabricated scientific tables in empirical NLP papers and introduces the FabTab benchmark with 1,173 AI-generated and 1,215 human-authored papers.
- It identifies discriminative features, most notably the within-table mismatch: the perplexity gap between a table's skeleton (headers and layout) and its numerical content, which separates fabricated tables from genuine ones.
- A RandomForest model using these features significantly outperforms prior methods, achieving 0.987 AUROC in-domain and 0.883 AUROC out-of-domain.
- The findings position experimental tables as a critical forensic signal for detecting AI-generated scientific fraud and establish a new benchmark for future research.
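The within-table mismatch feature described above can be illustrated with a minimal sketch. The split into skeleton and numeric tokens and the use of per-token log-probabilities follow the summary; the function names and the toy log-probability values are hypothetical, and the paper presumably obtains real scores from a language model rather than hand-supplied lists:

```python
import math

def perplexity(logprobs):
    """Perplexity from per-token natural-log probabilities."""
    return math.exp(-sum(logprobs) / len(logprobs))

def within_table_mismatch(skeleton_logprobs, numeric_logprobs):
    """Gap between the perplexity of a table's numeric cells and that of
    its skeleton (headers, row labels, layout tokens). The intuition is
    that in a genuine table the numbers are far less predictable than
    the boilerplate structure, while fabricated numbers tend to look
    more 'fluent' to the scoring model, shrinking the gap."""
    return perplexity(numeric_logprobs) - perplexity(skeleton_logprobs)

# Toy scores: skeleton tokens are highly predictable, numbers less so.
skeleton = [-0.1, -0.2, -0.15, -0.1]
numbers = [-2.3, -1.9, -2.7, -2.1]
print(within_table_mismatch(skeleton, numbers))
```

In the paper's pipeline this mismatch is one of several features fed to a RandomForest classifier; here it is shown in isolation to make the skeleton-versus-content comparison concrete.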