LLMSniffer: Detecting LLM-Generated Code via GraphCodeBERT and Supervised Contrastive Learning

arXiv cs.CL / April 20, 2026


Key Points

  • The paper introduces LLMSniffer, a framework for detecting LLM-generated code by fine-tuning GraphCodeBERT with a two-stage supervised contrastive learning approach.
  • It uses comment-removal preprocessing and an MLP classifier to improve the quality of the learned representations for code provenance detection.
  • On two benchmark datasets, LLMSniffer improves over prior baselines: accuracy rises from 70% to 78% on GPTSniffer (F1: 68% to 78%) and from 91% to 94.65% on Whodunit (F1: 91% to 94.64%).
  • The authors report t-SNE visual evidence that contrastive fine-tuning produces better-separated, more compact embeddings, supporting the effectiveness of their training strategy.
  • They release model checkpoints, datasets, code, and a live interactive demo to enable further research and replication.
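The comment-removal preprocessing step can be sketched as follows. The paper's exact implementation is not specified here, so this is a minimal regex-based sketch assuming C-style (`//`, `/* */`) and hash (`#`) comments; a production version would use a real lexer to avoid stripping comment-like text inside string literals.

```python
import re

def strip_comments(source: str) -> str:
    """Remove C-style and hash comments, then drop blank lines.

    A rough sketch of comment-removal preprocessing: string literals
    containing comment-like text are NOT handled correctly here.
    """
    # Remove /* ... */ block comments (non-greedy, across lines).
    source = re.sub(r"/\*.*?\*/", "", source, flags=re.DOTALL)
    # Remove // and # line comments.
    source = re.sub(r"//[^\n]*", "", source)
    source = re.sub(r"#[^\n]*", "", source)
    # Collapse lines left empty by comment removal.
    return "\n".join(line for line in source.splitlines() if line.strip())
```

The idea behind this step is that comments carry strong stylistic signals (or are trivially removable by an adversary), so training on comment-free code forces the model to rely on structural properties of the code itself.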

Abstract

The rapid proliferation of Large Language Models (LLMs) in software development has made distinguishing AI-generated code from human-written code a critical challenge with implications for academic integrity, code quality assurance, and software security. We present LLMSniffer, a detection framework that fine-tunes GraphCodeBERT using a two-stage supervised contrastive learning pipeline augmented with comment-removal preprocessing and an MLP classifier. Evaluated on two benchmark datasets - GPTSniffer and Whodunit - LLMSniffer achieves substantial improvements over prior baselines: accuracy increases from 70% to 78% on GPTSniffer (F1: 68% to 78%) and from 91% to 94.65% on Whodunit (F1: 91% to 94.64%). t-SNE visualizations confirm that contrastive fine-tuning yields well-separated, compact embeddings. We release our model checkpoints, datasets, code, and a live interactive demo to facilitate further research.
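The supervised contrastive objective used in this kind of fine-tuning is the SupCon loss: each sample's embedding is pulled toward embeddings with the same label (here, human vs. LLM) and pushed away from the rest. The sketch below is a dependency-free illustration of that loss on toy embeddings, not the paper's implementation; the temperature value and the batch-wise formulation are assumptions.

```python
import math

def supcon_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive (SupCon) loss over a batch of embeddings.

    For each anchor i, averages -log p(positive | anchor) over all
    same-label samples p, where similarities are temperature-scaled
    dot products of L2-normalised vectors.
    """
    def normalise(v):
        n = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / n for x in v]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    z = [normalise(v) for v in embeddings]
    n = len(z)
    total, anchors = 0.0, 0
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue  # anchors with no positive pair contribute nothing
        denom = sum(math.exp(dot(z[i], z[a]) / tau) for a in range(n) if a != i)
        total += -sum(
            math.log(math.exp(dot(z[i], z[p]) / tau) / denom)
            for p in positives
        ) / len(positives)
        anchors += 1
    return total / anchors
```

Minimizing this loss drives embeddings of human-written and LLM-generated code into separate, compact clusters, which is exactly the structure the paper's t-SNE visualizations report; a lightweight MLP classifier on top of such embeddings then has an easy decision boundary.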