LLMSniffer: Detecting LLM-Generated Code via GraphCodeBERT and Supervised Contrastive Learning
arXiv cs.CL / 4/20/2026
Key Points
- The paper introduces LLMSniffer, a framework for detecting LLM-generated code by fine-tuning GraphCodeBERT with a two-stage supervised contrastive learning approach.
- It pairs comment-removal preprocessing with an MLP classification head on top of the contrastively learned representations for code provenance detection.
- On the GPTSniffer benchmark, LLMSniffer improves accuracy from 70% to 78% and F1 from 68% to 78%; on Whodunit, accuracy rises from 91% to 94.65% and F1 from 91% to 94.64%.
- The authors report t-SNE visual evidence that contrastive fine-tuning produces better-separated, more compact embeddings, supporting the effectiveness of their training strategy.
- They release model checkpoints, datasets, code, and a live interactive demo to enable further research and replication.
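To make the two ingredients in the summary concrete, here is a minimal sketch of comment-removal preprocessing and a supervised contrastive objective (Khosla et al.-style SupCon). The function names, the C/Java-style comment rules, and the NumPy implementation are illustrative assumptions, not the paper's released code:

```python
import re
import numpy as np

def strip_comments(code: str) -> str:
    """Remove // line comments and /* */ block comments (C/Java-style).

    Illustrative preprocessing; the paper's exact rules may differ.
    """
    code = re.sub(r"/\*.*?\*/", "", code, flags=re.DOTALL)  # block comments
    code = re.sub(r"//[^\n]*", "", code)                    # line comments
    return "\n".join(l.rstrip() for l in code.splitlines() if l.strip())

def supcon_loss(embeddings: np.ndarray, labels: np.ndarray,
                temperature: float = 0.07) -> float:
    """Supervised contrastive loss over L2-normalized embeddings.

    Pulls same-label samples (e.g. human- vs. LLM-written code)
    together in embedding space and pushes the classes apart.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)                    # exclude self-pairs
    pos = (labels[:, None] == labels[None, :]) & not_self
    # log-softmax of each anchor's similarity over all other samples
    sim_max = sim.max(axis=1, keepdims=True)             # numerical stability
    exp_sim = np.exp(sim - sim_max) * not_self
    log_prob = sim - sim_max - np.log(exp_sim.sum(axis=1, keepdims=True))
    # mean log-probability of positives per anchor, negated and averaged
    per_anchor = (log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return float(-per_anchor.mean())
```

In a two-stage setup of the kind the summary describes, a loss like this would first shape the encoder's embedding space, and an MLP head would then be trained on those embeddings for the binary human-vs-LLM decision.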