Fine-Tuning Pre-Trained Code Models for AI-Generated Code Detection
arXiv cs.CL / 5/5/2026
Key Points
- For Subtask-B, the system applies sandwich token packing, class-balanced loss, and multi-seed ensembling with test-time augmentation; overall, it achieves macro-F1 scores of 0.737 on Subtask-A and 0.422 on Subtask-B.
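The paper does not spell out its exact class-balanced loss, but a common formulation weights cross-entropy by the "effective number of samples" per class, so rare classes contribute more to the loss. The sketch below is a minimal, hypothetical illustration of that idea (the `beta` value and normalization are assumptions, not the authors' settings):

```python
import numpy as np

def class_balanced_weights(class_counts, beta=0.999):
    """Per-class weights from the effective-number heuristic:
    w_c proportional to (1 - beta) / (1 - beta**n_c).
    Rarer classes receive larger weights."""
    counts = np.asarray(class_counts, dtype=np.float64)
    effective_num = 1.0 - np.power(beta, counts)
    weights = (1.0 - beta) / effective_num
    # Normalize so the weights average to 1 across classes.
    return weights * len(counts) / weights.sum()

def weighted_cross_entropy(logits, labels, weights):
    """Class-weighted cross-entropy averaged over a batch."""
    logits = np.asarray(logits, dtype=np.float64)
    labels = np.asarray(labels)
    # Numerically stable log-softmax.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    per_example = -log_probs[np.arange(len(labels)), labels]
    w = np.asarray(weights)[labels]
    return float((w * per_example).sum() / w.sum())
```

For example, with class counts of 100 and 10, the minority class ends up with roughly a 10x larger weight, counteracting the imbalance during fine-tuning.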