[D] ICML rejects papers of reviewers who used LLMs despite agreeing not to
Reddit r/MachineLearning / 3/18/2026
According to multiple posts on Twitter/X, ICML has rejected all papers of reviewers who used LLMs for their reviews, even though those reviewers had chosen the review track with no LLM use. What are your thoughts on this? Is it too harsh, considering the limited precision of AI-detection tools? This is the first time I have seen a major conference take harsh action against LLM-generated reviews.
📰 News · Ideas & Deep Analysis · Industry & Market Moves
Key Points
- ICML reportedly rejected all papers reviewed by reviewers who used LLMs, even if those reviewers had selected a no-LLM track.
- The decision highlights concerns about the reliability of AI-detection tools and the fairness of penalizing reviewers for LLM usage.
- It appears to be the first instance of a major conference taking such a punitive stance against LLM-generated reviews.
- The information comes from social media posts and screenshots, so official confirmation from ICML may be pending.
- The move could influence future review practices and how researchers engage with AI assistance in the review process.
Related Articles

Manus turns AI agents into a desktop app, enabling direct operation of files and applications on a local PC
Ledge.ai

The programming passion is melting
Dev.to

Building “The Sentinel” – AI Parametric Insurance at Guidewire DEVTrails
Dev.to

Maximize Developer Revenue with Monetzly's Innovative API for AI Conversations
Dev.to
Co-Activation Pattern Detection for Prompt Injection: A Mechanistic Interpretability Approach Using Sparse Autoencoders
Reddit r/LocalLLaMA