According to multiple posts on Twitter/X, ICML has rejected all papers of reviewers who used LLMs for their reviews, even though those reviewers chose the review track with no LLM use. What are your thoughts on this? Too harsh, considering the limited precision of AI-detection tools? This is the first time I have seen a major conference take harsh action against LLM-generated reviews.
[D] ICML rejects papers of reviewers who used LLMs despite agreeing not to
Reddit r/MachineLearning / 3/18/2026
Key Points
- ICML reportedly rejected all papers reviewed by reviewers who used LLMs, even if those reviewers had selected a no-LLM track.
- The decision highlights concerns about the reliability of AI-detection tools and the fairness of penalizing reviewers for LLM usage.
- It appears to be the first instance of a major conference taking such a punitive stance against LLM-generated reviews.
- The information comes from social media posts and screenshots, so official confirmation from ICML may be pending.
- The move could influence future review practices and how researchers engage with AI assistance in the review process.