ICML 2026 - Heavy score variance among various batches? [D]

Reddit r/MachineLearning / 4/18/2026


Key Points

  • The post discusses reports of large variation in ICML 2026 paper scores across different reviewer batches, such as some batches having very few papers above 3.5 while others report averages around 3.75.
  • It questions whether these differences come from factors like reviewer strictness or domain differences between batches.
  • The author asks whether ICML has mechanisms to account for or correct for cross-batch scoring variance in the evaluation process.
  • Overall, the discussion frames the scoring inconsistency as a problem of fairness/consistency in peer review rather than presenting new ICML policy changes.

I've seen some people say that in their batch very few papers have a score above 3.5, but other reviewers say that most papers in their batch average around 3.75.

Why is there so much variation? Is it because of differences in domain? Did one batch of papers just get harsher reviewers than the others? Does ICML account for this?
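The post doesn't say whether ICML applies any correction, but a common way to account for reviewer-batch strictness is to z-score each batch so papers are compared relative to their own batch rather than on raw scores. Here is a minimal sketch of that idea (the function name and example data are hypothetical, not anything from ICML):

```python
import statistics

def normalize_batch_scores(batches):
    """Z-score each reviewer batch so scores are comparable across batches.

    `batches` maps a batch id to its list of raw scores. A harsh batch
    (low mean) and a lenient batch (high mean) are both rescaled to
    mean 0 and unit standard deviation, so a paper is judged relative
    to the other papers its reviewers saw.
    """
    normalized = {}
    for batch_id, scores in batches.items():
        mean = statistics.mean(scores)
        stdev = statistics.pstdev(scores) or 1.0  # guard against a constant batch
        normalized[batch_id] = [(s - mean) / stdev for s in scores]
    return normalized

# Example: a harsh batch and a lenient batch with the same spread.
out = normalize_batch_scores({
    "harsh": [2.5, 3.0, 3.5],
    "lenient": [3.5, 4.0, 4.5],
})
# After normalization, the top paper in each batch receives the same
# relative score, even though their raw scores differ by a full point.
```

This only corrects for uniform strictness differences; if one batch genuinely contains stronger papers (e.g., a domain effect), z-scoring would unfairly penalize it, which is one reason venues are cautious about applying such corrections automatically.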

submitted by /u/Specialist-Manager67