How strongly do you trust LLM judges for ML papers? [D]

Reddit r/MachineLearning / 4/30/2026


Key Points

  • The post asks readers how much LLM judges (large language models used as evaluators) should be trusted to assess ML papers.
  • The discussion highlights a contrast between commenters who focus on methodological gaps like missing ablations and those who provide more substantive critiques.
  • The author is seeking perspectives on whether LLM-based judgment aligns with human expectations of rigor and relevance in ML research evaluation.

I'm curious about your thoughts on this. From what I've seen, most of the comments nitpick about "missing ablations," while only some offer genuinely relevant critiques.

submitted by /u/BetterbeBattery
[link] [comments]