AI Navigate

[D] Can we stop glazing big labs and universities?

Reddit r/MachineLearning / 3/12/2026

💬 Opinion · Ideas & Deep Analysis

Key Points

  • The post argues that research credit should be judged on merit rather than the author's affiliation, noting that strong researchers can come from non-elite universities or internships at large labs.
  • It emphasizes that large research organizations are not monolithic and that discoveries are not owned by any single institution based on where an author worked during their internship or affiliation.
  • It compares ML research culture to biology, highlighting openness to advances from many teams but warning about a feedback loop that can overemphasize outputs from big orgs.
  • It calls for fair attribution to prevent stifling innovation and ensure major advances from less-connected teams receive proper recognition.

I routinely see posts about a paper with 15+ authors, the middlemost of whom was a student intern at Google, headlined as "Google invents revolutionary new architecture..." The same goes for papers where some subset of the authors, even non-leads, are at Stanford or MIT.

  1. Large research orgs aren't monoliths. There are good and weak researchers everywhere, even at Stanford. Believe it or not, a postdoc at a non-elite university might well be a stronger and more influential researcher than a first-year graduate student at Stanford.

  2. It's a good idea to judge research on its own merit. Arguably one of the stronger aspects of the ML research culture is that advances can come from anyone, whereas in fields like biology most researchers and institutions are completely shut out from publishing in Nature, etc.

  3. Typically the first author did the majority of the work, and the last author supervised. Just because author N//2 did an internship somewhere elite doesn't mean that their org "owns" the discovery.

We all understand the benefits and strengths of the large research orgs, but it's important to assign credit fairly. Otherwise, we end up in a feedback loop where every crummy paper from a large org gets undue attention, and we miss out on major advances from less well-connected teams. This is roughly the corner that biology has backed itself into, and I'd hate to see the same happen in ML research.

submitted by /u/kdfn