I’m a PhD student working in AI/computer vision, and I’ve hit a frustrating wall with a project.
My supervisor asked me to improve the accuracy of a published paper. My first step has been to faithfully reproduce their results before trying any modifications. The issue is I can’t even match their reported baseline. The paper reports ~77% accuracy, but after multiple runs and careful tuning, I’m consistently getting around 73%.
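For context, here's a rough sanity check I did to convince myself the gap isn't just evaluation noise. The test-set size `n` below is hypothetical (the paper's actual split size matters, so plug in the real one), but the point is that a 4-point gap is many standard errors wide for any reasonably sized benchmark:

```python
import math

n = 10_000                     # hypothetical test-set size, not from the paper
p_paper, p_mine = 0.77, 0.73

# Binomial standard error of an accuracy estimate: sqrt(p * (1 - p) / n)
se = math.sqrt(p_paper * (1 - p_paper) / n)
gap_in_sigmas = (p_paper - p_mine) / se
print(f"SE ~ {se:.4f}, gap ~ {gap_in_sigmas:.1f} standard errors")
```

With n = 10,000 that works out to roughly 9-10 standard errors, so I'm treating the gap as systematic (some difference in setup) rather than run-to-run variance.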
I’ve double-checked everything I can: implementation details, preprocessing, hyperparameters (to the extent they’re described), and even small things like random seeds and evaluation protocols. I also emailed the paper’s authors to ask about details the paper doesn’t mention, but haven’t received a response.
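For the seed part specifically, this is roughly the helper I'm using to rule out RNG differences between runs (a sketch assuming a NumPy/PyTorch setup; `seed_everything` is my own name, not from the paper's code):

```python
import os
import random

import numpy as np


def seed_everything(seed: int = 42) -> None:
    """Fix every RNG source I know of, so repeated runs are comparable."""
    os.environ["PYTHONHASHSEED"] = str(seed)  # Python hash randomization
    random.seed(seed)                         # stdlib RNG
    np.random.seed(seed)                      # NumPy RNG
    try:
        import torch
        torch.manual_seed(seed)               # CPU RNG
        torch.cuda.manual_seed_all(seed)      # all GPU RNGs
        # Trade speed for determinism in cuDNN kernels
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass  # torch not installed; NumPy/stdlib seeding still applies


seed_everything(42)
first = np.random.rand(3)
seed_everything(42)
second = np.random.rand(3)
print(np.allclose(first, second))  # True: identical draws after reseeding
```

Even with all of this, I know data-loader worker ordering and some GPU ops can stay nondeterministic, so I don't expect bit-identical runs, just a stable mean.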
At this point, I’m unsure how to proceed. It’s hard to justify “improvements” when my baseline is already below theirs.
Has anyone here dealt with this kind of reproducibility gap? How did you handle it, especially when key details were missing or the authors were unresponsive? Any practical advice would be really appreciated.