Insights from the ICLR Peer Review and Rebuttal Process

Amir Hossein Kargaran, Nafiseh Nikeghbal, Jing Yang, Nedjma Ousidhoum

2025-11-24

Summary

This paper investigates how the peer review process works at ICLR, a major machine learning conference, by analyzing large-scale data from the 2024 and 2025 submission cycles.

What's the problem?

As more and more research papers are submitted to conferences like ICLR, it's becoming harder to keep the review process fair and efficient and to make sure the best papers get published. We also don't fully understand how reviewer scores change over time, how authors interact with reviewers, or how much influence reviewers have on each other.

What's the solution?

The researchers analyzed the scores papers received before and after authors had a chance to respond to reviews (the rebuttal phase). They also looked at how quickly reviews were submitted, how authors engaged with reviewers, and how closely reviewers' opinions aligned. In addition, they used large language models (LLMs) to categorize the text of reviews and rebuttals and surface common themes. Together, this let them identify which factors most strongly predicted changes in a paper's score; a small illustrative sketch of the before/after comparison follows below.
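
As a rough illustration (this is not the authors' code; the paper IDs, scores, and record layout below are all invented), the core before/after comparison could look something like this in Python:

    import statistics

    # Hypothetical records: (paper_id, mean score before rebuttal, mean score after).
    reviews = [
        ("paper_a", 4.50, 5.25),
        ("paper_b", 6.00, 6.00),
        ("paper_c", 3.25, 3.00),
        ("paper_d", 5.50, 6.50),
        ("paper_e", 7.00, 7.00),
    ]

    before = [b for _, b, _ in reviews]
    deltas = [a - b for _, b, a in reviews]

    # Pearson correlation between the initial score and the score change
    # (statistics.correlation requires Python 3.10+).
    r = statistics.correlation(before, deltas)

    for (pid, b, a), d in zip(reviews, deltas):
        print(f"{pid}: {b:.2f} -> {a:.2f} (delta {d:+.2f})")
    print(f"corr(initial score, delta) = {r:.3f}")

On real data, a correlation like this is one simple way the paper's headline finding could show up: if initial scores strongly predict score changes, borderline papers would tend to move more during the rebuttal than papers that started out very high or very low.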

Why it matters?

This research provides valuable insights into how to improve the peer review system. It helps authors understand how to write effective rebuttals to improve their paper's chances, and it gives conference organizers ideas for making the review process fairer and more efficient, ultimately leading to better research being published.

Abstract

Peer review is a cornerstone of scientific publishing, including at premier machine learning conferences such as ICLR. As submission volumes increase, understanding the nature and dynamics of the review process is crucial for improving its efficiency, effectiveness, and the quality of published papers. We present a large-scale analysis of the ICLR 2024 and 2025 peer review processes, focusing on before- and after-rebuttal scores and reviewer-author interactions. We examine review scores, author-reviewer engagement, temporal patterns in review submissions, and co-reviewer influence effects. Combining quantitative analyses with LLM-based categorization of review texts and rebuttal discussions, we identify common strengths and weaknesses for each rating group, as well as trends in rebuttal strategies that are most strongly associated with score changes. Our findings show that initial scores and the ratings of co-reviewers are the strongest predictors of score changes during the rebuttal, pointing to a degree of reviewer influence. Rebuttals play a valuable role in improving outcomes for borderline papers, where thoughtful author responses can meaningfully shift reviewer perspectives. More broadly, our study offers evidence-based insights to improve the peer review process, guiding authors on effective rebuttal strategies and helping the community design fairer and more efficient review processes. Our code and score changes data are available at https://github.com/papercopilot/iclr-insights.