
Automatic Peer Review Evaluation

Sadallah, Abdelrahman Atef Mohamed Ali
Department
Natural Language Processing
Embargo End Date
30/05/2025
Type
Thesis
Date
2025
Language
English
Abstract
Peer review remains a cornerstone of scientific research, providing critical quality control and feedback that guide the dissemination of scholarly contributions. Despite its foundational role, the peer review process is increasingly strained by a growing imbalance between the volume of manuscript submissions and the availability of qualified reviewers. This imbalance often leads to rushed, inconsistent, or low-quality reviews, compromising their utility for both authors and meta-reviewers. In this study, we investigate what makes peer reviews most beneficial to authors and explore how automated systems can support the review process by evaluating review quality. We first define the review feedback evaluation task and identify a set of key aspects that contribute to the utility of reviews, such as actionability, specificity, and verifiability. To enable research on this task, we construct a high-quality hybrid dataset that includes both manually annotated reviews and a large-scale set of reviews with synthetic labels generated by large language models (LLMs). This dataset, tailored specifically to assess review quality within the AI research domain, fills a critical gap in the resources available for automated peer-review evaluation. Leveraging this dataset, we fine-tune several smaller, open-weight language models to evaluate reviews based on the identified aspects. This approach emphasizes ethical AI principles by promoting the use of transparent, open-source models while demonstrating that fine-tuning smaller models can achieve performance comparable to or exceeding that of large, closed-source LLMs. We release the datasets and the codebase for our experiments: https://github.com/bodasadallah/review_rewrite
Citation
Abdelrahman Atef Mohamed Ali Sadallah, “Automatic Peer Review Evaluation,” Master of Science thesis, Natural Language Processing, MBZUAI, 2025.
Keywords
Peer Review, Evaluation, Resources, Automatic Evaluation, Peer Review Evaluation, Natural Language Application for Scientific Domain