RbtAct: Rebuttal as Supervision for Actionable Review Feedback Generation

arXiv cs.CL / 3/11/2026

Ideas & Deep Analysis · Models & Research

Key Points

  • RbtAct is a novel approach that improves the actionability of AI-generated peer review feedback by using reviewer rebuttals as implicit supervision to learn which comments lead to concrete revisions.
  • The method introduces a new task called perspective-conditioned segment-level review feedback generation, requiring focused comments tailored to specific perspectives like experiments or writing.
  • A large dataset, RMR-75K, was created linking review segments to rebuttal segments, annotated with perspective labels and impact categories to represent author uptake.
  • The approach trains a Llama-3.1-8B-Instruct model using supervised fine-tuning followed by preference optimization based on rebuttal pairs, resulting in feedback that is more specific, actionable, and grounded.
  • Experiments involving human experts and LLM-based judges confirm that RbtAct outperforms strong baselines in generating useful and relevant review feedback.
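The rebuttal-as-supervision idea in the points above can be sketched as a preference-pair builder: a review segment whose rebuttal shows high author uptake (e.g. a concrete revision) is preferred over one on the same paper and perspective that was merely defended. The field names, impact labels, and their ordering below are illustrative assumptions, not the paper's actual RMR-75K schema.

```python
# Hypothetical sketch: turn review segments annotated with rebuttal-derived
# impact categories into (chosen, rejected) preference pairs.
# The impact ordering and field names are assumptions for illustration.

IMPACT_RANK = {"revised": 2, "planned": 1, "defended": 0}  # higher = more uptake

def build_preference_pairs(segments):
    """Pair review segments on the same paper and perspective so that the
    segment with higher author uptake becomes the 'chosen' response."""
    pairs = []
    # Group segments by (paper, perspective) so each pair shares one prompt.
    by_key = {}
    for seg in segments:
        by_key.setdefault((seg["paper_id"], seg["perspective"]), []).append(seg)
    for (paper_id, perspective), group in by_key.items():
        for a in group:
            for b in group:
                if IMPACT_RANK[a["impact"]] > IMPACT_RANK[b["impact"]]:
                    pairs.append({
                        "prompt": f"Paper {paper_id} | perspective: {perspective}",
                        "chosen": a["comment"],
                        "rejected": b["comment"],
                    })
    return pairs
```

Pairs in this prompt/chosen/rejected form are the standard input for preference-optimization trainers, which is presumably how the rebuttal-derived pairs feed the second training stage.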

Computer Science > Computation and Language

arXiv:2603.09723 (cs)
[Submitted on 10 Mar 2026]

Title: RbtAct: Rebuttal as Supervision for Actionable Review Feedback Generation

Abstract: Large language models (LLMs) are increasingly used across the scientific workflow, including to draft peer-review reports. However, many AI-generated reviews are superficial and insufficiently actionable, leaving authors without concrete, implementable guidance. We propose RbtAct, which targets actionable review feedback generation and places existing peer-review rebuttals at the center of learning. Rebuttals reveal which reviewer comments led to concrete revisions or specific plans, and which were merely defended against. Building on this insight, we use rebuttals as implicit supervision to directly optimize a feedback generator for actionability. To support this objective, we propose a new task, perspective-conditioned segment-level review feedback generation, in which the model must produce a single focused comment given the complete paper and a specified perspective such as experiments or writing. We also build RMR-75K, a large dataset that maps review segments to the rebuttal segments addressing them, with perspective labels and impact categories that rank author uptake. We then train Llama-3.1-8B-Instruct with supervised fine-tuning on review segments, followed by preference optimization on rebuttal-derived pairs. Experiments with human experts and LLM-as-a-judge show consistent gains in actionability and specificity over strong baselines while maintaining grounding and relevance.
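The task the abstract defines, producing a single focused comment from the full paper and a specified perspective, can be sketched as prompt assembly. The perspective list and template wording here are assumptions for illustration, not the paper's actual prompt.

```python
# Hypothetical sketch of perspective-conditioned, segment-level feedback
# prompting. The perspective set and template are assumptions, not the
# paper's actual prompt format.

PERSPECTIVES = {"experiments", "writing", "novelty", "clarity"}

def build_prompt(paper_text: str, perspective: str) -> str:
    """Assemble a prompt asking for exactly one focused review comment."""
    if perspective not in PERSPECTIVES:
        raise ValueError(f"unknown perspective: {perspective}")
    return (
        "You are reviewing the following paper.\n\n"
        f"{paper_text}\n\n"
        f"Write exactly one focused, actionable review comment about the "
        f"paper's {perspective}. Be specific about what the authors should change."
    )
```

Conditioning on one perspective per generation is what keeps each output a single, focused segment rather than a full multi-topic review.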
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.09723 [cs.CL]
  (or arXiv:2603.09723v1 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2603.09723

Submission history

From: Sihong Wu
[v1] Tue, 10 Mar 2026 14:30:55 UTC (2,541 KB)