REM-CTX: Automated Peer Review via Reinforcement Learning with Auxiliary Context

arXiv cs.AI / 4/2/2026


Key Points

  • REM-CTX is a reinforcement-learning-based automated peer review system that goes beyond text-only inputs by incorporating auxiliary context such as correspondence-aware signals during review generation.
  • The method trains an 8B-parameter language model using Group Relative Policy Optimization (GRPO) and uses a multi-aspect quality reward plus two specialized correspondence rewards to improve alignment with auxiliary context.
  • Experiments across computer, biological, and physical sciences show REM-CTX achieves the best overall review quality among six baselines and outperforms systems using substantially larger commercial models.
  • Ablation and metric analyses indicate the two correspondence rewards are complementary, while training dynamics reveal the “criticism” dimension can be negatively correlated with other review metrics, suggesting that multi-dimension rewards may need to be grouped or structured carefully.
  • Overall, the paper suggests reinforcement learning with explicit context-alignment objectives can substantially improve both quality and contextual grounding of generated peer reviews.
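The GRPO training mentioned above centers on group-relative advantages: for each manuscript, several candidate reviews are sampled, scored by the reward function, and each score is normalized against its own group's mean and standard deviation, removing the need for a learned value critic. A minimal sketch of that normalization step (reward values and group size here are illustrative, not from the paper):

```python
import statistics

def group_relative_advantages(rewards):
    """Normalize each sampled response's reward against its own group,
    the core computation in Group Relative Policy Optimization (GRPO)."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0:
        # All samples scored identically: no learning signal for this group.
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]

# Example: four sampled reviews for one manuscript, scored by the reward model.
rewards = [0.2, 0.5, 0.8, 0.5]
advantages = group_relative_advantages(rewards)
```

Responses scored above the group mean receive positive advantages and are reinforced; those below are penalized, so the policy improves relative to its own current samples.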

Abstract

Most automated peer review systems rely on textual manuscript content alone, leaving visual elements such as figures and external scholarly signals underutilized. We introduce REM-CTX, a reinforcement-learning system that incorporates auxiliary context into the review generation process via correspondence-aware reward functions. REM-CTX trains an 8B-parameter language model with Group Relative Policy Optimization (GRPO) and combines a multi-aspect quality reward with two correspondence rewards that explicitly encourage alignment with auxiliary context. Experiments on manuscripts across Computer, Biological, and Physical Sciences show that REM-CTX achieves the highest overall review quality among six baselines, outperforming systems built on substantially larger commercial models and surpassing the next-best RL baseline on both quality and contextual grounding metrics. Ablation studies confirm that the two correspondence rewards are complementary: each selectively improves its targeted correspondence metric while preserving all quality dimensions, and the full model outperforms all partial variants. Analysis of training dynamics reveals that the criticism aspect is negatively correlated with other metrics during training, suggesting that future studies should group multi-dimension rewards for review generation.
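The abstract describes the reward as a multi-aspect quality term combined with two correspondence terms. The paper does not spell out the aggregation, so the following sketch simply averages hypothetical quality aspects and adds a weighted sum of the two correspondence scores; all aspect names, scores, and the weight are assumptions for illustration:

```python
def combined_reward(quality_aspects, corr_a, corr_b, w_corr=0.5):
    """Combine per-aspect quality scores with two correspondence rewards.

    quality_aspects: dict mapping an aspect name (e.g. "criticism") to a
    score in [0, 1]. corr_a and corr_b stand in for the paper's two
    correspondence rewards; w_corr is an assumed mixing weight.
    """
    quality = sum(quality_aspects.values()) / len(quality_aspects)
    correspondence = corr_a + corr_b
    return quality + w_corr * correspondence

# Hypothetical scores for one generated review.
r = combined_reward(
    {"criticism": 0.6, "clarity": 0.8, "soundness": 0.7},
    corr_a=0.9,   # e.g. alignment with visual context
    corr_b=0.4,   # e.g. alignment with external scholarly signals
)
```

One reason the paper's closing observation matters for a design like this: if the "criticism" aspect is negatively correlated with the others, a flat average can let gains on easy aspects mask regressions on criticism, which motivates grouping or separately weighting correlated reward dimensions.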