Debate as Reward: A Multi-Agent Reward System for Scientific Ideation via RL Post-Training

arXiv cs.AI / April 21, 2026


Key Points

  • The paper proposes an RL post-training framework for LLM-based multi-agent scientific ideation that aims to reduce hallucinations and computational inefficiency from earlier prompting or complex multi-agent approaches.
  • It introduces a multi-agent reward function that acts as a “judge,” separating methodological validation from implementation details and using strict binary rewards to resist reward hacking.
  • Because the reward signal is sparse, the authors optimize using an unbiased variant of Group Relative Policy Optimization to avoid artificial length bias.
  • Training is grounded in ICLR-320, a dataset of problem-solution pairs curated from ICLR 2024 proceedings, and experiments show strong gains over prior baselines on expert-evaluated novelty, feasibility, and effectiveness metrics.
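The strict binary, judge-style reward described above can be sketched as follows. This is a minimal illustration, not the paper's actual design: the judge names, their checks, and the all-judges-must-pass aggregation rule are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class JudgeAgent:
    """One reviewing agent with a single pass/fail criterion (illustrative)."""
    name: str
    check: Callable[[str], bool]  # returns True if the idea passes this check

def binary_reward(idea: str, judges: List[JudgeAgent]) -> int:
    """Strict binary reward: 1 only if every judge approves, else 0.

    An all-or-nothing rule like this leaves no partial credit to exploit,
    which is one way a reward can be made robust to reward hacking.
    """
    return int(all(j.check(idea) for j in judges))

# Toy usage with hypothetical judges for method validity and feasibility:
judges = [
    JudgeAgent("methodology", lambda s: "method" in s),
    JudgeAgent("feasibility", lambda s: len(s) > 5),
]
```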

Abstract

Large Language Models (LLMs) have demonstrated potential in automating scientific ideation, yet current approaches relying on iterative prompting or complex multi-agent architectures often suffer from hallucination or computational inefficiency. A critical bottleneck in applying Reinforcement Learning (RL) to this open-ended domain is reward hacking -- where models exploit imperfect evaluation proxies to maximize scores without producing genuine scientific innovation. To address these limitations, we propose an RL framework explicitly tailored for high-quality scientific idea generation. We introduce the first multi-agent reward function designed to serve as a judge, decoupling methodological validation from implementation details while providing strict binary rewards that are robust to reward hacking. To effectively optimize against this sparse signal, we utilize an unbiased variant of Group Relative Policy Optimization to mitigate artificial length bias. We ground our training in ICLR-320, a curated dataset of problem-solution pairs extracted from ICLR 2024 proceedings. Experiments demonstrate that our framework significantly outperforms state-of-the-art baselines across expert-evaluated metrics of novelty, feasibility, and effectiveness.
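To make the "unbiased variant" point concrete, here is a minimal sketch of group-relative advantage estimation. The key assumption (not confirmed by the abstract) is that the unbiased variant drops the group standard-deviation normalization that standard GRPO applies, one known source of scale and length bias when rewards are sparse and binary.

```python
import numpy as np

def group_relative_advantages(rewards, unbiased=True):
    """Per-sample advantages for a group of ideas sampled from one prompt.

    Standard GRPO centers each reward on the group mean and divides by the
    group std. This hypothetical 'unbiased' sketch keeps the centering but
    drops the std division, so a group of sparse 0/1 rewards is not
    implicitly rescaled by how rare success happens to be.
    """
    r = np.asarray(rewards, dtype=float)
    adv = r - r.mean()                    # center within the group
    if not unbiased:
        adv = adv / (r.std() + 1e-8)      # standard GRPO normalization
    return adv
```

With binary rewards such as `[1, 0, 1, 0]`, the unbiased advantages are simply `[0.5, -0.5, 0.5, -0.5]`, independent of group-level variance.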