MolRGen: A Training and Evaluation Setting for De Novo Molecular Generation with Reasoning Models

arXiv cs.LG / 3/20/2026

Key Points

  • MolRGen introduces a scalable benchmark and dataset for training and evaluating reasoning-based LLMs on de novo molecular generation and property prediction.
  • It proposes a diversity-aware top-k score that captures both the quality and diversity of generated molecules, addressing evaluation gaps.
  • The work demonstrates that the setting can be used to train LLMs for molecular generation, including a 24B model trained with reinforcement learning, along with an analysis of its performance and limitations.
  • The framework targets learning without ground-truth supervision, enabling de novo design without known high-scoring candidates and bridging gaps in existing approaches.

Abstract

Recent advances in reasoning-based large language models (LLMs) have demonstrated substantial improvements in complex problem-solving tasks. Motivated by these advances, several works have explored the application of reasoning LLMs to drug discovery and molecular design. However, most existing approaches either focus on evaluation or rely on training setups that require ground-truth labels, such as molecule pairs with known property modifications. Such supervision is unavailable in de novo molecular generation, where the objective is to generate novel molecules that optimize a desirability score without prior knowledge of high-scoring candidates. To bridge this gap, we introduce MolRGen, a large-scale benchmark and dataset for training and evaluating reasoning-based LLMs on de novo molecular generation. Our contributions are threefold. First, we propose a setting to evaluate and train models for de novo molecular generation and property prediction. Second, we introduce a novel diversity-aware top-k score that captures both the quality and diversity of generated molecules. Third, we show our setting can be used to train LLMs for molecular generation, training a 24B LLM with reinforcement learning, and we provide a detailed analysis of its performance and limitations.
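The abstract does not give the exact formula for the diversity-aware top-k score, but the idea of jointly rewarding quality and diversity can be sketched. The following is an illustrative metric, not the paper's definition: take the mean desirability of the k best candidates and discount it by their mean pairwise Tanimoto similarity (computed here on hypothetical fingerprint bit sets), so a model that proposes k near-duplicates scores lower than one that proposes k distinct high-quality molecules.

```python
from itertools import combinations

def tanimoto(a: set, b: set) -> float:
    """Jaccard/Tanimoto similarity between two fingerprint bit sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def diversity_aware_top_k(scores, fps, k=3):
    """Illustrative diversity-aware top-k score (NOT MolRGen's exact metric):
    mean desirability of the k highest-scoring molecules, scaled by
    (1 - mean pairwise Tanimoto similarity) among them, so that
    redundant top candidates are penalized."""
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    quality = sum(scores[i] for i in top) / len(top)
    if len(top) < 2:
        return quality
    pairs = list(combinations(top, 2))
    mean_sim = sum(tanimoto(fps[i], fps[j]) for i, j in pairs) / len(pairs)
    return quality * (1.0 - mean_sim)

# Two identical top molecules collapse to 0; two disjoint ones keep full quality.
print(diversity_aware_top_k([0.9, 0.8], [{1, 2}, {1, 2}], k=2))  # → 0.0
print(diversity_aware_top_k([0.9, 0.8], [{1, 2}, {3, 4}], k=2))  # → 0.85
```

In practice the fingerprints would come from a cheminformatics toolkit (e.g. Morgan fingerprints of generated SMILES); the multiplicative quality-diversity trade-off here is one plausible design choice among several.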