C-MORAL: Controllable Multi-Objective Molecular Optimization with Reinforcement Alignment for LLMs

arXiv cs.LG / April 28, 2026

📰 News · Models & Research

Key Points

  • The paper introduces C-MORAL, a reinforcement learning post-training framework to make LLM-based molecular optimization controllable under multiple, competing drug-design constraints.
  • C-MORAL uses group-based relative optimization, aligns property scores across heterogeneous objectives, and applies continuous non-linear reward aggregation to improve training stability (a sketch of the group-relative idea follows this list).
  • On the C-MuMOInstruct benchmark, C-MORAL achieves stronger performance than prior state-of-the-art methods in both in-domain and out-of-domain settings.
  • The reported Success Optimized Rate (SOR) reaches 48.9% on in-domain (IND) tasks and 39.5% on out-of-domain (OOD) tasks, while largely preserving scaffold similarity.
  • The authors provide publicly available code and models, enabling further evaluation and reuse of the approach for constrained multi-objective molecular design.
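
To make "group-based relative optimization" concrete: a common realization is a GRPO-style update, where several candidate molecules are sampled per prompt and each candidate's scalar reward is standardized against its own group, so the policy learns from relative ranking rather than absolute reward scale. The minimal Python sketch below illustrates that idea; the function name and normalization details are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def group_relative_advantages(group_rewards, eps=1e-8):
    """Standardize the scalar rewards of one sampled group of candidate
    molecules, so each candidate's advantage reflects how it ranks
    against its siblings rather than the absolute reward magnitude."""
    r = np.asarray(group_rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# Example: four candidate edits of the same input molecule, each scored
# by an aggregated multi-property reward in [0, 1].
print(group_relative_advantages([0.12, 0.55, 0.48, 0.90]))
```

Normalizing within a group is one standard way to keep policy updates stable when reward magnitudes vary across prompts and objectives, which is consistent with the stability motivation the paper gives.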

Abstract

Large language models (LLMs) show promise for molecular optimization, but aligning them with selective and competing drug-design constraints remains challenging. We propose C-MORAL, a reinforcement learning post-training framework for controllable multi-objective molecular optimization. C-MORAL combines group-based relative optimization, property score alignment for heterogeneous objectives, and continuous non-linear reward aggregation to improve stability across competing properties. Experiments on the C-MuMOInstruct benchmark show that C-MORAL consistently outperforms state-of-the-art models in both in-domain and out-of-domain settings, achieving the best Success Optimized Rate (SOR) of 48.9% on IND tasks and 39.5% on OOD tasks, while largely preserving scaffold similarity. These results suggest that RL post-training is an effective way to align molecular language models with continuous molecular design objectives. Our code and models are publicly available at https://github.com/Rwigie/C-MORAL.
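
A note on the abstract's mechanics: "property score alignment" and "continuous non-linear reward aggregation" plausibly amount to mapping each heterogeneous property onto a common bounded scale and then fusing the aligned scores with a smooth non-linear function. The sketch below shows one such scheme, a sigmoid desirability per property followed by a geometric mean; the specific functions, targets, and property names are assumptions for illustration, not the paper's definitions.

```python
import math

def align_score(value, target, scale):
    """Map a raw property value onto (0, 1) with a smooth sigmoid
    centered at `target`; `scale` sets how sharply the score saturates.
    This puts heterogeneous properties on a common bounded scale."""
    return 1.0 / (1.0 + math.exp(-(value - target) / scale))

def aggregate_reward(aligned_scores, eps=1e-8):
    """Geometric mean of aligned scores: continuous and non-linear, it
    penalizes a molecule that badly fails any single objective, unlike
    a plain average where strong objectives can mask failures."""
    log_sum = sum(math.log(max(s, eps)) for s in aligned_scores)
    return math.exp(log_sum / len(aligned_scores))

# Example: three competing objectives aligned to (0, 1), then fused.
scores = [
    align_score(2.1, target=2.5, scale=0.5),    # hypothetical logP goal
    align_score(0.72, target=0.6, scale=0.05),  # hypothetical QED goal
    align_score(7.8, target=7.0, scale=0.3),    # hypothetical pIC50 goal
]
print(aggregate_reward(scores))
```

A multiplicative fusion like the geometric mean is one way a single scalar reward can stay differentiable in each objective while still enforcing that all constraints be met at once, which fits the paper's framing of competing properties.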