BioAlchemy: Distilling Biological Literature into Reasoning-Ready Reinforcement Learning Training Data

arXiv cs.AI / 4/7/2026


Key Points

  • The paper argues that biology questions in existing large-scale reasoning datasets align poorly with the topic distribution of current biological research, a mismatch that can hurt reasoning-model performance on biology tasks.
  • It introduces BioAlchemy, a pipeline to extract diverse, verifiable biology question-answer pairs from biological research literature for reinforcement learning use.
  • The authors release BioAlchemy-345K, a dataset with 345K biology reasoning problems, and show that matching the dataset’s topic mix to modern biology improves reinforcement-learning outcomes.
  • They also present BioAlchemist-8B, an 8B reasoning model variant that achieves a 9.12% improvement over its base model on biology benchmarks.
  • The resulting model is released on Hugging Face, so downstream researchers and teams can build further biology-focused reasoning systems on top of it.
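The paper's exact alignment procedure is not described in this summary, but the core idea of "matching the dataset's topic mix to modern biology" can be approximated by importance-weighted resampling: weight each example by the ratio of its topic's target probability to its observed frequency, then sample with those weights. The sketch below is illustrative only; the function name, toy topics, and target distribution are assumptions, not the authors' implementation.

```python
import random
from collections import Counter

def resample_to_target(items, target_dist, n, seed=0):
    """Resample `items` (dicts with a 'topic' key) so that topic
    frequencies approximate `target_dist` (topic -> probability).

    Illustrative sketch, not the BioAlchemy pipeline itself."""
    counts = Counter(it["topic"] for it in items)
    # Weight = target probability / observed count, so over-represented
    # topics are down-sampled and under-represented ones up-sampled.
    weights = [
        target_dist.get(it["topic"], 0.0) / counts[it["topic"]]
        for it in items
    ]
    rng = random.Random(seed)
    return rng.choices(items, weights=weights, k=n)

# Toy corpus skewed 80/20 toward "genetics"; the hypothetical target
# mix instead weights "genomics" more heavily (60/40).
corpus = (
    [{"topic": "genetics", "q": "..."}] * 80
    + [{"topic": "genomics", "q": "..."}] * 20
)
target = {"genetics": 0.4, "genomics": 0.6}
sample = resample_to_target(corpus, target, n=1000)
mix = Counter(it["topic"] for it in sample)
```

After resampling, `mix` should track the target distribution (roughly 400 genetics / 600 genomics items) rather than the skewed source corpus, which is the effect the paper attributes its RL gains to.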

Abstract

Despite the large corpus of biology training text, the impact of reasoning models on biological research generally lags behind math and coding. In this work, we show that biology questions from current large-scale reasoning datasets do not align well with modern research topic distributions in biology, and that this topic imbalance may negatively affect performance. In addition, we find that methods for extracting challenging and verifiable research problems from biology research text are a critical yet underdeveloped ingredient in applying reinforcement learning for better performance on biology research tasks. We introduce BioAlchemy, a pipeline for sourcing a diverse set of verifiable question-and-answer pairs from a scientific corpus of biology research text. We curate BioAlchemy-345K, a training dataset containing over 345K scientific reasoning problems in biology. Then, we demonstrate how aligning our dataset to the topic distribution of modern scientific biology can be used with reinforcement learning to improve reasoning performance. Finally, we present BioAlchemist-8B, which improves over its base reasoning model by 9.12% on biology benchmarks. These results demonstrate the efficacy of our approach for developing stronger scientific reasoning capabilities in biology. The BioAlchemist-8B model is available at: https://huggingface.co/BioAlchemy.