DeepImagine: Learning Biomedical Reasoning via Successive Counterfactual Imagining

arXiv cs.CL / 4/28/2026


Key Points

  • The paper introduces DeepImagine, a framework to improve LLMs’ ability to predict clinical trial outcomes by learning biomedical reasoning from counterfactual scenarios.
  • DeepImagine approximates hidden causal mechanisms by training models to infer how trial results would change when experimental conditions are perturbed (e.g., dosage, study arms, geography, and measured outcomes).
  • It builds natural and approximate counterfactual pairs from real clinical trials, applying supervised fine-tuning when strict counterfactual supervision is available and reinforcement learning with verifiable rewards when only approximate pairs can be retrieved.
  • Training is further augmented with synthetic causal reasoning traces that give plausible explanations of local counterfactual transitions, and the approach is tested with models under 10B parameters such as Qwen3.5-9B.
  • The authors aim to show consistent improvement over untuned LLMs and traditional correlational baselines, and emphasize that the learned reasoning trajectories could offer interpretable, mechanism-focused signals for biomedical language models.
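
The pair-construction idea can be illustrated with a minimal sketch: treat each trial as a record of attributes, and keep pairs of trials that agree on everything except one perturbed condition (e.g. dosage). The field names and matching rule here are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch: approximate counterfactual pairs are trial records
# that differ in exactly one attribute, so the differing attribute plays
# the role of the "perturbed" experimental condition.
# All field names (intervention, dosage_mg, ...) are invented for this example.

from itertools import combinations

ATTRS = ("intervention", "dosage_mg", "outcome_measure", "region")

def differing_attrs(a: dict, b: dict) -> list[str]:
    """Return the attribute names on which two trial records disagree."""
    return [k for k in ATTRS if a[k] != b[k]]

def approximate_counterfactual_pairs(trials: list[dict]) -> list[tuple[dict, dict]]:
    """Keep pairs that match on all attributes except exactly one."""
    return [
        (a, b)
        for a, b in combinations(trials, 2)
        if len(differing_attrs(a, b)) == 1
    ]

trials = [
    {"intervention": "drug_x", "dosage_mg": 10, "outcome_measure": "ORR", "region": "EU", "success": False},
    {"intervention": "drug_x", "dosage_mg": 50, "outcome_measure": "ORR", "region": "EU", "success": True},
    {"intervention": "drug_y", "dosage_mg": 50, "outcome_measure": "OS",  "region": "US", "success": True},
]

pairs = approximate_counterfactual_pairs(trials)
# only the first two trials pair up: they differ solely in dosage_mg
```

In a real pipeline the matching would be far looser (retrieval over many attributes rather than exact equality), which is precisely why the paper falls back to RL with verifiable rewards in that regime.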

Abstract

Predicting the outcomes of prospective clinical trials remains a major challenge for large language models. Prior work has shown that both traditional correlational predictors, such as random forests and logistic regression, and strong commercial LLMs achieve limited performance on this task. In this paper, we propose DeepImagine, a framework for teaching LLMs biomedical reasoning through successive counterfactual imagining. The central idea is to approximate hidden causal mechanisms of clinical trials by training models to infer how observed trial results would change under controlled perturbations of experimental conditions, such as dosage, outcome measures, study arms, geography, and other trial attributes. To support this objective, we construct both natural and approximate counterfactual pairs from real clinical trials with reported outcomes. For settings where strict counterfactual supervision is available, such as paired outcome measures or dose-ranging study arms within the same trial, we train models with supervised fine-tuning. For broader settings where only approximate counterfactual pairs can be retrieved, we optimize models with reinforcement learning using verifiable rewards based on downstream benchmark correctness. We further augment training with synthetic reasoning traces that provide causally plausible explanations for local counterfactual transitions. Using this pipeline, we train language models under 10B parameters, including Qwen3.5-9B, and evaluate them on clinical trial outcome prediction. We aim to show that DeepImagine consistently improves over untuned language models and traditional correlational baselines. Finally, we aim to show that the learned reasoning trajectories provide interpretable signals about how models represent trial-level mechanisms, suggesting a practical path toward more mechanistic and scientifically useful biomedical language models.