Learning Diagnostic Reasoning for Decision Support in Toxicology

arXiv cs.CL / 4/1/2026


Key Points

  • The paper introduces DeToxR, an RL-aligned LLM approach for emergency toxicology that fuses unstructured narrative accounts (e.g., paramedic notes and unreliable histories) with structured vital-sign data to support rapid diagnosis.
  • DeToxR performs multi-label prediction across 14 substance classes and uses Group Relative Policy Optimization (GRPO) to fine-tune an LLM while optimizing directly for clinical performance.
  • The method builds a reward signal from a multi-label agreement metric that penalizes both missed co-ingestions and hallucinated absent poisons, aiming to improve calibration under uncertainty.
  • Experiments show DeToxR significantly outperforms both its unadapted base LLM and supervised baselines; in a clinical validation study, it identified the correct poisons more accurately than an expert toxicologist (Micro-F1: 0.644 vs. 0.473).
  • The results suggest RL-aligned LLMs may be effective for high-stakes decision support where inputs are heterogeneous, noisy, and incomplete.
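The reward described in the key points can be illustrated with a short sketch. The names and details below are hypothetical (the paper does not publish this code); the sketch simply shows how a micro-F1-style agreement metric over substance-class sets penalizes both missed co-ingestions (false negatives) and hallucinated absent poisons (false positives):

```python
# Hypothetical sketch of a multi-label agreement reward of the kind the
# paper describes: micro-F1 over substance-class sets, so the model is
# penalized both for missing co-ingested substances (false negatives)
# and for hallucinating absent poisons (false positives).

def micro_f1_reward(predicted: set[int], actual: set[int]) -> float:
    """Micro-F1 between predicted and true substance-class index sets."""
    tp = len(predicted & actual)   # correctly identified poisons
    fp = len(predicted - actual)   # hallucinated absent poisons
    fn = len(actual - predicted)   # missed co-ingestions
    if tp == 0:
        return 0.0                 # no overlap -> zero reward
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For example, predicting classes {0, 3} when the true ingestion is {0, 3, 7} yields precision 1.0 but recall 2/3, giving a reward of 0.8 — the missed co-ingestion lowers the score even though nothing was hallucinated.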

Abstract

Acute poly-substance intoxication requires rapid, life-saving decisions under substantial uncertainty, as clinicians must rely on incomplete ingestion details and nonspecific symptoms. Effective diagnostic reasoning in this chaotic environment requires fusing unstructured, non-medical narratives (e.g., paramedic scene descriptions and unreliable patient self-reports or known histories) with structured medical data like vital signs. While Large Language Models (LLMs) show potential for processing such heterogeneous inputs, they struggle in this setting, often underperforming simple baselines that rely solely on patient histories. To address this, we present DeToxR (Decision-support for Toxicology with Reasoning), the first adaptation of Reinforcement Learning (RL) to emergency toxicology. We design a robust data-fusion engine for multi-label prediction across 14 substance classes based on an LLM fine-tuned with Group Relative Policy Optimization (GRPO). We optimize the model's reasoning directly using a clinical performance reward. By formulating a multi-label agreement metric as the reward signal, the model is explicitly penalized for missing co-ingested substances and hallucinating absent poisons. Our model significantly outperforms its unadapted base LLM counterpart and supervised baselines. Furthermore, in a clinical validation study, the model demonstrates a clinical advantage, outperforming an expert toxicologist in identifying the correct poisons (Micro-F1: 0.644 vs. 0.473). These results demonstrate the potential of RL-aligned LLMs to synthesize unstructured pre-clinical narratives and structured medical data for decision support in high-stakes environments.
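The GRPO step the abstract refers to can be sketched at a high level. GRPO samples a group of candidate completions per prompt, scores each with the reward, and normalizes rewards within the group to form advantages — no learned value function is needed. The function below is a minimal illustration of that group-relative normalization, not the paper's implementation:

```python
# Minimal sketch of the group-relative advantage at the heart of GRPO:
# for one case, sample a group of candidate diagnoses, score each with
# the clinical reward, then standardize rewards within the group.
# Completions scoring above the group mean get positive advantages and
# are reinforced; those below get negative advantages.

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Standardize a group of rewards to zero mean, unit std (GRPO-style)."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    if std == 0:
        # All candidates scored identically: no learning signal.
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]
```

For a group of two candidates scoring 0.2 and 0.8, the advantages are -1.0 and 1.0: the higher-reward diagnosis is pushed up relative to its group, which is what lets a scalar clinical metric like multi-label agreement steer the model's reasoning directly.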