FDARxBench: Benchmarking Regulatory and Clinical Reasoning on FDA Generic Drug Assessment

arXiv cs.AI / March 23, 2026


Key Points

  • A new expert-curated benchmark, FDARxBench, evaluates document-grounded question-answering using FDA drug label documents to assess regulatory and clinical reasoning.
  • It was developed with FDA regulatory assessors and uses a multi-stage pipeline to generate high-quality, expert-curated QA examples spanning factual, multi-hop, and refusal tasks.
  • The evaluation framework tests both open-book and closed-book reasoning and uncovers substantial gaps in factual grounding, long-context retrieval, and safe refusal behavior of current models.
  • While motivated by FDA generic drug assessment needs, FDARxBench also provides a foundation for regulatory-grade evaluation of drug-label comprehension and LLM behavior.
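To make the open-book vs. closed-book distinction concrete, here is a minimal sketch of how such an evaluation could be harnessed. The `QAItem` schema, the `REFUSE` convention, and exact-match scoring are all illustrative assumptions; FDARxBench's actual data format and metrics are not specified in this summary.

```python
from dataclasses import dataclass

@dataclass
class QAItem:
    """Hypothetical benchmark item; the real FDARxBench schema may differ."""
    question: str
    label_excerpt: str   # drug-label source text (shown only in open-book mode)
    answer: str          # gold answer, or "REFUSE" for refusal items
    task: str            # "factual", "multi-hop", or "refusal"

def build_prompt(item: QAItem, open_book: bool) -> str:
    """Open-book prompts include the label excerpt; closed-book prompts omit it."""
    context = f"Label excerpt:\n{item.label_excerpt}\n\n" if open_book else ""
    return (f"{context}Question: {item.question}\n"
            "If the label does not support an answer, reply REFUSE.")

def score(items, model, open_book: bool) -> dict:
    """Exact-match accuracy per task type (a toy metric for illustration)."""
    totals, correct = {}, {}
    for item in items:
        pred = model(build_prompt(item, open_book)).strip()
        totals[item.task] = totals.get(item.task, 0) + 1
        if pred == item.answer:
            correct[item.task] = correct.get(item.task, 0) + 1
    return {task: correct.get(task, 0) / n for task, n in totals.items()}
```

Running `score` twice per model, once with `open_book=True` and once with `open_book=False`, separates what a model can ground in the provided label text from what it recalls (or hallucinates) without it; per-task breakdowns expose refusal behavior separately from factual accuracy.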

Abstract

We introduce an expert-curated, real-world benchmark for evaluating document-grounded question answering (QA), motivated by generic drug assessment and built on U.S. Food and Drug Administration (FDA) drug label documents. Drug labels contain rich but heterogeneous clinical and regulatory information, making accurate question answering difficult for current language models. In collaboration with FDA regulatory assessors, we introduce FDARxBench: we construct a multi-stage pipeline for generating high-quality, expert-curated QA examples spanning factual, multi-hop, and refusal tasks, and design evaluation protocols to assess both open-book and closed-book reasoning. Experiments across proprietary and open-weight models reveal substantial gaps in factual grounding, long-context retrieval, and safe refusal behavior. While motivated by FDA generic drug assessment needs, the benchmark also provides a challenging foundation for regulatory-grade evaluation of drug-label comprehension and LLM behavior.