MedConclusion: A Benchmark for Biomedical Conclusion Generation from Structured Abstracts

arXiv cs.CL / 4/9/2026


Key Points

  • MedConclusion is introduced as a large-scale dataset of 5.7M PubMed structured abstracts designed to benchmark biomedical conclusion generation from structured evidence.
  • Each example links non-conclusion abstract sections to the original author-written conclusion, creating natural supervision for evidence-to-conclusion reasoning.
  • The dataset includes journal-level metadata (e.g., biomedical category and SJR) to support subgroup analyses across biomedical domains.
  • Initial experiments evaluate multiple LLMs with conclusion-focused vs summary-focused prompting and use both reference-based metrics and LLM-as-a-judge scoring.
  • The study reports that conclusion generation is behaviorally distinct from summary writing, that strong models remain closely clustered under current automatic metrics, and that judge identity can substantially shift absolute evaluation scores.

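The evidence-to-conclusion pairing described above can be illustrated with a minimal sketch. The field names (`pmid`, `sections`, `conclusion`, `journal_meta`) and the example text are illustrative assumptions, not the dataset's actual schema:

```python
# Hypothetical MedConclusion-style instance: the non-conclusion sections of a
# structured abstract paired with the author-written conclusion.
# All field names and contents are illustrative, not the real schema.
instance = {
    "pmid": "00000000",  # placeholder PubMed ID
    "sections": {
        "BACKGROUND": "Statin use and fracture risk remain debated.",
        "METHODS": "Retrospective cohort of 10,000 adults followed for 5 years.",
        "RESULTS": "Hazard ratio 0.82 (95% CI 0.70-0.96) for fracture.",
    },
    "conclusion": "Statin use was associated with modestly lower fracture risk.",
    "journal_meta": {"category": "Endocrinology", "sjr": 1.8},  # subgroup metadata
}

def build_evidence_prompt(example: dict) -> str:
    """Concatenate the non-conclusion sections as model input, leaving the
    author-written conclusion out as the generation target."""
    evidence = "\n".join(
        f"{name}: {text}" for name, text in example["sections"].items()
    )
    return f"{evidence}\nCONCLUSIONS:"

prompt = build_evidence_prompt(instance)
```

The author-written `conclusion` field then serves as the natural reference for supervised training or evaluation.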
Abstract

Large language models (LLMs) are widely explored for reasoning-intensive research tasks, yet resources for testing whether they can infer scientific conclusions from structured biomedical evidence remain limited. We introduce **MedConclusion**, a large-scale dataset of **5.7M** PubMed structured abstracts for biomedical conclusion generation. Each instance pairs the non-conclusion sections of an abstract with the original author-written conclusion, providing naturally occurring supervision for evidence-to-conclusion reasoning. MedConclusion also includes journal-level metadata such as biomedical category and SJR, enabling subgroup analysis across biomedical domains. As an initial study, we evaluate diverse LLMs under conclusion and summary prompting settings and score outputs with both reference-based metrics and LLM-as-a-judge. We find that conclusion writing is behaviorally distinct from summary writing, strong models remain closely clustered under current automatic metrics, and judge identity can substantially shift absolute scores. MedConclusion provides a reusable data resource for studying scientific evidence-to-conclusion reasoning. Our code and data are available at: https://github.com/Harvard-AI-and-Robotics-Lab/MedConclusion.
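Reference-based scoring of a generated conclusion against the author-written one can be sketched with a simple unigram-overlap F1. This is only a stand-in for metrics such as ROUGE-1; the paper's exact metric suite is not reproduced here:

```python
from collections import Counter

def unigram_f1(generated: str, reference: str) -> float:
    """Token-overlap F1 between a generated conclusion and the author-written
    reference. A simplified stand-in for reference-based metrics like ROUGE-1,
    not the metric implementation used in the paper."""
    gen = generated.lower().split()
    ref = reference.lower().split()
    if not gen or not ref:
        return 0.0
    # Count matched tokens with multiplicity via multiset intersection.
    overlap = sum((Counter(gen) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(gen)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

score = unigram_f1(
    "statin use lowered fracture risk",
    "statin use was associated with modestly lower fracture risk",
)
```

Because such lexical metrics cluster strong models tightly, the study complements them with LLM-as-a-judge scoring, where the choice of judge model itself shifts absolute scores.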