SciMDR: Benchmarking and Advancing Scientific Multimodal Document Reasoning

arXiv cs.CL · March 13, 2026

Key Points

  • The authors propose a two-stage synthesize-and-reground framework to generate faithful reasoning data for scientific multimodal documents.
  • They build SciMDR, a large-scale dataset with 300K QA pairs across 20K papers, plus SciMDR-Eval, an expert-annotated evaluation benchmark.
  • Experiments show models fine-tuned on SciMDR achieve significant gains on scientific QA benchmarks, especially for complex document-level reasoning.
  • The work addresses the trade-off among scale, faithfulness, and realism in creating datasets for foundation-model training.

Abstract

Constructing scientific multimodal document reasoning datasets for foundation model training involves an inherent trade-off among scale, faithfulness, and realism. To address this challenge, we introduce the synthesize-and-reground framework, a two-stage pipeline comprising: (1) Claim-Centric QA Synthesis, which generates faithful, isolated QA pairs with reasoning over focused segments, and (2) Document-Scale Regrounding, which programmatically re-embeds these pairs into full-document tasks to ensure realistic complexity. Using this framework, we construct SciMDR, a large-scale training dataset for cross-modal comprehension, comprising 300K QA pairs with explicit reasoning chains across 20K scientific papers. We further construct SciMDR-Eval, an expert-annotated benchmark to evaluate multimodal comprehension within full-length scientific workflows. Experiments demonstrate that models fine-tuned on SciMDR achieve significant improvements across multiple scientific QA benchmarks, particularly on tasks requiring complex document-level reasoning.
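To make the two-stage structure concrete, here is a minimal sketch of how a synthesize-and-reground pipeline could be organized. All names, data shapes, and logic here are illustrative assumptions, not the paper's implementation: stage 1 would in practice invoke a generator model over a focused segment, and stage 2 would re-embed the resulting pair into the full document so the task requires document-level retrieval, not just local reading.

```python
from dataclasses import dataclass

@dataclass
class QAPair:
    question: str
    answer: str
    reasoning: str
    source_segment: str  # the focused segment the pair was synthesized from

def synthesize_qa(segment: str) -> QAPair:
    """Stage 1 (Claim-Centric QA Synthesis, sketched): produce a QA pair
    grounded in a single focused segment. A real pipeline would call a
    generator model here; this stub simply wraps the segment's claim."""
    return QAPair(
        question="What does this segment claim?",
        answer=segment,
        reasoning=f"The claim is stated directly in the segment: {segment!r}",
        source_segment=segment,
    )

def reground(pair: QAPair, full_document: list[str]) -> dict:
    """Stage 2 (Document-Scale Regrounding, sketched): re-embed the isolated
    QA pair into the full document, so answering requires locating the
    supporting segment among distractor context."""
    assert pair.source_segment in full_document, "segment must come from the document"
    return {
        "context": "\n".join(full_document),  # full document, not the isolated segment
        "question": pair.question,
        "answer": pair.answer,
        "evidence_index": full_document.index(pair.source_segment),
    }

# Toy document of three segments; the QA pair is synthesized from segment 1,
# then regrounded into the full document as a document-scale task.
doc = ["Segment A: background.", "Segment B: the key result.", "Segment C: discussion."]
pair = synthesize_qa(doc[1])
task = reground(pair, doc)
```

The point of the second stage, as described in the abstract, is that faithfulness is established on a small, checkable segment, while realism is restored by putting the pair back into full-document context.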