
OPENXRD: A Comprehensive Benchmark Framework for LLM/MLLM XRD Question Answering

arXiv cs.CL · March 11, 2026


Key Points

  • OPENXRD is a new benchmarking framework designed to evaluate large language models (LLMs) and multimodal LLMs (MLLMs) specifically on crystallography question answering tasks.
  • The framework includes 217 expert-curated X-ray diffraction questions, each tested under both closed-book and open-book conditions; the open-book context consists of concise reference passages generated by GPT-4.5 and refined by crystallography experts (a minimal protocol sketch follows this list).
  • Benchmarking across 74 state-of-the-art models reveals that mid-sized LLMs (7B–70B parameters) benefit the most from contextual information, while very large models show saturation or interference effects.
  • Expert-reviewed context boosts performance significantly more than AI-generated context of matched token count, showing that content quality, not quantity, drives performance.
  • OPENXRD provides a reproducible diagnostic tool to assess models' reasoning, knowledge integration, and sensitivity to guidance in scientific domains, paving the way for enhanced multimodal and retrieval-augmented crystallography AI systems.
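
To make the two-condition protocol concrete, here is a minimal sketch of a closed-book/open-book comparison in the style the benchmark describes. The `XRDQuestion` schema, the `build_prompt` formatting, and the `ask_model` callable are illustrative assumptions, not the paper's released interface.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical item schema; field names are assumptions, not the
# benchmark's released format.
@dataclass
class XRDQuestion:
    question: str
    choices: list[str]   # multiple-choice options
    answer: str          # gold option letter, e.g. "B"
    context: str         # expert-refined reference passage

def build_prompt(item: XRDQuestion, open_book: bool) -> str:
    """Format one question for either evaluation condition."""
    options = "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(item.choices))
    header = f"Context:\n{item.context}\n\n" if open_book else ""
    return (f"{header}Question: {item.question}\n{options}\n"
            "Answer with the letter of the correct option.")

def accuracy(items: list[XRDQuestion],
             ask_model: Callable[[str], str],
             open_book: bool) -> float:
    """Fraction of questions a model answers correctly under one condition."""
    correct = sum(
        ask_model(build_prompt(q, open_book)).strip().upper().startswith(q.answer)
        for q in items
    )
    return correct / len(items)

# Context assimilation is the open-book gain over the closed-book baseline:
#   gain = accuracy(items, ask_model, open_book=True)
#        - accuracy(items, ask_model, open_book=False)
```

The same question set is scored twice, so the only variable between the two runs is the presence of the reference passage, which is what makes the gain attributable to context use rather than to the questions themselves.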


arXiv:2507.09155 (cs)
[Submitted on 12 Jul 2025 (v1), last revised 10 Mar 2026 (this version, v2)]

Title: OPENXRD: A Comprehensive Benchmark Framework for LLM/MLLM XRD Question Answering

Authors: Ali Vosoughi and 6 other authors
Abstract: We introduce OPENXRD, a comprehensive benchmarking framework for evaluating large language models (LLMs) and multimodal LLMs (MLLMs) in crystallography question answering. The framework measures context assimilation, that is, how models use fixed, domain-specific supporting information during inference. It includes 217 expert-curated X-ray diffraction (XRD) questions covering fundamental to advanced crystallographic concepts, each evaluated under closed-book (without context) and open-book (with context) conditions, where the latter includes concise reference passages generated by GPT-4.5 and refined by crystallography experts. We benchmark 74 state-of-the-art LLMs and MLLMs, including the GPT-4, GPT-5, O-series, LLaVA, LLaMA, Qwen, Mistral, and Gemini families, to quantify how different architectures and scales assimilate external knowledge. Results show that small and mid-sized models (7B–70B parameters) gain the most from contextual materials, while very large models often show saturation or interference effects. Expert-reviewed materials provide significantly higher improvements than AI-generated ones even when token counts are matched, confirming that content quality, not quantity, drives performance. OPENXRD offers a reproducible diagnostic benchmark for assessing reasoning, knowledge integration, and guidance sensitivity in scientific domains, and provides a foundation for future multimodal and retrieval-augmented crystallography systems.
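
As an illustration of the size-dependent pattern the abstract reports, the following sketch groups per-model open-book gains by parameter scale. All model names and accuracy numbers below are invented placeholders, not results from the paper.

```python
from statistics import mean

# Invented placeholder results: (params in billions, closed-book accuracy,
# open-book accuracy). These numbers only illustrate the reported trend.
results = {
    "small-3b":   (3,   0.41, 0.49),
    "mid-13b":    (13,  0.55, 0.68),
    "mid-70b":    (70,  0.63, 0.74),
    "large-400b": (400, 0.78, 0.79),
}

def bucket(params_b: float) -> str:
    """Coarse size buckets around the 7B-70B band named in the abstract."""
    if params_b < 7:
        return "small (<7B)"
    if params_b <= 70:
        return "mid (7B-70B)"
    return "large (>70B)"

gains: dict[str, list[float]] = {}
for params_b, closed_acc, open_acc in results.values():
    gains.setdefault(bucket(params_b), []).append(open_acc - closed_acc)

for name, deltas in gains.items():
    print(f"{name}: mean open-book gain = {mean(deltas):+.3f}")
```

Under such an analysis, a flat or slightly negative gain in the large bucket would correspond to the saturation or interference effect the abstract describes.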
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
MSC classes: 68T50, 68T07
Cite as: arXiv:2507.09155 [cs.CL]
  (or arXiv:2507.09155v2 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2507.09155

Submission history

From: Ali Vosoughi
[v1] Sat, 12 Jul 2025 06:25:22 UTC (731 KB)
[v2] Tue, 10 Mar 2026 04:06:47 UTC (1,274 KB)