
MaterialFigBENCH: benchmark dataset with figures for evaluating college-level materials science problem-solving abilities of multimodal large language models

arXiv cs.CL / 3/13/2026


Key Points

  • MaterialFigBench is a benchmark dataset designed to evaluate multimodal large language models on university-level materials science problems that require accurate interpretation of figures such as phase diagrams and diffraction patterns.
  • The dataset comprises 137 free-response problems adapted from standard materials science textbooks, spanning topics including crystal structures, mechanical properties, diffusion, phase diagrams, phase transformations, and electronic properties.
  • To mitigate ambiguity in reading numerical values from images, expert-defined answer ranges are provided where appropriate (a minimal grading sketch follows this list).
  • The paper evaluates several state-of-the-art multimodal LLMs, including ChatGPT and GPT models accessed through OpenAI APIs, and analyzes performance across problem categories and model versions.
  • Findings show that while overall accuracy improves with model updates, current LLMs still struggle with genuine visual understanding and quantitative interpretation of materials science figures; many correct answers arise from memorized domain knowledge rather than from reading the provided images.
  • The benchmark highlights persistent weaknesses in visual reasoning, numerical precision, and significant-digit handling, while also identifying problem types where performance has improved.
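
As an illustration of how range-based grading of numeric answers could work, here is a minimal Python sketch. The function names, the [lower, upper] tolerance scheme, and the example values are assumptions made for illustration; the paper only states that expert-defined answer ranges are used where appropriate.

```python
import math

def grade_numeric_answer(model_answer: float, lower: float, upper: float) -> bool:
    """Accept a numeric answer if it falls inside the expert-defined range."""
    return lower <= model_answer <= upper

def round_to_sig_figs(value: float, sig_figs: int) -> float:
    """Round to a given number of significant digits, the kind of
    significant-digit handling the benchmark probes."""
    if value == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(value)))
    factor = 10 ** (sig_figs - 1 - exponent)
    return round(value * factor) / factor

# Example: a diffusion coefficient read off an Arrhenius plot (values invented).
print(grade_numeric_answer(4.7e-11, lower=4.0e-11, upper=5.5e-11))  # True
print(round_to_sig_figs(4.7389e-11, 2))                             # 4.7e-11
```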

Abstract

We present MaterialFigBench, a benchmark dataset designed to evaluate the ability of multimodal large language models (LLMs) to solve university-level materials science problems that require accurate interpretation of figures. Unlike existing benchmarks that primarily rely on textual representations, MaterialFigBench focuses on problems in which figures such as phase diagrams, stress-strain curves, Arrhenius plots, diffraction patterns, and microstructural schematics are indispensable for deriving correct answers. The dataset consists of 137 free-response problems adapted from standard materials science textbooks, covering a broad range of topics including crystal structures, mechanical properties, diffusion, phase diagrams, phase transformations, and electronic properties of materials. To address unavoidable ambiguity in reading numerical values from images, expert-defined answer ranges are provided where appropriate. We evaluate several state-of-the-art multimodal LLMs, including ChatGPT and GPT models accessed via OpenAI APIs, and analyze their performance across problem categories and model versions. The results reveal that, although overall accuracy improves with model updates, current LLMs still struggle with genuine visual understanding and quantitative interpretation of materials science figures. In many cases, correct answers are obtained by relying on memorized domain knowledge rather than by reading the provided images. MaterialFigBench highlights persistent weaknesses in visual reasoning, numerical precision, and significant-digit handling, while also identifying problem types where performance has improved. This benchmark provides a systematic and domain-specific foundation for advancing multimodal reasoning capabilities in materials science and for guiding the development of future LLMs with stronger figure-based understanding.
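
For readers curious how a figure-dependent problem might be posed to a multimodal model through the OpenAI API, as in the paper's evaluation setup, the sketch below sends an image plus a question via the chat completions endpoint. The model name, file name, and prompt are placeholders, not the paper's actual harness.

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder figure and question; the benchmark's exact prompting
# protocol is not described in this summary.
with open("phase_diagram.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; the paper evaluates several GPT versions
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": ("From the attached phase diagram, estimate the eutectic "
                      "temperature in degrees Celsius. Report one number.")},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

Grading the returned text would then reduce to parsing the reported number and applying a range check like the one sketched after the Key Points above.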