MaterialFigBENCH: benchmark dataset with figures for evaluating college-level materials science problem-solving abilities of multimodal large language models
arXiv cs.CL / 3/13/2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- MaterialFigBench is a benchmark dataset designed to evaluate multimodal large language models on university-level materials science problems that require accurate interpretation of figures such as phase diagrams and diffusion patterns.
- The dataset comprises 137 free-response problems adapted from standard materials science textbooks, spanning topics including crystal structures, mechanical properties, diffusion, phase diagrams, phase transformations, and electronic properties.
- To mitigate ambiguity in reading numerical values from images, expert-defined answer ranges are provided where appropriate.
- The paper evaluates several state-of-the-art multimodal LLMs, including ChatGPT and GPT-series models accessed via the OpenAI APIs, and analyzes performance across problem categories and model versions.
- Findings show that while overall accuracy improves with model updates, current LLMs still struggle with genuine visual understanding and quantitative interpretation of materials science figures; many correct answers stem from memorized domain knowledge rather than actual image reading. The benchmark exposes weaknesses in visual reasoning, numerical precision, and significant-digit handling, while also identifying problem types where performance has improved.
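The expert-defined answer ranges mentioned above can be illustrated with a minimal grading sketch. This is an assumption about how such range-based scoring might work, not the paper's actual evaluation code; the function name and the phase-diagram example values are hypothetical.

```python
# Hypothetical sketch of range-based grading for figure-reading answers.
# MaterialFigBench provides expert-defined ranges to absorb the ambiguity
# of reading numeric values off an image; this mimics that idea.

def grade_numeric_answer(predicted: float, low: float, high: float) -> bool:
    """Accept a model's numeric answer if it falls within the expert range."""
    return low <= predicted <= high

# Example (invented values): reading a eutectic temperature off a phase
# diagram, where experts might allow 180-185 degrees C for pixel-level ambiguity.
print(grade_numeric_answer(183.0, 180.0, 185.0))  # True
print(grade_numeric_answer(190.0, 180.0, 185.0))  # False
```

A tolerance band like this rewards approximately correct figure reading without penalizing models for sub-pixel disagreements that even human experts would not resolve identically.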