ChemVLR: Prioritizing Reasoning in Perception for Chemical Vision-Language Understanding

arXiv cs.CL / 4/9/2026


Key Points

  • ChemVLR is introduced as a chemical vision-language model that emphasizes interpretable reasoning during perception, rather than directly answering visual chemical questions as black boxes do.
  • The model performs fine-grained analysis by first explicitly identifying granular chemical descriptors (e.g., functional groups) before generating answers, aiming to expose reasoning paths for reaction and molecular understanding.
  • It uses a cross-modality reverse-engineering strategy plus a rigorous filtering pipeline to build a large-scale reasoning-and-captioning dataset with 760k high-quality samples covering molecular and reaction tasks.
  • A three-stage training framework is proposed to progressively develop perception and reasoning capabilities, supported by ablation studies validating the training and data generation choices.
  • Reported experiments claim state-of-the-art results, outperforming both proprietary models and domain-specific open-source baselines, with code and model weights planned for release on GitHub.
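The reason-before-answer flow described above can be sketched as a two-call inference loop: first elicit granular descriptors (e.g., functional groups) from the image, then condition the final answer on them. This is a minimal illustrative sketch, not the paper's actual implementation; the `model` callable and its prompts are hypothetical stand-ins for a real VLM API.

```python
def describe_then_answer(model, image, question):
    """Two-stage inference: extract chemical descriptors first, then answer.

    `model` is a hypothetical callable (image, prompt) -> str standing in
    for a real vision-language model API.
    """
    # Stage 1: explicit perception — name the granular descriptors.
    descriptors = model(image, "List the functional groups and key substructures visible.")
    # Stage 2: reasoning conditioned on the stated descriptors.
    prompt = (
        f"Observed descriptors: {descriptors}\n"
        f"Question: {question}\n"
        "Reason step by step from the descriptors, then give the answer."
    )
    return model(image, prompt)


# Stub model for demonstration only (a real VLM would go here).
def stub_model(image, prompt):
    if "functional groups" in prompt:
        return "carbonyl (C=O), hydroxyl (-OH)"
    return "Answer: carboxylic acid"


print(describe_then_answer(stub_model, "mol.png", "What class is this molecule?"))
```

The point of the two calls is that the intermediate descriptor string becomes part of the visible output, so the reasoning path can be inspected rather than hidden inside a single black-box answer.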

Abstract

While Vision-Language Models (VLMs) have demonstrated significant potential in chemical visual understanding, current models are predominantly optimized for direct visual question-answering tasks. This paradigm often results in "black-box" systems that fail to utilize the inherent capability of Large Language Models (LLMs) to infer underlying reaction mechanisms. In this work, we introduce ChemVLR, a chemical VLM designed to prioritize reasoning within the perception process. Unlike conventional chemical VLMs, ChemVLR analyzes visual inputs in a fine-grained manner by explicitly identifying granular chemical descriptors, such as functional groups, prior to generating answers. This approach ensures the production of explicit and interpretable reasoning paths for complex visual chemical problems. To facilitate this methodology, we implement a cross-modality reverse-engineering strategy, combined with a rigorous filtering pipeline, to curate a large-scale reasoning-and-captioning dataset comprising 760k high-quality samples across molecular and reaction tasks. Furthermore, we adopt a three-stage training framework that systematically builds model perception and reasoning capacity. Experiments demonstrate that ChemVLR achieves state-of-the-art (SOTA) performance, surpassing both leading proprietary models and domain-specific open-source baselines. We also provide comprehensive ablation studies to validate our training strategy and data generation designs. Code and model weights will be available at https://github.com/xxlllz/ChemVLR.