GraphVLM: Benchmarking Vision Language Models for Multimodal Graph Learning

arXiv cs.CV · March 17, 2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • GraphVLM presents a systematic benchmark to evaluate vision-language models for multimodal graph learning.
  • It studies three integration paradigms (VLM-as-Encoder, VLM-as-Aligner, and VLM-as-Predictor), which respectively fuse multimodal features, bridge modalities for structured reasoning, and serve as backbones for graph learning.
  • Across six diverse datasets, experiments show that VLMs enhance multimodal graph learning in all three roles, with VLM-as-Predictor delivering the strongest and most consistent gains.
  • The benchmark code is publicly available on GitHub, enabling researchers to reproduce results and compare methods.

Abstract

Vision-Language Models (VLMs) have demonstrated remarkable capabilities in aligning and understanding multimodal signals, yet their potential to reason over structured data, where multimodal entities are connected through explicit relational graphs, remains largely underexplored. Unlocking this capability is crucial for real-world applications such as social networks, recommendation systems, and scientific discovery, where multimodal information is inherently structured. To bridge this gap, we present GraphVLM, a systematic benchmark designed to evaluate and harness the capabilities of VLMs for multimodal graph learning (MMGL). GraphVLM investigates three complementary paradigms for integrating VLMs with graph reasoning: (1) VLM-as-Encoder, which enriches graph neural networks through multimodal feature fusion; (2) VLM-as-Aligner, which bridges modalities in latent or linguistic space to facilitate LLM-based structured reasoning; and (3) VLM-as-Predictor, which directly employs VLMs as multimodal backbones for graph learning tasks. Extensive experiments across six datasets from diverse domains demonstrate that VLMs enhance multimodal graph learning via all three roles. Among these paradigms, VLM-as-Predictor achieves the most substantial and consistent performance gains, revealing the untapped potential of vision-language models as a new foundation for multimodal graph learning. The benchmark code is publicly available at https://github.com/oamyjin/GraphVLM.
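To make the three paradigms concrete, here is a minimal toy sketch of the first one, VLM-as-Encoder: multimodal (image + text) node attributes are fused into a single embedding, which then feeds one message-passing step of a GNN. All names, dimensions, and the random-projection "encoders" are hypothetical stand-ins, not the benchmark's actual models; the real paradigms would use pretrained VLM encoders (and, for VLM-as-Aligner / VLM-as-Predictor, an LLM reasoner or the VLM itself as the predictive backbone).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for a VLM's image and text encoders:
# random linear projections into a shared embedding space.
D_IMG, D_TXT, D_EMB = 8, 6, 4
W_img = rng.normal(size=(D_IMG, D_EMB))
W_txt = rng.normal(size=(D_TXT, D_EMB))

def vlm_encode(img_feat, txt_feat):
    """VLM-as-Encoder: fuse per-node image and text features into one embedding."""
    return img_feat @ W_img + txt_feat @ W_txt

# A toy 3-node graph with multimodal node attributes.
imgs = rng.normal(size=(3, D_IMG))
txts = rng.normal(size=(3, D_TXT))
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)  # adjacency: node 0 linked to 1 and 2

X = vlm_encode(imgs, txts)              # fused node features, shape (3, D_EMB)

# One mean-aggregation message-passing step of a GNN over the fused features.
deg = A.sum(axis=1, keepdims=True)
H = (A @ X) / np.maximum(deg, 1)

print(H.shape)  # (3, 4): one aggregated embedding per node
```

Since nodes 1 and 2 each have node 0 as their only neighbor, their aggregated embeddings come out identical, illustrating that after fusion the graph structure alone drives the message-passing step.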