
Are a Thousand Words Better Than a Single Picture? Beyond Images -- A Framework for Multi-Modal Knowledge Graph Dataset Enrichment

arXiv cs.CV / March 19, 2026


Key Points

  • Beyond Images introduces a three-stage data-centric pipeline for enriching multi-modal knowledge graphs: large-scale retrieval of additional entity-related images, conversion of all visuals into textual descriptions, and an LLM-based fusion that generates concise, entity-aligned summaries.
  • The approach converts ambiguous or noisy visuals into text so that they contribute usable semantics rather than noise, without changing standard MMKG model architectures or loss functions.
  • Empirical results show consistent gains across three public MMKG datasets and multiple baselines, with improvements of up to 7% in Hits@1 overall, and dramatic relative improvements on visually ambiguous logos and symbols (201.35% in MRR and 333.33% in Hits@1).
  • A lightweight Text-Image Consistency Check Interface is released for optional targeted audits to improve description quality and dataset reliability.
  • The work is accompanied by code, datasets, and supplementary materials at the project repository, underscoring the practicality of scaling image coverage and text-based descriptions for MMKG completion.

Abstract

Multi-Modal Knowledge Graphs (MMKGs) benefit from visual information, yet large-scale image collection is hard to curate and often excludes ambiguous but relevant visuals (e.g., logos, symbols, abstract scenes). We present Beyond Images, an automatic data-centric enrichment pipeline with optional human auditing. This pipeline operates in three stages: (1) large-scale retrieval of additional entity-related images, (2) conversion of all visual inputs into textual descriptions to ensure that ambiguous images contribute usable semantics rather than noise, and (3) fusion of multi-source descriptions using a large language model (LLM) to generate concise, entity-aligned summaries. These summaries replace or augment the text modality in standard MMKG models without changing their architectures or loss functions. Across three public MMKG datasets and multiple baseline models, we observe consistent gains (up to 7% Hits@1 overall). Furthermore, on a challenging subset of entities with visually ambiguous logos and symbols, converting images into text yields large improvements (201.35% MRR and 333.33% Hits@1). Additionally, we release a lightweight Text-Image Consistency Check Interface for optional targeted audits, improving description quality and dataset reliability. Our results show that scaling image coverage and converting ambiguous visuals into text is a practical path to stronger MMKG completion. Code, datasets, and supplementary materials are available at https://github.com/pengyu-zhang/Beyond-Images.
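The three-stage pipeline described above can be sketched as follows. This is a minimal illustrative skeleton, not the authors' released implementation: all function names are hypothetical, and the retrieval, captioning, and LLM-fusion stages are stubbed with placeholders where the real pipeline would call an image search backend, a captioning model, and an LLM.

```python
# Hypothetical sketch of the Beyond Images enrichment pipeline.
# Stage bodies are stubs; in practice each would call external models/services.

def retrieve_images(entity: str) -> list[str]:
    """Stage 1: large-scale retrieval of additional entity-related images.
    Stubbed with placeholder image identifiers."""
    return [f"{entity}_img_{i}.jpg" for i in range(3)]

def describe_image(image: str) -> str:
    """Stage 2: convert a visual into a textual description, so ambiguous
    images (logos, symbols) contribute semantics rather than noise.
    Stubbed; the real pipeline would use an image-to-text model."""
    return f"description of {image}"

def fuse_descriptions(entity: str, descriptions: list[str]) -> str:
    """Stage 3: LLM-based fusion of multi-source descriptions into one
    concise, entity-aligned summary. Stubbed as simple concatenation."""
    return f"{entity}: " + "; ".join(descriptions)

def enrich_entity(entity: str) -> str:
    """Run all three stages; the output text can replace or augment the
    text modality fed to an unmodified MMKG model."""
    images = retrieve_images(entity)
    descriptions = [describe_image(img) for img in images]
    return fuse_descriptions(entity, descriptions)

summary = enrich_entity("Eiffel_Tower")
print(summary)
```

The key design point the sketch mirrors is that enrichment happens entirely on the data side: downstream MMKG models consume the fused summary as ordinary text, with no architecture or loss changes.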