Collaboration of Fusion and Independence: Hypercomplex-driven Robust Multi-Modal Knowledge Graph Completion

arXiv cs.CL / 4/20/2026


Key Points

  • Multi-modal knowledge graph completion (MMKGC) seeks to predict missing facts in multi-modal knowledge graphs by using both graph structure and entity information across modalities.
  • Prior approaches are split between fusion-based methods that can discard modality-specific details via fixed fusion, and ensemble-based methods that keep modalities independent but may miss context-dependent cross-modal semantic interactions.
  • The paper introduces M-Hyper, a hypercomplex-driven model that jointly supports both fused and independent modality representations to enable flexible cross-modal collaboration.
  • Building on quaternion and biquaternion algebra, M-Hyper uses orthogonal bases to represent multiple independent modalities and a Hamilton product to model pair-wise modality interactions efficiently.
  • It proposes a Fine-grained Entity Representation Factorization (FERF) module and a Robust Relation-aware Modality Fusion (R2MF) module to generate robust representations for three independent modalities plus one fused modality; experiments show state-of-the-art performance along with robustness and computational efficiency.
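The Hamilton product is what makes the four quaternion bases interact pair-wise: every output component mixes a distinct signed pairing of the input components, so no modality channel is left out of the interaction. A minimal NumPy sketch of the product itself (illustrative only, not the paper's code):

```python
import numpy as np

def hamilton_product(p, q):
    """Hamilton product of quaternions p = a + b*i + c*j + d*k,
    each given as a length-4 array (a, b, c, d)."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,   # real part
        a1*b2 + b1*a2 + c1*d2 - d1*c2,   # i coefficient
        a1*c2 - b1*d2 + c1*a2 + d1*b2,   # j coefficient
        a1*d2 + b1*c2 - c1*b2 + d1*a2,   # k coefficient
    ])

# Basis behavior: i * j = k, and the product is non-commutative (j * i = -k).
i = np.array([0., 1., 0., 0.])
j = np.array([0., 0., 1., 0.])
print(hamilton_product(i, j))  # [0. 0. 0. 1.]
```

The non-commutativity is a feature here: it lets an interaction distinguish which modality acts on which, something a plain element-wise fusion cannot express.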

Abstract

Multi-modal knowledge graph completion (MMKGC) aims to discover missing facts in multi-modal knowledge graphs (MMKGs) by leveraging both structural relationships and diverse modality information of entities. Existing MMKGC methods follow two multi-modal paradigms: fusion-based and ensemble-based. Fusion-based methods employ fixed fusion strategies, which inevitably lead to the loss of modality-specific information and lack the flexibility to adapt to varying modality relevance across contexts. In contrast, ensemble-based methods retain modality independence through dedicated sub-models but struggle to capture the nuanced, context-dependent semantic interplay between modalities. To overcome these dual limitations, we propose M-Hyper, a novel MMKGC method that achieves the coexistence and collaboration of fused and independent modality representations. Our method integrates the strengths of both paradigms, enabling effective cross-modal interactions while maintaining modality-specific information. Inspired by quaternion algebra, we utilize its four orthogonal bases to represent multiple independent modalities and employ the Hamilton product to efficiently model pair-wise interactions among them. Specifically, we introduce a Fine-grained Entity Representation Factorization (FERF) module and a Robust Relation-aware Modality Fusion (R2MF) module to obtain robust representations for three independent modalities and one fused modality. The resulting four modality representations are then mapped to the four orthogonal bases of a biquaternion (a hypercomplex extension of the quaternion) for comprehensive modality interaction. Extensive experiments demonstrate its state-of-the-art performance, robustness, and computational efficiency.
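For intuition on the biquaternion step: a biquaternion is simply a quaternion whose four coefficients are complex numbers, and the Hamilton product formula carries over unchanged. The sketch below maps four hypothetical modality embeddings (structural, visual, textual, fused — these names and the QuatE-style triple score are assumptions for illustration, not M-Hyper's actual notation or scoring function) onto the four bases:

```python
import numpy as np

def hamilton(p, q):
    # Element-wise Hamilton product of quaternion-valued vectors given as
    # (4, d) arrays of components (1, i, j, k). The same formula works for
    # complex coefficients, i.e. biquaternions.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.stack([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])

rng = np.random.default_rng(0)
d = 8  # embedding dimension per component (illustrative)

def rand_biquat():
    # Hypothetical stand-in for (structural, visual, textual, fused)
    # modality embeddings as the four complex-valued components.
    return rng.normal(size=(4, d)) + 1j * rng.normal(size=(4, d))

head, rel, tail = rand_biquat(), rand_biquat(), rand_biquat()

# Illustrative QuatE-style triple score: transform the head by the
# relation via the Hamilton product, then compare with the tail.
score = float(np.real(np.sum(hamilton(head, rel) * np.conj(tail))))
```

Because all four components participate in every output term of the product, the fused representation and the three independent ones interact in a single algebraic operation rather than through separate fusion or ensemble pathways.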