CID-TKG: Collaborative Historical Invariance and Evolutionary Dynamics Learning for Temporal Knowledge Graph Reasoning

arXiv cs.AI / April 14, 2026


Key Points

  • The paper addresses temporal knowledge graph (TKG) reasoning, which predicts future facts at unseen timestamps despite entities and relations evolving over time.
  • It introduces CID-TKG, a collaborative learning framework that explicitly incorporates historical invariance (long-term structural regularities) and evolutionary dynamics (short-term temporal transitions) as inductive biases.
  • CID-TKG builds two separate graphs—a historical invariance graph and an evolutionary dynamics graph—and uses dedicated encoders to learn representations from each, so that long-term regularities and short-term transitions are modeled separately rather than conflated.
  • To reduce semantic mismatches between the two graph “views,” the method decomposes relations into view-specific representations and aligns query representations across views using a contrastive learning objective.
  • Experiments report state-of-the-art performance for extrapolation settings, suggesting better generalization to unseen future times than prior approaches.
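The paper does not spell out how the two graph views are constructed, but the split described above can be sketched as follows. This is a minimal, hypothetical reading: the invariance view counts how often each fact recurs over the whole history (a proxy for long-term regularities), while the dynamics view keeps only facts from the last few timestamps. The function name, the `window` parameter, and the exact constructions are illustrative assumptions, not the paper's definitions.

```python
from collections import Counter

def build_views(quads, t_query, window=3):
    """Split timestamped facts (s, r, o, t) into two hypothetical graph views.

    Historical invariance view: all facts before t_query, weighted by how
    often each (s, r, o) triple recurs -- a proxy for long-term regularities.
    Evolutionary dynamics view: only facts from the last `window` timestamps
    before t_query, capturing short-term transitions.
    """
    past = [(s, r, o, t) for (s, r, o, t) in quads if t < t_query]
    # Long-term view: recurrence counts over the whole history.
    invariance = Counter((s, r, o) for (s, r, o, _) in past)
    # Short-term view: edges from the most recent `window` snapshots.
    keep = set(sorted({t for (_, _, _, t) in past})[-window:])
    dynamics = [(s, r, o, t) for (s, r, o, t) in past if t in keep]
    return invariance, dynamics
```

Keeping the two views as separate structures (one frequency-weighted, one time-stamped) is what allows the dedicated encoders mentioned above to specialize.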

Abstract

Temporal knowledge graph (TKG) reasoning aims to infer future facts at unseen timestamps from temporally evolving entities and relations. Despite recent progress, existing approaches still suffer from inherent limitations due to their inductive biases, as they predominantly rely on time-invariant or weakly time-dependent structures and overlook evolutionary dynamics. To overcome this limitation, we propose a novel collaborative learning framework for TKG reasoning (dubbed CID-TKG) that integrates evolutionary dynamics and historical invariance semantics as an effective inductive bias for reasoning. Specifically, CID-TKG constructs a historical invariance graph to capture long-term structural regularities and an evolutionary dynamics graph to model short-term temporal transitions. Dedicated encoders are then employed to learn representations from each structure. To alleviate semantic discrepancies across the two structures, we decompose relations into view-specific representations and align view-specific query representations via a contrastive objective, which promotes cross-view consistency while suppressing view-specific noise. Extensive experiments verify that our CID-TKG achieves state-of-the-art performance under extrapolation settings.
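The contrastive alignment step in the abstract can be illustrated with a standard symmetric InfoNCE objective over the two view-specific query representations: matching rows (the same query seen through both views) act as positives, all other rows as negatives. This is a generic sketch of that kind of objective, not the paper's exact loss; the temperature `tau` and the symmetric averaging are illustrative assumptions.

```python
import numpy as np

def info_nce(z_inv, z_dyn, tau=0.1):
    """Symmetric InfoNCE loss aligning view-specific query embeddings.

    z_inv, z_dyn: (N, d) arrays of query representations from the
    invariance and dynamics views. Row i of each matrix corresponds to
    the same query, so diagonal pairs are positives, the rest negatives.
    """
    # L2-normalise rows so the dot product is cosine similarity.
    a = z_inv / np.linalg.norm(z_inv, axis=1, keepdims=True)
    b = z_dyn / np.linalg.norm(z_dyn, axis=1, keepdims=True)
    logits = a @ b.T / tau  # (N, N) cross-view similarity matrix

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))        # positives on the diagonal

    # Average over both matching directions (inv->dyn and dyn->inv).
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls the two representations of the same query together while pushing apart representations of different queries, which matches the stated goal of promoting cross-view consistency while suppressing view-specific noise.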