FOCAL-Attention for Heterogeneous Multi-Label Prediction

arXiv cs.LG / 4/22/2026


Key Points

  • The paper studies multi-label node classification on heterogeneous graphs, highlighting difficulties from structural heterogeneity and the need to share representations across labels.
  • It analyzes why current approaches can fail: expanding neighborhoods can dilute attention to primary (task-critical) regions, and meta-path constraints create a tradeoff between insufficient coverage and semantic dilution.
  • The authors propose FOCAL (Fusion Of Coverage and Anchoring Learning) to address the coverage–anchoring conflict by combining two attention mechanisms.
  • FOCAL uses coverage-oriented attention (COA) for flexible aggregation over heterogeneous context, and anchoring-oriented attention (AOA) to restrict aggregation to meta-path-induced primary semantics.
  • The paper reports both theoretical justification and experimental results showing FOCAL outperforms existing state-of-the-art methods for the task.
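The dilution effect in the second bullet can be illustrated numerically: with softmax attention, holding a small set of task-critical neighbors fixed while the surrounding heterogeneous neighborhood grows shrinks the attention mass those primary neighbors receive. This toy sketch (plain NumPy, not the paper's model; the score values are illustrative assumptions) shows the effect:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

k = 5  # number of task-critical (primary) neighbors, held fixed

for n_extra in (5, 50, 500):
    # Primary neighbors score slightly higher (1.0) than the
    # n_extra heterogeneous context neighbors (0.0).
    scores = np.concatenate([np.full(k, 1.0), np.full(n_extra, 0.0)])
    primary_mass = softmax(scores)[:k].sum()
    print(n_extra, round(primary_mass, 3))
# → 5 0.731
# → 50 0.214
# → 500 0.026
```

Even with a fixed score advantage, the primary attention mass decays roughly like k/N as the neighborhood expands, which is the failure mode that motivates anchoring.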

Abstract

Heterogeneous graphs have attracted increasing attention for modeling multi-typed entities and relations in complex real-world systems. Multi-label node classification on heterogeneous graphs is challenging due to structural heterogeneity and the need to learn shared representations across multiple labels. Existing methods typically adopt either flexible attention mechanisms or meta-path-constrained anchoring, but in heterogeneous multi-label prediction they often suffer from semantic dilution or coverage constraints. Both issues are further amplified under multi-label supervision. We present a theoretical analysis showing that as heterogeneous neighborhoods expand, the attention mass allocated to task-critical (primary) neighborhoods diminishes, and that meta-path-constrained aggregation exhibits a dilemma: too few meta-paths intensify the coverage constraint, while too many re-introduce dilution. To resolve this coverage–anchoring conflict, we propose FOCAL (Fusion Of Coverage and Anchoring Learning), with two components: coverage-oriented attention (COA) for flexible, unconstrained heterogeneous context aggregation, and anchoring-oriented attention (AOA) that restricts aggregation to meta-path-induced primary semantics. Our theoretical analysis and experimental results further indicate that FOCAL outperforms other state-of-the-art methods.
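The abstract's two-branch design can be sketched as follows. This is a minimal interpretation, not the paper's implementation: `attend` is generic scaled dot-product attention, the meta-path neighbor mask and the fixed fusion gate are illustrative assumptions standing in for whatever learned machinery FOCAL actually uses.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(q, K, V, mask=None):
    """Scaled dot-product attention over one node's neighbor set.

    mask (bool array over neighbors) restricts aggregation to a subset,
    e.g. meta-path-induced neighbors for the anchoring branch.
    """
    scores = K @ q / np.sqrt(q.size)
    if mask is not None:
        scores = np.where(mask, scores, -np.inf)  # drop non-anchored neighbors
    return softmax(scores) @ V

rng = np.random.default_rng(0)
d, n = 8, 12
q = rng.normal(size=d)        # target-node query
K = rng.normal(size=(n, d))   # keys for all heterogeneous neighbors
V = rng.normal(size=(n, d))   # neighbor messages

metapath_mask = np.zeros(n, dtype=bool)
metapath_mask[:4] = True      # hypothetical meta-path-induced primary neighbors

h_coa = attend(q, K, V)                      # COA: full heterogeneous coverage
h_aoa = attend(q, K, V, mask=metapath_mask)  # AOA: anchored to primary semantics
gate = 0.5                                   # stand-in for a learned fusion weight
h = gate * h_coa + (1 - gate) * h_aoa        # fused node representation
print(h.shape)
```

The point of the sketch is the division of labor: COA keeps coverage of the whole heterogeneous context, AOA keeps a guaranteed share of attention anchored on primary neighbors, and the fusion resolves the conflict rather than forcing one branch to do both.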