Mutually Causal Semantic Distillation Network for Zero-Shot Learning

arXiv cs.CV / 3/19/2026

Key Points

  • The work identifies limitations of prior unidirectional attention methods in zero-shot learning and proposes a mutually causal framework to distill semantic knowledge between visual and attribute features.
  • MSDN++ comprises two sub-nets: an attribute-to-visual causal attention path and a visual-to-attribute causal attention path, encouraging mutual learning of causal vision-attribute associations.
  • A semantic distillation loss guides the two sub-nets to teach each other during training, yielding more reliable semantic representations.
  • Experiments on the CUB, SUN, AWA2, and FLO benchmarks show significant improvements over strong baselines, setting new state-of-the-art results.
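The article does not give the exact form of the semantic distillation loss, but a common choice for two branches that "teach each other" is a symmetric KL divergence between their class-score distributions. The sketch below is a hypothetical illustration under that assumption, in plain Python; the function and variable names are not from the paper.

```python
import math

def softmax(scores):
    """Turn raw class scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def semantic_distillation_loss(scores_a2v, scores_v2a):
    """Hypothetical mutual-distillation loss: each sub-net's predicted
    distribution is pulled toward the other's, so the attribute->visual
    and visual->attribute paths teach each other during training."""
    p = softmax(scores_a2v)  # attribute->visual sub-net class scores
    q = softmax(scores_v2a)  # visual->attribute sub-net class scores
    return 0.5 * (kl_divergence(p, q) + kl_divergence(q, p))
```

When the two sub-nets agree exactly, the loss is zero; it grows as their predicted distributions diverge, which is what drives the mutual learning signal.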

Abstract

Zero-shot learning (ZSL) aims to recognize unseen classes in the open world, guided by side information (e.g., attributes). Its key task is to infer the latent semantic knowledge between visual and attribute features on seen classes, and thus to conduct a desirable semantic knowledge transfer from seen classes to unseen ones. Prior works simply utilize unidirectional attention in a weakly supervised manner, learning spurious and limited latent semantic representations that fail to effectively discover the intrinsic semantic knowledge (e.g., attribute semantics) between visual and attribute features. To solve these challenges, we propose a mutually causal semantic distillation network (termed MSDN++) to distill intrinsic and sufficient semantic representations for ZSL. MSDN++ consists of an attribute→visual causal attention sub-net that learns attribute-based visual features, and a visual→attribute causal attention sub-net that learns visual-based attribute features. The causal attention encourages the two sub-nets to learn causal vision-attribute associations, representing reliable features through causal visual/attribute learning. Guided by a semantic distillation loss, the two mutual attention sub-nets learn collaboratively and teach each other throughout the training process. Extensive experiments on four widely used benchmark datasets (CUB, SUN, AWA2, and FLO) show that our MSDN++ yields significant improvements over strong baselines, leading to new state-of-the-art performance.
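The abstract describes two cross-modal attention paths (attribute→visual and visual→attribute) without giving their equations. A minimal sketch of one such path, assuming standard dot-product cross-attention, is shown below; all names are illustrative, not the paper's implementation.

```python
import math

def softmax(xs):
    """Normalize a list of scores into attention weights."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    z = sum(es)
    return [e / z for e in es]

def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def cross_attention(query, keys, values):
    """One attention path: a query from one modality attends over the
    other modality's keys, returning a weighted sum of its values.
    For the attribute->visual sub-net, `query` would be an attribute
    embedding and `keys`/`values` visual region features; the
    visual->attribute sub-net swaps the roles."""
    weights = softmax([dot(query, k) for k in keys])
    dim = len(values[0])
    return [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]
```

Running both directions and comparing their outputs (e.g., via a distillation loss) is what makes the two sub-nets mutual rather than unidirectional.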