Channel Attention-Guided Cross-Modal Knowledge Distillation for Referring Image Segmentation

arXiv cs.CV / 4/21/2026

📰 News · Models & Research

Key Points

  • Referring image segmentation (RIS) is described as a cross-modal task that links language descriptions to precise target-region segmentation in images.
  • The paper addresses the deployment challenge of large vision-language models by proposing a channel attention-guided cross-modal knowledge distillation approach.
  • The method transfers high-order fine-grained vision-language correlations from a teacher model, along with semantic component correlations captured per channel, to a smaller student model.
  • Compared with pixel-wise relational distillation, the approach aims to reduce the transfer of the teacher's learning bias while preserving some of the student's autonomy in learning.
  • Experiments on two public datasets indicate that the student model gains significant performance improvements without adding inference-time parameters.

Abstract

Referring image segmentation (RIS) requires accurately segmenting the target regions of an image according to a language description, making it a cross-modal task that integrates vision and language. Existing RIS methods typically employ large-scale vision and language encoders to improve performance, but their enormous parameter counts severely restrict deployment in scenarios with limited computing resources. To address this problem, this paper proposes a channel attention-guided cross-modal knowledge distillation method, which transfers to the student network both the high-order, fine-grained correlations between vision and language learned by the teacher network and the correlations between the semantic components represented by each channel. Compared with traditional pixel-wise relational distillation, this method not only enables the student to learn the teacher's knowledge but also retains part of the student's capacity for independent learning, alleviating the transfer of the teacher's learning bias. Experimental results on two public datasets show that the proposed distillation method introduces no additional parameters during inference and achieves significant performance improvements for the student model.
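To make the channel-level idea concrete, here is a minimal NumPy sketch of an attention-weighted channel-correlation distillation loss. The specific attention form (softmax over global-average-pooled teacher activations), the row-normalized channel Gram matrix, and the per-channel weighting are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def channel_attention(feat):
    """Channel weights from teacher features: softmax over global-average-pooled
    activations (an assumed form of the channel attention)."""
    pooled = feat.reshape(feat.shape[0], -1).mean(axis=1)      # (C,)
    e = np.exp(pooled - pooled.max())
    return e / e.sum()                                         # (C,), sums to 1

def channel_correlation(feat):
    """Inter-channel (semantic-component) correlation matrix from L2-normalized
    per-channel feature vectors."""
    f = feat.reshape(feat.shape[0], -1)                        # (C, H*W)
    f = f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-8)
    return f @ f.T                                             # (C, C)

def distill_loss(teacher_feat, student_feat):
    """Squared error between teacher and student channel correlations, with each
    row weighted by the teacher's channel attention."""
    w = channel_attention(teacher_feat)                        # (C,)
    g_t = channel_correlation(teacher_feat)
    g_s = channel_correlation(student_feat)
    diff = (g_t - g_s) ** 2
    return float((w[:, None] * diff).mean())

# Toy features: 8 channels over a 4x4 spatial grid.
rng = np.random.default_rng(0)
t = rng.standard_normal((8, 4, 4))
s = rng.standard_normal((8, 4, 4))
loss = distill_loss(t, s)
```

Because the loss is computed only on feature correlations during training, the student's architecture is unchanged at inference time, which matches the paper's claim of adding no inference-time parameters.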