AI Navigate

Training-Only Heterogeneous Image-Patch-Text Graph Supervision for Advancing Few-Shot Learning Adapters

arXiv cs.CV / 3/20/2026


Key Points

  • The authors introduce a training-only heterogeneous image-patch-text graph teacher that captures cross-modal relations among multi-scale visual patches and text prompts.
  • The teacher uses a Modality-aware Graph Transformer to perform deep cross-modal reasoning and applies discriminative node filtering to extract high-fidelity class features.
  • They employ a cache-aware dual-objective strategy to distill the teacher's relational knowledge into the Tip-Adapter's key-value cache, upgrading its prototypes; the graph teacher is discarded at test time, adding no inference cost.
  • Experiments on standard 1-16-shot benchmarks report state-of-the-art performance, and ablations show the importance of auxiliary graph supervision, text-guided reasoning, and node filtering.
  • Code is available at https://github.com/MR-Sherif/TOGA.git.
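The Tip-Adapter cache mentioned above follows the original Tip-Adapter design: the keys are L2-normalized support-image features, the values are one-hot labels, and a query is classified by blending CLIP's zero-shot text logits with a similarity-weighted lookup over the cache. A minimal NumPy sketch of that inference path (the `alpha`/`beta` defaults are illustrative, not the paper's tuned values):

```python
import numpy as np

def tip_adapter_logits(query, cache_keys, cache_values, text_weights,
                       alpha=1.0, beta=5.5):
    """Tip-Adapter-style prediction for a single query feature.

    cache_keys   : (N, d) L2-normalized support features (the "keys")
    cache_values : (N, C) one-hot labels of the supports (the "values")
    text_weights : (d, C) CLIP text classifier (class prompt embeddings)
    """
    q = query / np.linalg.norm(query)            # normalize the query feature
    affinity = q @ cache_keys.T                  # cosine similarity to each key
    # sharpened affinities weight the cached one-hot labels
    cache_logits = np.exp(-beta * (1.0 - affinity)) @ cache_values
    clip_logits = 100.0 * (q @ text_weights)     # CLIP zero-shot branch
    return clip_logits + alpha * cache_logits    # residual blend of both branches
```

Because the graph teacher only rewrites the cached keys/values during training, this lookup is exactly what runs at test time, which is why the method adds zero latency or memory over Tip-Adapter.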

Abstract

Recent adapter-based CLIP tuning methods (e.g., Tip-Adapter) are strong few-shot learners, achieving efficiency by caching support features for fast prototype matching. However, these methods rely on global uni-modal feature vectors, overlooking fine-grained patch relations and their structural alignment with class text. To bridge this gap without incurring inference costs, we introduce a novel asymmetric training-only framework. Instead of altering the lightweight adapter, we construct a high-capacity auxiliary Heterogeneous Graph Teacher that operates solely during training. This teacher (i) integrates multi-scale visual patches and text prompts into a unified graph, (ii) performs deep cross-modal reasoning via a Modality-aware Graph Transformer (MGT), and (iii) applies discriminative node filtering to extract high-fidelity class features. Crucially, we employ a cache-aware dual-objective strategy to distill this relational knowledge directly into the Tip-Adapter's key-value cache, effectively upgrading the prototypes, while the graph teacher is discarded at test time. Thus, inference remains identical to Tip-Adapter with zero extra latency or memory. Across standard 1-16-shot benchmarks, our method consistently establishes a new state-of-the-art. Ablations confirm that the auxiliary graph supervision, text-guided reasoning, and node filtering are the essential ingredients for robust few-shot adaptation. Code is available at https://github.com/MR-Sherif/TOGA.git.
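The abstract does not spell out the dual-objective loss, but the general shape of such cache-aware supervision can be illustrated: one term aligns the learnable cache keys with the teacher's filtered class features, and a second term keeps the cache discriminative on training queries. Everything below (`dual_objective_loss`, the MSE alignment form, the `lam` weight, one key per class) is a hypothetical sketch, not the paper's actual objective:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dual_objective_loss(cache_keys, teacher_feats, query_feats, query_labels,
                        lam=0.5):
    """Illustrative cache-aware dual objective (hypothetical, simplified).

    cache_keys    : (C, d) learnable cache prototypes, one per class
    teacher_feats : (C, d) class features distilled by the graph teacher
    query_feats   : (B, d) training query features
    query_labels  : (B,)  integer class labels of the queries
    """
    # (1) alignment term: pull each cached key toward the teacher's class feature
    align = np.mean(np.sum((cache_keys - teacher_feats) ** 2, axis=1))
    # (2) task term: cross-entropy of similarity logits between queries and keys
    logits = query_feats @ cache_keys.T
    probs = softmax(logits)
    ce = -np.mean(np.log(probs[np.arange(len(query_labels)), query_labels] + 1e-12))
    return ce + lam * align
```

In a real setup the alignment term would be minimized with respect to the cache keys by gradient descent, after which the teacher (and this loss) are dropped entirely, leaving only the upgraded cache for inference.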