Knowledge is Power: Advancing Few-shot Action Recognition with Multimodal Semantics from MLLMs

arXiv cs.CV / 3/30/2026

📰 News

Key Points

  • The paper proposes FSAR-LLaVA, an end-to-end few-shot action recognition method that uses multimodal large language models (e.g., Video-LLaVA) as a multimodal knowledge base rather than relying on a weaker feature→caption→feature pipeline.

Abstract

Multimodal Large Language Models (MLLMs) have propelled the field of few-shot action recognition (FSAR). However, preliminary explorations in this area primarily focus on generating captions to form a suboptimal feature→caption→feature pipeline and adopt metric learning solely within the visual space. In this paper, we propose FSAR-LLaVA, the first end-to-end method to leverage MLLMs (such as Video-LLaVA) as a multimodal knowledge base for directly enhancing FSAR. First, at the feature level, we leverage the MLLM's multimodal decoder to extract spatiotemporally and semantically enriched representations, which are then decoupled and enhanced by our Multimodal Feature-Enhanced Module into distinct visual and textual features that fully exploit their semantic knowledge for FSAR. Next, we leverage the versatility of MLLMs to craft input prompts that flexibly adapt to diverse scenarios, and use their aligned outputs to drive our designed Composite Task-Oriented Prototype Construction, effectively bridging the distribution gap between meta-train and meta-test sets. Finally, to enable multimodal features to guide metric learning jointly, we introduce a training-free Multimodal Prototype Matching Metric that adaptively selects the most decisive cues and efficiently leverages the decoupled feature representations produced by MLLMs. Extensive experiments demonstrate superior performance across various tasks with minimal trainable parameters.
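The abstract's final component, a training-free metric that matches queries to class prototypes in both the visual and textual spaces and "adaptively selects the most decisive cues," can be illustrated with a minimal sketch. This is not the paper's implementation: the prototype construction here is a plain per-class mean (as in standard prototypical networks), and the "decisive cue" rule (picking, per query, the modality with the larger top-1 vs. top-2 similarity margin) is an assumed stand-in for whatever selection criterion FSAR-LLaVA actually uses. All function names are hypothetical.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Normalize rows so that dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

def build_prototypes(support_feats, support_labels, n_classes):
    # Per-class mean of support embeddings (standard prototypical-network
    # construction; the paper's Composite Task-Oriented Prototype
    # Construction is richer than this).
    return np.stack([support_feats[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def multimodal_prototype_match(query_vis, query_txt, proto_vis, proto_txt):
    """Training-free matching sketch: score each modality by cosine
    similarity to the class prototypes, then, per query, keep the modality
    whose score distribution is most decisive (largest top-1 vs. top-2
    margin) -- an assumed proxy for the paper's cue-selection rule."""
    sims_v = l2_normalize(query_vis) @ l2_normalize(proto_vis).T  # (Q, C)
    sims_t = l2_normalize(query_txt) @ l2_normalize(proto_txt).T  # (Q, C)
    preds = []
    for sv, st in zip(sims_v, sims_t):
        margin_v = np.sort(sv)[-1] - np.sort(sv)[-2]
        margin_t = np.sort(st)[-1] - np.sort(st)[-2]
        preds.append(int(np.argmax(sv if margin_v >= margin_t else st)))
    return np.array(preds)

# Toy 2-way episode: two support clips (one per class) and two queries,
# with identical visual and textual embeddings for simplicity.
support = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = np.array([0, 1])
protos = build_prototypes(support, labels, n_classes=2)
queries = np.array([[0.9, 0.1], [0.1, 0.9]])
preds = multimodal_prototype_match(queries, queries, protos, protos)
```

Because the metric is training-free, it adds no parameters on top of the decoupled features, which is consistent with the abstract's claim of "minimal trainable parameters."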
