SkeletonContext: Skeleton-side Context Prompt Learning for Zero-Shot Skeleton-based Action Recognition

arXiv cs.CV / 4/1/2026

Key Points

  • The paper addresses zero-shot skeleton-based action recognition by tackling the semantic gap caused by missing contextual cues (e.g., objects) when aligning motion features with text embeddings.
  • It proposes SkeletonContext, which enriches skeleton representations with language-driven context via a Cross-Modal Context Prompt Module: masked contextual prompts are reconstructed by a pretrained language model under guidance derived from LLMs (see the sketch after this list).
  • The method includes a Key-Part Decoupling Module to separate motion-relevant joints, improving robustness even when explicit object interactions are not present.
  • Experiments on multiple benchmarks show state-of-the-art results in both conventional and generalized zero-shot settings, particularly for fine-grained actions that look visually similar.
  • Overall, the approach demonstrates improved instance-level semantic grounding and cross-modal alignment by transferring contextual semantics from language into the skeleton encoder.
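
To make the prompt-reconstruction idea concrete, here is a minimal PyTorch sketch. It is a guess at the mechanics, not the paper's implementation: the class name, the prefix-style injection of skeleton features, and the HuggingFace-style masked-LM interface (`inputs_embeds`, `labels`, `.loss`) are all assumptions.

```python
import torch
import torch.nn as nn

class CrossModalContextPrompt(nn.Module):
    """Hypothetical sketch: reconstruct masked context words in an action
    prompt (e.g., the object in "drink from a cup") with a frozen masked
    language model, conditioned on skeleton features injected as a prefix.
    The reconstruction loss pushes contextual semantics into the skeleton
    encoder through the projection layer."""

    def __init__(self, mlm, skeleton_dim, text_dim):
        super().__init__()
        self.mlm = mlm  # pretrained masked LM, kept frozen
        for p in self.mlm.parameters():
            p.requires_grad = False
        self.proj = nn.Linear(skeleton_dim, text_dim)  # skeleton -> text space

    def forward(self, skel_feat, token_embeds, masked_labels):
        # skel_feat:     (B, D_skel) pooled skeleton representation
        # token_embeds:  (B, T, D_text) prompt embeddings, [MASK] at context words
        # masked_labels: (B, T) target token ids, -100 at unmasked positions
        prefix = self.proj(skel_feat).unsqueeze(1)          # (B, 1, D_text)
        inputs = torch.cat([prefix, token_embeds], dim=1)   # prepend prefix
        pad = torch.full_like(masked_labels[:, :1], -100)   # ignore prefix slot
        labels = torch.cat([pad, masked_labels], dim=1)
        out = self.mlm(inputs_embeds=inputs, labels=labels)
        return out.loss                                     # reconstruction loss
```

In training, this loss would be added to the usual skeleton-text alignment objective, so gradients flowing through `proj` push the skeleton encoder to carry whatever context the language model needs to fill in the masked words.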

Abstract

Zero-shot skeleton-based action recognition aims to recognize unseen actions by transferring knowledge from seen categories through semantic descriptions. Most existing methods align skeleton features with textual embeddings within a shared latent space. However, the absence of contextual cues, such as objects involved in the action, introduces an inherent gap between skeleton and semantic representations, making it difficult to distinguish visually similar actions. To address this, we propose SkeletonContext, a prompt-based framework that enriches skeletal motion representations with language-driven contextual semantics. Specifically, we introduce a Cross-Modal Context Prompt Module, which leverages a pretrained language model to reconstruct masked contextual prompts under guidance derived from LLMs. This design effectively transfers linguistic context to the skeleton encoder for instance-level semantic grounding and improved cross-modal alignment. In addition, a Key-Part Decoupling Module is incorporated to decouple motion-relevant joint features, ensuring robust action understanding even in the absence of explicit object interactions. Extensive experiments on multiple benchmarks demonstrate that SkeletonContext achieves state-of-the-art performance under both conventional and generalized zero-shot settings, validating its effectiveness in reasoning about context and distinguishing fine-grained, visually similar actions.
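
The Key-Part Decoupling Module is described only at a high level, but one plausible minimal reading is a learned per-joint gate that separates motion-relevant joints from the rest. The sketch below is purely illustrative; the gating scheme and all names are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class KeyPartDecoupling(nn.Module):
    """Hypothetical sketch: score each joint's motion relevance and pool
    a key-part stream and a residual stream separately, so recognition
    can lean on informative joints even without object interactions."""

    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # per-joint relevance score

    def forward(self, joint_feats):
        # joint_feats: (B, J, dim) per-joint features from the skeleton encoder
        w = torch.sigmoid(self.score(joint_feats))       # (B, J, 1) gates in [0, 1]
        key_part = (w * joint_feats).mean(dim=1)         # motion-relevant pooling
        residual = ((1.0 - w) * joint_feats).mean(dim=1) # everything else
        return key_part, residual
```

A soft sigmoid gate (rather than a hard top-k joint selection) keeps the split differentiable end to end, while the residual stream preserves whatever the gate suppresses.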