Towards Universal Skeleton-Based Action Recognition

arXiv cs.CV / 4/21/2026


Key Points

  • The paper targets “universal” skeleton-based action recognition in real-world robotics, where skeleton data can be heterogeneous due to different human and humanoid robot sources.
  • It introduces the Heterogeneous Open-Vocabulary (HOV) Skeleton dataset by integrating and refining multiple large-scale skeleton action datasets to support open-vocabulary settings.
  • The authors propose a Transformer-based framework with unified skeleton representation, a motion encoder for skeletons, and multi-grained motion–text alignment.
  • The approach uses multi-level contrastive learning (global, stream-specific, and fine-grained) to align learned motion representations with text embeddings.
  • Experiments on common benchmarks with heterogeneous skeletons show improved effectiveness and generalization, and the code is released on GitHub.
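The multi-level contrastive alignment described above can be illustrated with a small, hypothetical sketch. The function names, the InfoNCE formulation, the temperature value, and the equal level weights are all illustrative assumptions, not the paper's actual API or hyperparameters:

```python
# Hypothetical sketch of multi-grained motion-text contrastive alignment.
# Names, temperature, and level weights are assumptions for illustration only.
import numpy as np

def info_nce(motion, text, temperature=0.07):
    """Symmetric InfoNCE loss between L2-normalized motion and text embeddings."""
    motion = motion / np.linalg.norm(motion, axis=1, keepdims=True)
    text = text / np.linalg.norm(text, axis=1, keepdims=True)
    logits = motion @ text.T / temperature   # (N, N) similarity matrix
    labels = np.arange(len(motion))          # matched pairs lie on the diagonal

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)                  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()                   # diagonal = positives

    # average the motion->text and text->motion directions
    return 0.5 * (xent(logits) + xent(logits.T))

def multi_grained_loss(global_m, global_t, stream_m, stream_t, fine_m, fine_t,
                       weights=(1.0, 1.0, 1.0)):
    """Weighted sum of global, stream-specific, and fine-grained alignment losses."""
    losses = [info_nce(global_m, global_t),
              info_nce(stream_m, stream_t),
              info_nce(fine_m, fine_t)]
    return sum(w * l for w, l in zip(weights, losses))
```

Perfectly matched embeddings drive the diagonal similarities up and the loss toward zero, which is the behavior any of the three alignment levels would rely on.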

Abstract

With the development of robotics, skeleton-based action recognition has become increasingly important, as human-robot interaction requires understanding the actions of both humans and humanoid robots. Because human skeletons come from different sources and humanoid robots have varied structures, skeleton data are naturally heterogeneous. Previous works, however, overlook this heterogeneity and build models solely on homogeneous skeletons. Moreover, open-vocabulary action recognition is essential for real-world applications. To this end, this work studies the challenging problem of heterogeneous skeleton-based action recognition with open vocabularies. We construct a large-scale Heterogeneous Open-Vocabulary (HOV) Skeleton dataset by integrating and refining multiple representative large-scale skeleton-based action datasets. To address universal skeleton-based action recognition, we propose a Transformer-based model comprising three key components: a unified skeleton representation, a motion encoder for skeletons, and multi-grained motion-text alignment. The motion encoder feeds multi-modal skeleton embeddings into a two-stream Transformer-based encoder to learn spatio-temporal action representations, which are then mapped to a semantic space and aligned with text embeddings. Multi-grained motion-text alignment applies contrastive learning at three levels: global instance alignment, stream-specific alignment, and fine-grained alignment. Extensive experiments on popular benchmarks with heterogeneous skeleton data demonstrate both the effectiveness and the generalization ability of the proposed method. Code is available at https://github.com/jidongkuang/Universal-Skeleton.
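A "unified skeleton representation" for heterogeneous sources could, in spirit, map skeletons with different joint sets onto one canonical template. The sketch below is a minimal illustration under that assumption; the joint names, template size, and zero-fill-with-mask scheme are hypothetical and not taken from the paper:

```python
# Hypothetical sketch: mapping heterogeneous skeletons onto one unified joint
# template. Template joints and the masking scheme are illustrative assumptions.
import numpy as np

UNIFIED_JOINTS = ["pelvis", "spine", "neck", "head",
                  "l_shoulder", "l_elbow", "l_wrist",
                  "r_shoulder", "r_elbow", "r_wrist",
                  "l_hip", "l_knee", "l_ankle",
                  "r_hip", "r_knee", "r_ankle"]

def to_unified(frames, joint_names):
    """Map a (T, J, 3) skeleton with named joints onto the unified template.

    Returns (T, 16, 3) coordinates plus a (16,) boolean mask marking which
    template joints the source skeleton provides; missing joints are zero-filled.
    """
    T = frames.shape[0]
    out = np.zeros((T, len(UNIFIED_JOINTS), 3))
    mask = np.zeros(len(UNIFIED_JOINTS), dtype=bool)
    index = {name: j for j, name in enumerate(joint_names)}
    for u, name in enumerate(UNIFIED_JOINTS):
        if name in index:
            out[:, u] = frames[:, index[name]]   # copy the matching source joint
            mask[u] = True
    return out, mask
```

A humanoid robot with only six articulated joints would then share the same input shape as a full human skeleton, with the mask telling the encoder which positions carry real data.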