Action Motifs: Self-Supervised Hierarchical Representation of Human Body Movements

arXiv cs.CV / 5/1/2026


Key Points

  • The paper proposes a hierarchical representation of human motion using “Action Atoms” (atomic joint movements) and “Action Motifs” (temporally composed patterns shared across actions).
  • It introduces A4Mer, a nested latent Transformer that learns this structure from human 3D pose data in a fully self-supervised way by segmenting pose sequences into variable-length latent tokens.
  • The method uses a unified masked-token prediction pretext task in the latent spaces of both Action Atoms and Action Motifs to enable bottom-up temporal pattern discovery.
  • To support training and evaluation, the authors release the Action Motif Dataset (AMD), a multi-view human video dataset with full SMPL annotations; a novel use of foot-mounted cameras makes frame-wise labeling possible despite frequent, heavy body occlusions.
  • Experiments indicate A4Mer improves downstream human behavior modeling tasks such as action recognition, motion prediction, and motion interpolation by extracting meaningful Action Motifs.
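To make the pretext task in the key points concrete, here is a minimal, hypothetical sketch (not the authors' implementation): pool variable-length pose segments into single "Action Atom" tokens, then mask a fraction of them so a model can be trained to predict the masked tokens. The segment boundaries, pooling rule, and function names are all illustrative assumptions.

```python
import random

def pool_segments(poses, boundaries):
    """Pool each variable-length segment of a pose sequence into one
    'Action Atom' token (here simply the per-dimension mean of its frames).
    `poses` is a list of equal-length feature vectors; `boundaries` are
    segment start indices. Both are hypothetical interfaces."""
    tokens = []
    starts = list(boundaries) + [len(poses)]
    for s, e in zip(starts, starts[1:]):
        segment = poses[s:e]
        dim = len(segment[0])
        tokens.append([sum(f[d] for f in segment) / len(segment)
                       for d in range(dim)])
    return tokens

def mask_tokens(tokens, ratio=0.34, seed=0):
    """Replace a random subset of tokens with a zero 'mask' token and
    return (masked_tokens, masked_indices); a model would then be trained
    to reconstruct the tokens at the masked positions."""
    rng = random.Random(seed)
    k = max(1, int(ratio * len(tokens)))
    idx = sorted(rng.sample(range(len(tokens)), k))
    masked = [([0.0] * len(t) if i in idx else t)
              for i, t in enumerate(tokens)]
    return masked, idx

# Toy pose sequence: 6 frames of 2-D features, split into 3 segments.
poses = [[0.0, 0.0], [2.0, 2.0], [4.0, 4.0],
         [4.0, 4.0], [8.0, 0.0], [8.0, 0.0]]
tokens = pool_segments(poses, boundaries=[0, 2, 4])
masked, idx = mask_tokens(tokens)
```

Here `tokens` becomes `[[1.0, 1.0], [4.0, 4.0], [8.0, 0.0]]`; in the paper both the Action Atom and Action Motif latent spaces are trained with this same masked-prediction objective.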

Abstract

Effective human behavior modeling requires a representation of human body movement that capitalizes on its compositionality. We propose a hierarchical representation consisting of Action Atoms, which capture atomic joint movements, and Action Motifs, which are formed by their temporal compositions and encode similar body movements found across different overall human actions. We derive A4Mer, a nested latent Transformer that learns this hierarchical representation from human pose data in a fully self-supervised manner. A4Mer splits a 3D pose sequence into variable-length segments and represents each segment as a single latent token (an Action Atom). Through bottom-up representation learning, temporal patterns composed of these Action Atoms naturally emerge as Action Motifs: reusable, semantically meaningful spans of body movement. A4Mer achieves this with a unified pretext task of masked token prediction in the respective latent spaces. We also introduce the Action Motif Dataset (AMD), a large-scale dataset of multi-view human behavior videos with full SMPL annotations. We introduce a novel use of cameras, mounting them on the feet, to obtain frame-wise annotations despite frequent and heavy body occlusions. Experimental results demonstrate the effectiveness of A4Mer for extracting meaningful Action Motifs, which significantly benefit human behavior modeling tasks including action recognition, motion prediction, and motion interpolation.
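The abstract says A4Mer splits a pose sequence into variable-length segments before tokenizing them. The model learns these boundaries end-to-end; the toy heuristic below only illustrates the variable-length idea by starting a new segment whenever the frame-to-frame motion exceeds a threshold. The function name, distance measure, and threshold are assumptions for illustration.

```python
def segment_by_motion(poses, threshold=1.5):
    """Heuristic stand-in for A4Mer's learned segmentation: start a new
    variable-length segment whenever the frame-to-frame displacement
    (L1 distance between consecutive pose vectors) exceeds `threshold`.
    Returns the list of segment start indices."""
    boundaries = [0]
    for t in range(1, len(poses)):
        step = sum(abs(a - b) for a, b in zip(poses[t], poses[t - 1]))
        if step > threshold:
            boundaries.append(t)
    return boundaries

# Slow drift followed by an abrupt movement yields two segments,
# one short and one long, i.e. variable-length tokens.
poses = [[0.0, 0.0], [0.1, 0.0], [0.2, 0.0], [3.0, 0.0], [3.1, 0.0]]
print(segment_by_motion(poses))  # → [0, 3]
```

Each resulting segment would then be encoded as a single latent token, so slow, homogeneous motion costs few tokens while rapid motion is represented at finer temporal granularity.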