SASI: Leveraging Sub-Action Semantics for Robust Early Action Recognition in Human-Robot Interaction

arXiv cs.RO / 5/1/2026


Key Points

  • The paper addresses early human action recognition in human-robot interaction, aiming to identify actions from incomplete observations for rapid, proactive robot feedback.
  • It argues that leveraging sub-action semantics—since actions can be decomposed into smaller meaningful units—can provide richer hierarchical cues than approaches that only model whole actions.
  • The authors propose SASI, a framework that integrates graph convolution networks with sub-action semantic information via cross-modal fusion, using a segmentation model plus a skeleton-based graph convolution backbone.
  • SASI is reported to run in real time at 29 Hz and improves action recognition accuracy on the BABEL skeleton dataset with frame-level annotations, with further gains expected from better sub-action segmentation.
  • The method also shows strong performance on partial action sequences, supporting its suitability for robust early action recognition in proactive, seamless HRI.
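To make the fusion idea above concrete, here is a minimal, illustrative sketch (not the authors' code) of late cross-modal fusion: a skeleton feature vector and a sub-action semantic embedding are concatenated and projected to action logits by a dense layer. All dimensions, names, and the one-hot sub-action encoding are hypothetical.

```python
# Hedged sketch of cross-modal fusion: concatenate skeleton features with a
# sub-action semantic embedding, then apply a plain linear classifier.
# Sizes and the one-hot "reach" sub-action are made up for the demo.
import random

random.seed(0)

def linear(x, weights, bias):
    """Plain dense layer: y = W x + b."""
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def fuse_and_classify(skel_feat, subaction_embed, weights, bias):
    """Concatenate the two modalities and project to action logits."""
    fused = skel_feat + subaction_embed  # list concatenation = feature concat
    return linear(fused, weights, bias)

# Hypothetical sizes: 8-dim skeleton feature, 4-dim sub-action embedding,
# 5 action classes.
D_SKEL, D_SUB, N_CLASSES = 8, 4, 5
W = [[random.uniform(-1, 1) for _ in range(D_SKEL + D_SUB)]
     for _ in range(N_CLASSES)]
b = [0.0] * N_CLASSES

skel = [random.uniform(-1, 1) for _ in range(D_SKEL)]  # stand-in GCN output
sub = [1.0, 0.0, 0.0, 0.0]  # e.g. one-hot embedding of a "reach" sub-action
logits = fuse_and_classify(skel, sub, W, b)
print(len(logits))
```

In the actual framework the skeleton features would come from a graph convolution backbone and the sub-action labels from a segmentation model; this toy version only shows the concatenate-then-project fusion pattern.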

Abstract

Understanding human actions is critical for advancing behavior analysis in human-robot interaction. Particularly in tasks that demand quick and proactive feedback, robots must recognize human actions as early as possible from incomplete observations. Sub-actions offer the semantic and hierarchical cues needed for this, since human actions are inherently structured and can be decomposed into smaller, meaningful units. However, conventional approaches focus primarily on holistic actions and often overlook the rich semantic structure embedded in sub-actions, making them poorly suited for early recognition. To address this gap, we introduce SASI (Sub-Action Semantics Integrated cross-modal fusion), a novel framework that integrates existing graph convolution networks to fuse spatiotemporal features with sub-action semantics. SASI combines a segmentation model with a traditional skeleton-based graph convolution network, capturing both fine-grained sub-action semantics and overall spatial context, while operating in real time at 29 Hz. Experiments on BABEL, a skeleton-based dataset with frame-level annotations, demonstrate that our method improves recognition accuracy over conventional approaches, with additional gains expected as the quality of sub-action segmentation improves. Notably, SASI also achieves superior performance in understanding partial action sequences, revealing its capability for early recognition, which is essential for proactive and seamless Human-Robot Interaction (HRI). Code is available at https://anonymous.4open.science/r/SASI.
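The early-recognition behavior the abstract highlights can be illustrated with a small, self-contained sketch (not from the paper): per-frame class scores are averaged over the observed prefix, and the robot commits to a prediction as soon as the softmax confidence passes a threshold, rather than waiting for the full sequence. The toy scores, the 3-class setup, and the 0.8 threshold are all invented for the demo.

```python
# Hedged sketch of early action recognition from a partial sequence:
# accumulate per-frame logits and decide once confidence is high enough.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def early_decision(frame_logits, threshold=0.8):
    """Return (predicted class, frames consumed) as soon as the softmax of
    the prefix-averaged logits exceeds the confidence threshold."""
    running = [0.0] * len(frame_logits[0])
    for t, logits in enumerate(frame_logits, start=1):
        running = [r + l for r, l in zip(running, logits)]
        probs = softmax([r / t for r in running])
        best = max(range(len(probs)), key=probs.__getitem__)
        if probs[best] >= threshold:
            return best, t  # commit early, before the action completes
    return best, t  # fall back to the full-sequence prediction

# Toy 3-class sequence in which class 2 becomes increasingly evident.
frames = [[0.1, 0.2, 0.5],
          [0.0, 0.1, 2.0],
          [0.0, 0.0, 5.0],
          [0.0, 0.0, 5.0]]
cls, used = early_decision(frames)
print(cls, used)  # commits to class 2 after 3 of 4 frames
```

A real system would feed fused skeleton/sub-action features through the trained classifier at each frame (here at 29 Hz); the point of the sketch is only the commit-as-early-as-confidence-allows decision rule.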