High-Speed Vision Improves Zero-Shot Semantic Understanding of Human Actions

arXiv cs.CV / 5/4/2026


Key Points

  • The paper studies how temporal resolution (frame rate) affects zero-shot semantic understanding of human actions from video, which is important for human-robot interaction in cases where labeled data is scarce.
  • It proposes a training-free pipeline that uses a pre-trained video-language model to produce semantic representations and then applies LLM-based reasoning to compare actions pairwise.
  • Experiments on kendo (a fast, subtle-motion domain) across 120 Hz, 60 Hz, and 30 Hz show that higher frame rates substantially improve semantic separability for rapid actions.
  • The study also examines how tracking-derived human joint information performs under full versus partial observations, finding that high-speed video yields more stable and interpretable semantics under a nearest-class prototype evaluation approach.
  • The results suggest that improving temporal fidelity can meaningfully enhance zero-shot action recognition without task-specific training, especially for fast, fine-grained motions.
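The frame-rate comparison described above (120 Hz vs. 60 Hz vs. 30 Hz) is typically run by temporally subsampling a single high-speed capture so that all conditions see the same footage. The paper does not release code, so the sketch below is only an illustration of that setup; the function name and array layout are my own assumptions.

```python
import numpy as np

def subsample_frames(frames: np.ndarray, src_hz: int, dst_hz: int) -> np.ndarray:
    """Simulate a lower frame rate by keeping every (src_hz // dst_hz)-th frame.

    Assumes dst_hz evenly divides src_hz (e.g. 120 -> 60 or 120 -> 30)
    and that `frames` has shape (T, H, W, C).
    """
    if src_hz % dst_hz != 0:
        raise ValueError("dst_hz must evenly divide src_hz")
    stride = src_hz // dst_hz
    return frames[::stride]

# One second of hypothetical 120 Hz video (tiny 8x8 grayscale frames).
clip_120 = np.zeros((120, 8, 8, 1))
clip_60 = subsample_frames(clip_120, 120, 60)  # 60 frames
clip_30 = subsample_frames(clip_120, 120, 30)  # 30 frames
```

Deriving the lower-rate clips from the same 120 Hz recording keeps content, lighting, and camera pose identical across conditions, so any difference in semantic separability can be attributed to temporal resolution alone.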

Abstract

Understanding human actions from visual observations is essential for human-robot interaction, particularly when semantic interpretation of unfamiliar or hard-to-annotate actions is required. In scenarios such as rapid and less common activities, collecting sufficient labeled data for supervised learning is challenging, making zero-shot approaches a practical alternative for semantic understanding without task-specific training. While recent advances in large-scale pretrained models enable such zero-shot reasoning, the impact of temporal resolution, especially for rapid and fine-grained motions, remains underexplored. In this study, we investigate how temporal resolution affects zero-shot semantic understanding of high-speed human actions. Using kendo as a representative case of rapid and subtle motion patterns, we propose a training-free pipeline that combines a pre-trained video-language model for semantic representation with large language model-based reasoning for pairwise action comparison. Through controlled experiments across multiple frame rates (120 Hz, 60 Hz, and 30 Hz), we show that higher temporal resolution significantly improves semantic separability in zero-shot settings. We further analyze the role of tracking-based human joint information under both full and partial observation scenarios. Quantitative evaluation using a nearest-class prototype strategy demonstrates that high-speed video provides more stable and interpretable semantic representations for fast actions. These findings highlight the importance of temporal resolution in training-free action recognition and suggest that high-speed perception can enhance semantic understanding capabilities.
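The "nearest-class prototype" evaluation mentioned in the abstract is a standard training-free protocol: average the embeddings of each class to form a prototype, then assign a query to the class whose prototype is most cosine-similar. The paper's exact implementation is not public; this is a minimal sketch under that common formulation, with all names my own.

```python
import numpy as np

def l2_normalize(x: np.ndarray) -> np.ndarray:
    """Scale vectors to unit length along the last axis."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def build_prototypes(embeddings: np.ndarray, labels: np.ndarray) -> dict:
    """Average each class's embeddings, then L2-normalize into a unit prototype."""
    return {c: l2_normalize(embeddings[labels == c].mean(axis=0))
            for c in np.unique(labels)}

def nearest_prototype(query: np.ndarray, prototypes: dict):
    """Return the class whose prototype has the highest cosine similarity to query."""
    q = l2_normalize(query)
    return max(prototypes, key=lambda c: float(q @ prototypes[c]))

# Toy 2-D embeddings standing in for video-language model features.
embs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([0, 0, 1, 1])
protos = build_prototypes(embs, labels)
pred = nearest_prototype(np.array([1.0, 0.2]), protos)  # -> class 0
```

Because the classifier is just cosine similarity to fixed class means, any gain in accuracy under this protocol reflects better separability of the underlying embeddings themselves, which is what makes it a clean probe for the effect of frame rate.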