Exploring High-Order Self-Similarity for Video Understanding

arXiv cs.CV · April 23, 2026

📰 News · Models & Research

Key Points

  • The paper proposes exploring higher-order space-time self-similarity (STSS) to capture richer temporal dynamics and shows that different STSS orders expose distinct motion-related aspects.
  • It introduces the Multi-Order Self-Similarity (MOSS) module, a lightweight neural component that learns and integrates multi-order STSS features.
  • The authors report that MOSS improves performance on multiple video tasks—including action recognition, motion-centric video VQA, and real-world robotic applications—while adding only marginal compute and memory overhead.
  • Extensive experiments indicate MOSS can act as a general temporal modeling module across diverse domains, with code and checkpoints planned for public release.

Abstract

Space-time self-similarity (STSS), which captures visual correspondences across frames, provides an effective way to represent temporal dynamics for video understanding. In this work, we explore higher-order STSS and demonstrate how STSSs at different orders reveal distinct aspects of these dynamics. We then introduce the Multi-Order Self-Similarity (MOSS) module, a lightweight neural module designed to learn and integrate multi-order STSS features. It can be applied to diverse video tasks to enhance motion modeling capabilities while incurring only marginal computational and memory overhead. Extensive experiments on video action recognition, motion-centric video VQA, and real-world robotic tasks consistently demonstrate substantial improvements, validating the broad applicability of MOSS as a general temporal modeling module. The source code and checkpoints will be made publicly available.
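To make the core idea concrete, here is a minimal sketch of what a first-order STSS computation might look like, and how a higher order can be obtained by reapplying the same operator to the resulting similarity tensor. This is an illustrative simplification, not the paper's implementation: the function name `stss_first_order`, the feature shape, and the restriction to same-position comparisons across a small temporal window are all assumptions for the sake of a short example.

```python
import numpy as np

def stss_first_order(feats, window=1):
    """Hypothetical sketch of first-order space-time self-similarity.

    feats: (T, H, W, C) array of per-frame features.
    Returns a (T, H, W, 2*window+1) tensor of cosine similarities between
    each spatio-temporal position and the same spatial position in
    neighboring frames (a simplified local-temporal variant; the paper's
    exact similarity neighborhood is not specified here).
    """
    # L2-normalize along channels so dot products become cosine similarities
    norm = feats / (np.linalg.norm(feats, axis=-1, keepdims=True) + 1e-8)
    T, H, W, _ = feats.shape
    sims = np.zeros((T, H, W, 2 * window + 1), dtype=feats.dtype)
    for i, dt in enumerate(range(-window, window + 1)):
        # Compare frame t with frame t+dt (endpoints wrap for brevity)
        shifted = np.roll(norm, shift=-dt, axis=0)
        sims[..., i] = np.sum(norm * shifted, axis=-1)
    return sims

# Under this simplification, a second-order STSS is just the same
# operator applied to the first-order similarity tensor:
feats = np.random.rand(8, 4, 4, 16).astype(np.float32)
s1 = stss_first_order(feats)   # first order: similarity of features
s2 = stss_first_order(s1)      # second order: similarity of similarities
```

In this reading, each additional order measures how the motion pattern itself changes over time, which matches the paper's claim that different STSS orders expose different motion-related aspects; MOSS would then learn to fuse these per-order features rather than hand-picking one.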