TRIMMER: A New Paradigm for Video Summarization through Self-Supervised Reinforcement Learning

arXiv cs.CV / 5/5/2026

📰 News · Models & Research

Key Points

  • The paper introduces TRIMMER, a self-supervised reinforcement learning framework designed to produce concise but semantically meaningful video summaries across domains with limited labeled data.
  • TRIMMER is trained in two stages: first learning robust representations via self-supervised learning, then performing spatio-temporal frame selection using reinforcement learning with information-theoretic reward functions.
  • Instead of similarity-based objectives, TRIMMER uses entropy-based metrics to better model higher-order temporal dynamics and semantic diversity, improving how long-range dependencies are captured.
  • Rewards are computed directly over the indices of selected frames, which reduces computational cost and helps the approach scale more efficiently.
  • Experiments on standard benchmarks show TRIMMER achieves state-of-the-art results among unsupervised/self-supervised methods and remains competitive with strong supervised approaches.
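The entropy-based, index-level reward described in the key points can be illustrated with a toy sketch: score a candidate summary by the average per-dimension entropy of the features of only the selected frames, so spread-out selections of a slowly varying video score higher than clumped ones. The function names, histogram binning, and sinusoidal features below are illustrative assumptions, not the paper's actual reward.

```python
import numpy as np

def entropy(counts, eps=1e-12):
    """Shannon entropy (nats) of a histogram of counts."""
    p = counts / counts.sum()
    return float(-(p * np.log(p + eps)).sum())

def diversity_reward(features, selected_idx, n_bins=16):
    """Entropy-based diversity reward computed only over selected frame
    indices (hypothetical formulation; the paper's reward may differ)."""
    sel = features[selected_idx]                    # (k, d) selected features
    ents = []
    for d in range(sel.shape[1]):
        hist, _ = np.histogram(sel[:, d], bins=n_bins, range=(-1.0, 1.0))
        if hist.sum() > 0:
            ents.append(entropy(hist.astype(float)))
    return float(np.mean(ents)) if ents else 0.0

# Slowly varying per-frame features: temporally nearby frames look alike.
t = np.arange(120)
feats = np.stack([np.sin(0.05 * (d + 1) * t) for d in range(4)], axis=1)

r_spread = diversity_reward(feats, np.arange(0, 120, 10))  # every 10th frame
r_clump = diversity_reward(feats, np.arange(0, 12))        # first 12 frames
```

Because the reward touches only the `k` selected frames rather than all pairwise frame similarities, its cost scales with the summary length, which is the efficiency point the paper highlights.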

Abstract

The rapid growth of video content across domains such as surveillance, education, and social media has made efficient content understanding increasingly critical. Video summarization addresses this challenge by generating concise yet semantically meaningful representations, but existing approaches often rely on expensive manual annotations, struggle to generalize across domains, and incur significant computational costs due to complex architectures. Moreover, unsupervised and weakly supervised methods typically underperform compared to supervised counterparts in capturing long-range temporal dependencies and semantic structure. In this work, we propose TRIMMER (Temporal Relative Information Maximization for Multi-objective Efficient Reinforcement), a novel self-supervised reinforcement learning framework for video summarization. TRIMMER operates in two stages: it first learns robust representations via self-supervised learning and then performs spatio-temporal decision making through reinforcement learning guided by information-theoretic reward functions. Unlike prior approaches that rely on similarity-based objectives, our method introduces entropy-based metrics to capture higher-order temporal dynamics and semantic diversity, while computing rewards directly over selected frame indices to improve computational efficiency. Extensive experiments on standard benchmarks demonstrate that TRIMMER achieves state-of-the-art performance among unsupervised and self-supervised methods, while remaining competitive with leading supervised approaches, highlighting its effectiveness for scalable and generalizable video summarization.
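The second stage, reinforcement-learning-based frame selection, can be sketched as a minimal REINFORCE loop over a per-frame Bernoulli selection policy. Everything here is an assumption for illustration: TRIMMER's actual policy network, optimizer, and information-theoretic reward are not specified in this summary, and a simple length-targeting reward stands in for them.

```python
import numpy as np

rng = np.random.default_rng(0)

def reinforce_step(w, feats, reward_fn, lr=0.1):
    """One REINFORCE update for a linear Bernoulli selection policy
    (hypothetical sketch, not TRIMMER's actual architecture)."""
    logits = feats @ w                               # (T,) per-frame scores
    probs = 1.0 / (1.0 + np.exp(-logits))            # selection probabilities
    actions = (rng.random(len(probs)) < probs).astype(float)
    reward = reward_fn(np.flatnonzero(actions), len(probs))
    # Gradient of the Bernoulli log-likelihood w.r.t. logits is (a - p);
    # REINFORCE scales it by the episode reward.
    w = w + lr * (feats.T @ (actions - probs)) * reward
    return w, reward

# Toy reward: prefer summaries keeping roughly 15% of the frames.
def length_reward(selected_idx, n_frames, target=0.15):
    return -abs(len(selected_idx) / n_frames - target)

feats = rng.normal(size=(120, 8))                    # 120 frames, 8-dim features
w = np.zeros(8)
for _ in range(50):
    w, r = reinforce_step(w, feats, length_reward)
```

In the paper's setting, the toy `length_reward` would be replaced by the entropy-based, index-level rewards described above, and the features would come from the first-stage self-supervised encoder rather than random noise.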