Static and Dynamic Graph Alignment Network for Temporal Video Grounding

arXiv cs.CV / 5/4/2026


Key Points

  • Temporal Video Grounding (TVG) seeks to match natural-language queries to the correct temporal segments within untrimmed videos, and recent GCN-based approaches build clip-level temporal graphs to improve reasoning.
  • Existing GCN methods are limited by using only static or only dynamic node features, constructing temporal graphs in a query-agnostic way, and relying on single-granularity semantic matching that can slow convergence and hurt precision.
  • The proposed Static and Dynamic Graph Alignment Network (SDGAN) builds two complementary temporal graphs using both static and dynamic visual features, then aligns nodes position-wise to form a richer representation.
  • SDGAN adds query-clip contrastive learning and adaptive graph modeling to make the temporal graph explicitly query-aware, improving alignment between visual clips and textual queries.
  • It further uses multi-granularity temporal proposals with a progressive easy-to-hard training strategy to connect coarse localization with fine boundary refinement, achieving better results on three benchmark datasets and releasing code/data on GitHub.
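The query-clip contrastive learning mentioned above can be sketched as a standard InfoNCE-style objective between a query embedding and the clip embeddings of a video. Everything below (function name, cosine-similarity scoring, temperature value) is an illustrative assumption, not a detail taken from the paper:

```python
import numpy as np

def query_clip_contrastive_loss(query, clips, positive_idx, temperature=0.1):
    """InfoNCE-style loss: pull the query embedding toward the clip
    inside the ground-truth moment, push it away from the other clips.

    query: (d,) query embedding; clips: (n, d) clip embeddings;
    positive_idx: index of the matching clip. Illustrative sketch only.
    """
    # Cosine similarity between the query and every clip, scaled by temperature
    q = query / np.linalg.norm(query)
    c = clips / np.linalg.norm(clips, axis=1, keepdims=True)
    sims = c @ q / temperature
    # Softmax cross-entropy with the ground-truth clip as the positive
    sims -= sims.max()                      # numerical stability
    probs = np.exp(sims) / np.exp(sims).sum()
    return -np.log(probs[positive_idx])

rng = np.random.default_rng(0)
clips = rng.normal(size=(8, 16))
query = clips[3] + 0.01 * rng.normal(size=16)   # query embedding near clip 3
loss_pos = query_clip_contrastive_loss(query, clips, positive_idx=3)
loss_neg = query_clip_contrastive_loss(query, clips, positive_idx=5)
assert loss_pos < loss_neg  # correct pairing scores a lower loss
```

In practice such a loss would be computed over a batch and combined with the localization objectives; here a single query against one video's clips keeps the idea visible.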

Abstract

Temporal Video Grounding (TVG) aims to localize temporal moments in an untrimmed video that semantically correspond to given natural language queries. Recently, Graph Convolutional Networks (GCNs) have been widely adopted in TVG to model temporal relations among video clips and to enhance contextual reasoning by constructing clip-level graphs. Despite their effectiveness, existing GCN-based TVG methods face three critical bottlenecks: 1) most construct graph nodes from either static or dynamic features alone, yielding incomplete visual representations that overlook complementary semantics; 2) most build temporal graphs in a query-agnostic manner, leading to inefficient feature interaction within the temporal graph representation; and 3) most rely on single-granularity semantic matching, and direct training on the complex temporal localization task can lead to slow convergence and suboptimal precision. To address these challenges, we propose the Static and Dynamic Graph Alignment Network (SDGAN). First, SDGAN jointly exploits static and dynamic visual features to construct two complementary temporal graphs and performs Position-wise Nodes Alignment, enabling a more expressive and robust visual representation. Second, SDGAN introduces Query-Clip Contrastive Learning and Adaptive Graph Modeling to explicitly align visual clips with their corresponding textual queries, yielding query-aware visual representations. Third, SDGAN incorporates multi-granularity temporal proposals within a Progressive Easy-to-Hard Training Strategy, effectively bridging coarse-grained semantic localization and fine-grained temporal boundary refinement. Extensive experiments on three benchmark datasets demonstrate that SDGAN achieves superior performance across complex TVG scenarios. Code and datasets are available at https://github.com/ZhanJieHu/SDGAN.
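The multi-granularity temporal proposals in the abstract can be sketched as sliding windows over the clip sequence at several scales, enumerated coarse-to-fine so that a progressive easy-to-hard schedule can begin with long, easy-to-match windows before moving to short ones that demand precise boundaries. The window scales and half-window strides below are illustrative assumptions, not the paper's settings:

```python
def multi_granularity_proposals(n_clips, scales=(8, 4, 2)):
    """Enumerate sliding-window proposals at several temporal
    granularities, coarsest (longest window) first.

    n_clips: number of clips in the video; scales: window lengths in
    clips, ordered coarse-to-fine. Returns (start, end) clip index
    pairs with end exclusive. Scales and strides are illustrative.
    """
    proposals = []
    for w in scales:                 # coarse windows first (the "easy" stage)
        stride = max(1, w // 2)      # half-window overlap between proposals
        for start in range(0, n_clips - w + 1, stride):
            proposals.append((start, start + w))
    return proposals

props = multi_granularity_proposals(n_clips=8)
# The first proposal covers the whole 8-clip video: props[0] == (0, 8);
# later proposals shrink to 4- and then 2-clip windows.
```

A curriculum could then train on the coarse proposals first and progressively mix in the fine-grained ones, which matches the easy-to-hard intuition of bridging coarse localization and fine boundary refinement.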