VIRST: Video-Instructed Reasoning Assistant for SpatioTemporal Segmentation

arXiv cs.CV / 3/31/2026


Key Points

  • The paper introduces VIRST, an end-to-end Video-Instructed Reasoning Assistant designed for Referring Video Object Segmentation that addresses failures of keyframe-based RVOS pipelines on fast motion and reasoning-heavy queries.
  • VIRST unifies global video reasoning with pixel-level mask prediction in a single model rather than coupling a vision-language model with a separate propagation module.
  • The Spatio-Temporal Fusion (STF) module bridges semantic and segmentation representations by injecting segmentation-aware video features into the vision-language backbone.
  • A Temporal Dynamic Anchor Updater maintains temporally adjacent anchor frames to provide stable temporal cues despite large motion, occlusion, and object reappearance.
  • Experiments report state-of-the-art performance across multiple RVOS benchmarks and strong generalization for both referring and reasoning-oriented settings, with code and checkpoints released on GitHub.
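The Spatio-Temporal Fusion (STF) idea from the key points — injecting segmentation-aware video features into the vision-language backbone — could plausibly be realized as a cross-attention step in which language-model tokens attend to segmentation features. The sketch below is a minimal NumPy illustration under that assumption; the function name `stf_fuse`, the shapes, and the residual formulation are all hypothetical and not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def stf_fuse(vlm_tokens, seg_features):
    """Hypothetical STF-style fusion: vision-language tokens (queries)
    attend to segmentation-aware video features (keys/values), and the
    attended result is added back residually.

    vlm_tokens:   (num_tokens, dim) token embeddings from the VLM backbone
    seg_features: (num_feats, dim)  segmentation-aware video features
    """
    dim = vlm_tokens.shape[-1]
    # Scaled dot-product attention scores: (num_tokens, num_feats)
    attn = softmax(vlm_tokens @ seg_features.T / np.sqrt(dim), axis=-1)
    # Inject segmentation information into the token stream.
    return vlm_tokens + attn @ seg_features
```

With this shape convention, the output keeps the token layout of the backbone, so the fused tokens could in principle replace the originals downstream.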

Abstract

Referring Video Object Segmentation (RVOS) aims to segment target objects in videos based on natural language descriptions. However, fixed keyframe-based approaches that couple a vision-language model with a separate propagation module often fail to capture rapidly changing spatiotemporal dynamics and to handle queries requiring multi-step reasoning, leading to sharp performance drops on motion-intensive and reasoning-oriented videos beyond static RVOS benchmarks. To address these limitations, we propose VIRST (Video-Instructed Reasoning Assistant for Spatio-Temporal Segmentation), an end-to-end framework that unifies global video reasoning and pixel-level mask prediction within a single model. VIRST bridges semantic and segmentation representations through the Spatio-Temporal Fusion (STF) module, which fuses segmentation-aware video features into the vision-language backbone, and employs the Temporal Dynamic Anchor Updater to maintain temporally adjacent anchor frames that provide stable temporal cues under large motion, occlusion, and reappearance. This unified design achieves state-of-the-art results across diverse RVOS benchmarks under realistic and challenging conditions, demonstrating strong generalization to both referring and reasoning-oriented settings. The code and checkpoints are available at https://github.com/AIDASLab/VIRST.
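The Temporal Dynamic Anchor Updater described in the abstract — maintaining temporally adjacent anchor frames as stable cues under motion, occlusion, and reappearance — suggests a sliding set of recent, confidently segmented frames. The following pure-Python sketch is one plausible reading of that behavior; the class name, the confidence-threshold policy, and the fixed-size buffer are assumptions for illustration only, not the paper's actual algorithm.

```python
from collections import deque

class TemporalAnchorUpdater:
    """Hypothetical anchor maintenance: keep a small buffer of the most
    recent frames whose predicted mask was confident, so the anchors stay
    temporally adjacent to the current frame even after occlusions."""

    def __init__(self, num_anchors=3, conf_threshold=0.5):
        self.conf_threshold = conf_threshold
        # Each entry is (frame_idx, feature); maxlen evicts the oldest anchor.
        self.anchors = deque(maxlen=num_anchors)

    def update(self, frame_idx, feature, confidence):
        # Low-confidence frames (e.g. during occlusion) are skipped, so the
        # buffer retains the last reliable appearances of the target.
        if confidence >= self.conf_threshold:
            self.anchors.append((frame_idx, feature))
        return list(self.anchors)
```

In use, frames observed during an occlusion simply leave the anchor set unchanged, and once the object reappears with a confident mask, the buffer resumes tracking the most recent reliable views.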