
SPARROW: Learning Spatial Precision and Temporal Referential Consistency in Pixel-Grounded Video MLLMs

arXiv cs.AI / 3/16/2026


Key Points

  • SPARROW introduces Target-Specific Tracked Features (TSF) to inject temporally aligned referent cues during training and a dual-prompt design that decodes box and segmentation tokens to fuse geometric priors with semantic grounding for pixel-grounded video MLLMs.
  • It operates end-to-end without external detectors, leveraging a SAM2-based proposer, and has been integrated into three open-source video MLLMs (UniPixel, GLUS, VideoGLaMM) with consistent performance gains.
  • The approach is evaluated on a curated referential video dataset of 30,646 videos and 45,231 Q&A pairs, achieving gains of up to +8.9 J&F on RVOS, +5 mIoU on visual grounding, and +5.4 CLAIR on GCG.
  • Overall, SPARROW substantially improves referential stability, spatial precision, and temporal coherence in pixel-grounded video understanding, signaling stronger temporally consistent grounding for video AI systems.
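The dual-prompt idea in the second bullet can be pictured with a toy sketch: the LLM emits hidden states at a [BOX] token and a [SEG] token, a box head turns the former into a geometric prior, and that prior biases the per-pixel mask logits produced from the latter. Everything below (shapes, head definitions, the additive-bias fusion) is an illustrative assumption, not SPARROW's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, W = 16, 8, 8  # hidden dim and toy mask resolution (illustrative sizes)

# Hypothetical hidden states the LLM would emit at the [BOX] and [SEG]
# token positions (names and shapes are assumptions, not SPARROW's API).
h_box = rng.normal(size=D)
h_seg = rng.normal(size=D)

# Box head: project the [BOX] hidden state to a normalized (x1, y1, x2, y2).
W_box = rng.normal(size=(4, D)) * 0.1
x1, y1, x2, y2 = 1.0 / (1.0 + np.exp(-(W_box @ h_box)))  # sigmoid to [0, 1]
x1, x2 = sorted((x1, x2))
y1, y2 = sorted((y1, y2))

# Seg head: dot the [SEG] embedding with per-pixel features for mask logits.
pixel_feats = rng.normal(size=(H, W, D))
mask_logits = pixel_feats @ h_seg

# Fuse geometric prior with semantic grounding: bias logits toward the box.
ys = (np.arange(H) + 0.5) / H
xs = (np.arange(W) + 0.5) / W
inside = ((ys[:, None] >= y1) & (ys[:, None] <= y2) &
          (xs[None, :] >= x1) & (xs[None, :] <= x2))
fused_logits = mask_logits + np.where(inside, 2.0, -2.0)  # additive box prior
mask = fused_logits > 0
print(mask.shape)
```

The additive bias is one simple way to let a coarse box stabilize a per-pixel decoder; the paper's actual fusion mechanism may differ.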

Abstract

Multimodal large language models (MLLMs) have advanced from image-level reasoning to pixel-level grounding, but extending these capabilities to videos remains challenging as models must achieve spatial precision and temporally consistent reference tracking. Existing video MLLMs often rely on a static segmentation token ([SEG]) for frame-wise grounding, which provides semantics but lacks temporal context, causing spatial drift, identity switches, and unstable initialization when objects move or reappear. We introduce SPARROW, a pixel-grounded video MLLM that unifies spatial accuracy and temporal stability through two key components: (i) Target-Specific Tracked Features (TSF), which inject temporally aligned referent cues during training, and (ii) a dual-prompt design that decodes box ([BOX]) and segmentation ([SEG]) tokens to fuse geometric priors with semantic grounding. SPARROW is supported by a curated referential video dataset of 30,646 videos and 45,231 Q&A pairs and operates end-to-end without external detectors via a class-agnostic SAM2-based proposer. Integrated into three recent open-source video MLLMs (UniPixel, GLUS, and VideoGLaMM), SPARROW delivers consistent gains across six benchmarks, improving up to +8.9 J&F on RVOS, +5 mIoU on visual grounding, and +5.4 CLAIR on GCG. These results demonstrate that SPARROW substantially improves referential stability, spatial precision, and temporal coherence in pixel-grounded video understanding. Project page: https://risys-lab.github.io/SPARROW
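The Target-Specific Tracked Features described above can be sketched in miniature: a referent feature is matched against each frame's visual tokens, the attended result becomes that frame's temporally aligned cue, and the feature is propagated forward so the cue follows the target across time. The attention-based tracker, the blending rule, and all shapes here are hypothetical stand-ins (the paper uses a SAM2-based proposer), purely to illustrate the idea.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, D = 4, 6, 16  # frames, visual tokens per frame, feature dim (toy sizes)

frame_tokens = rng.normal(size=(T, N, D))  # hypothetical per-frame visual tokens
target_feat = rng.normal(size=D)           # referent feature from the query

cues = []
for t in range(T):
    # Match the tracked referent against this frame's tokens; dot-product
    # attention stands in for a real tracker such as SAM2.
    scores = frame_tokens[t] @ target_feat / np.sqrt(D)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    frame_cue = weights @ frame_tokens[t]            # temporally aligned cue
    target_feat = 0.5 * target_feat + 0.5 * frame_cue  # propagate across time
    cues.append(frame_cue)

tsf = np.stack(cues)  # one referent cue per frame, injectable during training
print(tsf.shape)
```

Propagating the referent feature frame-to-frame is what gives the cues temporal consistency, which is the property the [SEG]-only baseline lacks when objects move or reappear.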