AI Navigate

Towards Visual Query Segmentation in the Wild

arXiv cs.CV / March 11, 2026

Tools & Practical Usage · Models & Research

Key Points

  • The paper introduces visual query segmentation (VQS), a new paradigm that segments all pixel-level occurrences of an object in an untrimmed video given a visual query, improving over existing visual query localization (VQL) methods that only locate the last appearance with bounding boxes.
  • A large-scale benchmark dataset named VQS-4K is presented, consisting of 4,111 videos, over 1.3 million frames, and 222 object categories, with spatial-temporal masklets for each queried target that were manually annotated and iteratively refined (a sketch of this sample format follows the list).
  • The authors propose VQ-SAM, an extension of SAM 2 that exploits target-specific and background-distractor cues through a multi-stage framework with an adaptive memory generation (AMG) module to improve segmentation accuracy.
  • Experimental results on VQS-4K show that VQ-SAM significantly outperforms existing approaches, demonstrating the effectiveness of the proposed method and setting a new standard for future research in visual query segmentation.
  • The dataset, code, and results will be publicly released to promote further study and practical applications beyond current visual query localization paradigms.
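
The exact data schema is defined in the paper; as a rough illustration of the task interface, a VQS sample pairs an untrimmed search video with a visual query (a frame outside that video plus its target mask) and a ground-truth masklet. Below is a minimal sketch assuming NumPy arrays; all field names are hypothetical, and the masklet IoU metric shown is a common choice for mask benchmarks, not necessarily the one used in the paper.

```python
# Hypothetical sketch of a VQS sample; the benchmark's real schema is not
# specified in this summary.
from dataclasses import dataclass
import numpy as np

@dataclass
class VQSQuery:
    """Visual query: a frame outside the search video plus its target mask."""
    query_frame: np.ndarray  # (H, W, 3) uint8 RGB image
    query_mask: np.ndarray   # (H, W) bool mask of the queried object

@dataclass
class VQSSample:
    """One benchmark item: an untrimmed search video and its visual query."""
    frames: np.ndarray       # (T, H, W, 3) uint8 video frames
    query: VQSQuery
    gt_masklet: np.ndarray   # (T, H, W) bool; all-False frames mark absence

def frame_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU of two boolean masks; defined as 1.0 when both are empty."""
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0
    return float(np.logical_and(pred, gt).sum() / union)

def masklet_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean per-frame IoU over the whole video (a common masklet metric)."""
    return float(np.mean([frame_iou(p, g) for p, g in zip(pred, gt)]))
```

Under this schema, a method's output is simply a (T, H, W) boolean array that can be scored against gt_masklet with masklet_iou.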


arXiv:2603.08898 (cs)
[Submitted on 9 Mar 2026]

Title: Towards Visual Query Segmentation in the Wild

Abstract: In this paper, we introduce visual query segmentation (VQS), a new paradigm of visual query localization (VQL) that aims to segment all pixel-level occurrences of an object of interest in an untrimmed video, given an external visual query. Compared to existing VQL, which locates only the last appearance of a target with bounding boxes, VQS enables more comprehensive (i.e., all object occurrences) and more precise (i.e., pixel-level masks) localization, making it more practical for real-world scenarios. To foster research on this task, we present VQS-4K, a large-scale benchmark dedicated to VQS. Specifically, VQS-4K contains 4,111 videos with more than 1.3 million frames and covers a diverse set of 222 object categories. Each video is paired with a visual query, defined by a frame outside the search video and its target mask, and annotated with spatial-temporal masklets corresponding to the queried target. To ensure high quality, all videos in VQS-4K are manually labeled with meticulous inspection and iterative refinement. To the best of our knowledge, VQS-4K is the first benchmark specifically designed for VQS. Furthermore, to stimulate future research, we present a simple yet effective method, named VQ-SAM, which extends SAM 2 by leveraging target-specific and background distractor cues from the video to progressively evolve the memory through a novel multi-stage framework with an adaptive memory generation (AMG) module, significantly improving performance on VQS. In our extensive experiments on VQS-4K, VQ-SAM achieves promising results and surpasses all existing approaches, demonstrating its effectiveness. With the proposed VQS-4K and VQ-SAM, we expect to go beyond the current VQL paradigm and to inspire future research and practical applications on VQS. Our benchmark, code, and results will be made publicly available.
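
The abstract describes the AMG module only at a high level: the memory evolves over the video using both target-specific and background-distractor cues. The toy loop below illustrates that general idea, scoring per-frame candidates against a growing bank of target and distractor embeddings and admitting only confident predictions into memory; the cosine scoring rule, the threshold, and all names are placeholder assumptions, not the paper's actual module.

```python
# Toy illustration of a memory that evolves with target and distractor cues.
# This is NOT the paper's AMG module; everything here is a placeholder
# assumption chosen to make the general idea concrete.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

class AdaptiveMemory:
    """Hypothetical memory bank holding target and distractor cues."""

    def __init__(self, query_embedding: np.ndarray, tau: float = 0.7):
        self.target = [query_embedding]  # cues known to depict the target
        self.distractors = []            # cues from confusing look-alikes
        self.tau = tau                   # confidence threshold for admission

    def score(self, candidate: np.ndarray) -> float:
        """Target similarity minus the strongest distractor similarity."""
        pos = max(cosine(candidate, t) for t in self.target)
        neg = max((cosine(candidate, d) for d in self.distractors), default=0.0)
        return pos - neg

    def update(self, candidates: list[np.ndarray]) -> int | None:
        """Pick the best per-frame candidate and evolve the memory.

        Returns the index of the chosen candidate, or None if no candidate
        is confident enough (i.e., the target is likely absent this frame).
        """
        if not candidates:
            return None
        scores = [self.score(c) for c in candidates]
        best = int(np.argmax(scores))
        if scores[best] < self.tau:
            return None
        self.target.append(candidates[best])  # reinforce target memory
        self.distractors.extend(              # remember rejected distractors
            c for i, c in enumerate(candidates) if i != best
        )
        return best
```

Keeping distractor cues alongside target cues lets the score explicitly penalize look-alike background objects, which is the intuition the abstract points to.
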
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.08898 [cs.CV]
  (or arXiv:2603.08898v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.08898

Submission history

From: Bing Fan
[v1] Mon, 9 Mar 2026 20:09:04 UTC (8,700 KB)