AI Navigate

Does the Question Really Matter? Training-Free Data Selection for Vision-Language SFT

arXiv cs.AI / 3/11/2026

Ideas & Deep Analysis · Models & Research

Key Points

  • The paper introduces CVS, a training-free data selection method for vision-language instruction tuning that identifies samples requiring true cross-modal reasoning by measuring changes in answer validity when conditioning on the question.
  • CVS uses a frozen vision-language large model (VLLM) as an evaluator, avoiding costly proxy model training and filtering out samples that rely on linguistic shortcuts or contain semantic conflicts.
  • Experiments on Vision-Flan and The Cauldron show that CVS outperforms full-data training while using far less data, improving accuracy by up to 4.8% with only 15% of the samples, and substantially reduces selection cost compared with prior methods (17.3% and 44.4% less compute than COINCIDE and XMAS, respectively).
  • CVS is robust across diverse datasets and provides a more effective and efficient approach to sample selection in vision-language supervised fine-tuning (SFT), advancing multimodal learning efficiency.

Computer Science > Artificial Intelligence

arXiv:2603.09715 (cs)
[Submitted on 10 Mar 2026]

Title: Does the Question Really Matter? Training-Free Data Selection for Vision-Language SFT

Abstract: Visual instruction tuning is crucial for improving vision-language large models (VLLMs). However, many samples can be solved via linguistic patterns or common-sense shortcuts, without genuine cross-modal reasoning, limiting the effectiveness of multimodal learning. Prior data selection methods often rely on costly proxy model training and focus on difficulty or diversity, failing to capture a sample's true contribution to vision-language joint reasoning. In this paper, we propose CVS, a training-free data selection method based on the insight that, for high-quality multimodal samples, introducing the question should substantially alter the model's assessment of answer validity given an image. CVS leverages a frozen VLLM as an evaluator and measures the discrepancy in answer validity with and without conditioning on the question, enabling the identification of samples that require vision-language joint reasoning while filtering semantic-conflict noise. Experiments on Vision-Flan and The Cauldron show that CVS achieves solid performance across datasets. On Vision-Flan, CVS outperforms full-data training by 3.5% and 4.8% using only 10% and 15% of the data, respectively, and remains robust on the highly heterogeneous Cauldron dataset. Moreover, CVS reduces computational cost by 17.3% and 44.4% compared to COINCIDE and XMAS.
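The core idea, as described in the abstract, can be sketched as a simple scoring-and-ranking procedure. The sketch below is an illustration only: the abstract does not specify the exact scoring function, so the signed log-probability difference, the `TOY_VALIDITY` lookup table standing in for a frozen VLLM evaluator, and all function names are assumptions made for the example.

```python
import math

# Toy stand-in for a frozen VLLM evaluator. The real method would query the
# model for how valid an answer is given the inputs; here a small lookup
# table (an assumption for illustration) plays that role. A question of
# None means "image-only conditioning".
TOY_VALIDITY = {
    # (image, question, answer): validity probability
    ("img_dog", None,           "a dog"): 0.60,   # guessable without the question
    ("img_dog", "What is it?",  "a dog"): 0.62,
    ("img_chart", None,             "42%"): 0.05, # needs the question to verify
    ("img_chart", "Max bar value?", "42%"): 0.90,
}

def validity_logprob(image, question, answer):
    """Log of the (toy) evaluator's answer-validity probability."""
    return math.log(TOY_VALIDITY[(image, question, answer)])

def cvs_score(image, question, answer):
    """Discrepancy in answer validity with vs. without the question.

    A large gap suggests the sample genuinely requires vision-language
    joint reasoning; a near-zero gap suggests a linguistic or
    common-sense shortcut. (The exact form of the score is assumed here.)
    """
    return (validity_logprob(image, question, answer)
            - validity_logprob(image, None, answer))

def select_top_fraction(samples, fraction):
    """Keep the highest-scoring `fraction` of samples (e.g. 0.10 or 0.15)."""
    ranked = sorted(samples, key=lambda s: cvs_score(*s), reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return ranked[:k]

samples = [
    ("img_dog", "What is it?", "a dog"),
    ("img_chart", "Max bar value?", "42%"),
]
# With fraction=0.5, only the chart sample survives: its validity jumps
# once the question is introduced, so its CVS score is far larger.
print(select_top_fraction(samples, 0.5))
```

Because no proxy model is trained, the cost of selection is one forward-style evaluation per sample per conditioning mode, which is consistent with the paper's claim of reduced computational cost relative to training-based selection methods.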
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.09715 [cs.AI]
  (or arXiv:2603.09715v1 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2603.09715

Submission history

From: Peng Sun
[v1] Tue, 10 Mar 2026 14:23:38 UTC (10,003 KB)